AI regulation must weigh the harms and the benefits of the technology. Harms that regulation might seek to legislate against include the potential for AI to discriminate against disadvantaged communities and the development of autonomous weapons. But the benefits of AI, like better cancer diagnosis and smart climate modeling, might not exist if AI regulation were too heavy-handed. Sensible AI regulation would maximize the technology's benefits and mitigate its harms.
But with the US competing with China and Russia to achieve “AI supremacy” – a clear technological advantage over rivals – regulations have thus far taken a back seat. This, according to the UN, has thrust us into “unacceptable moral territory”.
Read more: Does regulating artificial intelligence save humanity or stifle innovation?
AI researchers and governance bodies, such as the EU, have called for urgent regulations to prevent the development of unethical AI. Yet the EU’s white paper on the issue has acknowledged that it’s difficult for governance bodies to know which AI races will end with unethical AI and which with beneficial AI.
The UN is concerned that autonomous weapons, like those featured in the Terminator franchise, are being developed. USA-Pyon/Shutterstock
We wanted to know which AI races should be prioritized for regulation, so our team created a theoretical model to simulate hypothetical AI races. We then ran this simulation in hundreds of iterations, tweaking variables to predict how real-world AI races might pan out.
Our model includes a number of virtual agents representing competitors in an AI race – like different technology companies, for example. Each agent was randomly assigned a behavior, mimicking how these competitors would behave in a real AI race. For example, some agents carefully check their data and test for AI pitfalls, while others take undue risks by skipping these tests.
The model itself was based on evolutionary game theory, which has been used in the past to understand how behaviors evolve on the scale of societies, people, or even our genes. The model assumes that winners in a particular game – in our case, an AI race – take all the benefits, as biologists argue happens in evolution.
By introducing regulations into our simulation – sanctioning unsafe behavior and rewarding safe behavior – we could then observe which regulations were successful in maximizing benefits and which ended up stifling innovation.
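To make the idea concrete, here is a minimal sketch of this kind of model. It is an illustrative toy, not the authors' actual simulation: the payoff values, the unsafe agents' win probability, and the use of simple replicator dynamics (a standard evolutionary game theory update, where more successful behaviors spread through the population) are all assumptions made for the example. A sanction is modeled as a deduction from an unsafe winner's prize.

```python
def replicator(x, win_unsafe, sanction, prize=10.0, steps=400, lr=0.05):
    """Evolve the population share x of 'safe' agents via replicator dynamics.

    Agents are paired off in winner-take-all races. A safe agent beats
    another safe agent half the time; against an unsafe agent it wins
    with probability 1 - win_unsafe. Regulation subtracts `sanction`
    from the prize whenever an unsafe agent wins.
    """
    for _ in range(steps):
        # Expected payoff of a safe agent against the current population mix.
        f_safe = x * prize / 2 + (1 - x) * (1 - win_unsafe) * prize
        # Expected payoff of an unsafe agent (sanction hits unsafe winners).
        f_unsafe = (x * win_unsafe * (prize - sanction)
                    + (1 - x) * (prize - sanction) / 2)
        # Replicator update: the better-paid behavior grows in share.
        x += lr * x * (1 - x) * (f_safe - f_unsafe)
        x = min(max(x, 0.0), 1.0)
    return x

# Starting from a 50/50 population where unsafe agents win 80% of races:
share_no_regulation = replicator(0.5, win_unsafe=0.8, sanction=0.0)
share_with_sanction = replicator(0.5, win_unsafe=0.8, sanction=5.0)
```

With no sanction, risk-taking pays and the safe behavior is driven out; with a large enough sanction, safe behavior spreads instead – the basic effect the simulation was built to measure.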
The variable we found to be particularly important was the “length” of the race – the time our simulated races took to reach their objective (a functional AI product). When AI races reached their objective quickly, we found that the competitors we’d coded to always overlook safety precautions always won.
In these quick AI races, or “AI sprints,” the competitive advantage is gained by being speedy, and those who pause to consider safety and ethics always lose out. It would make sense to regulate these AI sprints so that the AI products they conclude with are safe and ethical.
On the other hand, our simulation found that long-term AI projects, or “AI marathons,” require regulations less urgently. That’s because the winners of AI marathons weren’t always those who overlooked safety. Plus, we found that regulating AI marathons prevented them from reaching their potential. This looked like stifling over-regulation – the sort that could actually work against society’s interests.
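The race-length effect can be illustrated with a small hypothetical simulation. Again, this is a sketch under assumed parameters, not the published model: a safe agent makes slow but reliable progress each step, while an unsafe agent moves faster but its risky shortcuts fail some of the time. The progress rates (0.7 vs 1.0) and failure probability (40%) are invented for illustration.

```python
import random

SAFE, UNSAFE = "safe", "unsafe"

def time_to_finish(behaviour, length, rng):
    """Number of steps for one agent to complete a race of the given length."""
    progress, t = 0.0, 0
    while progress < length:
        t += 1
        if behaviour == SAFE:
            progress += 0.7              # slower, but every step counts
        elif rng.random() < 0.6:         # risky shortcut: fails 40% of the time
            progress += 1.0
    return t

def unsafe_win_rate(length, trials=2000, seed=0):
    """Fraction of head-to-head races a risk-taking agent wins outright."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        if time_to_finish(UNSAFE, length, rng) < time_to_finish(SAFE, length, rng):
            wins += 1
    return wins / trials

sprint_rate = unsafe_win_rate(3)      # short race: variance favours risk-takers
marathon_rate = unsafe_win_rate(100)  # long race: failed shortcuts accumulate
```

In short races the risk-taker often gets lucky and finishes first, while over long races its failures pile up and the careful agent reliably wins – which is why, in this toy version too, sprints are where unsafe behavior pays off.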
Read more: AI’s current hype and hysteria could set the technology back by decades.
Given these findings, it’ll be important for regulators to establish how long different AI races are likely to last, applying different regulations based on their expected timescales. Our findings suggest that one rule for all AI races – from sprints to marathons – will lead to some outcomes that are far from ideal.
It’s not too late to put together smart, flexible regulations to avoid unethical and dangerous AI while supporting AI that could benefit humanity. But such rules may be urgent: our simulation suggests that those AI races that are due to end soon will be the most important to regulate.