This article is the result of a collaboration between philosopher Seth Lazar, AI impacts researcher Arvind Narayanan, and fast.ai’s Jeremy Howard. At fast.ai we believe that planning for our future with AI is a complex undertaking, one that requires bringing together cross-disciplinary expertise.

This is the year extinction risk from AI went mainstream. It has featured in leading publications, been invoked by 10 Downing Street, and been mentioned in a White House AI Strategy document. But a powerful group of AI technologists thinks it still isn’t being taken seriously enough. They have signed a statement that claims: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

“Global priorities” should be the most important, and urgent, problems that humanity faces. 2023 has seen a leap forward in AI capabilities, which undoubtedly brings new risks, including perhaps increasing the probability that some future AI system will go rogue and wipe out humanity. But we are not convinced that mitigating this risk is a global priority. Other AI risks are as important, and are much more urgent.

Start with the statement’s focus on risks “from AI”. The phrase is ambiguous, but it implies the threat of an autonomous rogue agent. What about the risks posed by people who negligently, recklessly, or maliciously use AI systems? Whatever harms we fear a rogue AI might cause, they will be far more likely, and arrive far sooner, at the hands of a “rogue human” acting with AI’s assistance.

Indeed, focusing on this particular threat might exacerbate the more likely risks. The history of technology to date suggests that the greatest risks come not from the technology itself, but from the people who control it and use it to accumulate power and wealth. The AI industry leaders who have signed this statement are precisely the people best positioned to do just that. And in calling for regulations to address the risks of future rogue AI systems, they have proposed interventions that would further cement their power. We should be wary of Prometheans who want both to profit from bringing the people fire and to be trusted as the firefighters.

And why focus on extinction in particular? Bad as it would be, as the preamble to the statement notes, AI poses other serious societal-scale risks. And global priorities should be not only important, but urgent. We’re still in the middle of a global pandemic, and Russian aggression in Ukraine has made nuclear war an imminent threat. Catastrophic climate change, not mentioned in the statement, has very likely already begun. Is the threat of extinction from AI equally pressing? Do the signatories believe that existing AI systems or their immediate successors might wipe us all out? If they do, then the industry leaders signing this statement should immediately shut down their data centres and hand everything over to national governments. The researchers should stop trying to make existing AI systems safe, and instead call for their elimination.

We think that, in fact, most signatories to the statement believe that runaway AI is still some way off, and that it will take a significant scientific advance to get there—one that we cannot anticipate, even if we are confident it will someday occur. If this is so, then at least two things follow.

First, we should give more weight to serious risks from AI that are more urgent. Even if existing AI systems and their plausible extensions won’t wipe us out, they are already causing concentrated harms, they are sure to exacerbate inequality, and, in the hands of power-hungry governments and unscrupulous corporations, they will undermine individual and collective freedom. We can mitigate these risks now—we don’t have to wait for some unpredictable scientific advance to make progress. They should be our priority. After all, why would we have any confidence in our ability to address risks from future AI if we won’t do the hard work of addressing those that are already with us?

Second, instead of alarming the public with ambiguous projections about the future of AI, we should focus less on what we should worry about, and more on what we should do. The possibly extreme risks from future AI systems should be part of that conversation, but they should not dominate it. We should start by acknowledging that the future of AI—perhaps more so than that of pandemics, nuclear war, and climate change—is fundamentally within our collective control. We need to ask, now, what kind of future we want that to be. This doesn’t just mean soliciting input on what rules god-like AI should be governed by. It means asking whether there is, anywhere, a democratic majority for creating such systems at all.

And we should focus on building institutions that both reduce existing AI risks and put us in a robust position to address new ones as we learn more about them. This definitely means applying the precautionary principle, and taking concrete steps where we can to anticipate as-yet-unrealised risks. But it also means empowering voices and groups underrepresented on this AI power list—many of whom have long been drawing attention to the societal-scale risks of AI without receiving anything like the same recognition. Building on their work, let’s focus on the things we can study, understand and control—the design and real-world use of existing AI systems, their immediate successors, and the social and political systems of which they are part.