I’ve been trying not to write about AI, but it follows you around at the moment. And the recent “statement”, in which 1,000 tech leaders, academics, and others say at telegrammatic length that they’re concerned about the long-term risks of AI, is certainly a mysterious thing.
AI’s a noisy area right now, so just to be clear: this statement is not the same thing as the more specific open letter that was signed by Elon Musk, Steve Wozniak, and others back in March. That was headlined: “We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
All 22 words
In case you’ve not been paying attention, here’s the statement in full, all twenty-two words of it:
“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”
When you look at what it says, and who has signed it, it reads a bit like the bloke outside a bar who is desperate not to get into a fight, yelling “Hold me back, hold me back”.

Someone must act
The first obvious objection is that most of the people who have signed the statement have, between them, the resources and the social capital to organise the process of taking such claimed risks more seriously.
Meredith Whittaker, squeezed out of Google for raising ethical issues, and now President of the Signal Foundation, made this point in Fast Company:
“Let’s be real: These letters calling on ‘someone to act’ are signed by some of the few people in the world who have the agency and power to actually act to stop or redirect these efforts,” Whittaker says. “Imagine a U.S. president putting out a statement to the effect of ‘would someone please issue an executive order?’”
Myopic
The second objection is simply whether it’s true. There are plenty of computer scientists out there who say that the prospect of an “extinction-level risk” from AI is non-existent, unless we’re going to die out as a species under the weight of terrible prose.
The third is that the list of “societal scale risks” that AI is suddenly ranged alongside—pandemics, nuclear war—seems myopic given the risks posed by climate change and biodiversity loss. The effective altruists have a blind spot about this as well, but it makes you think that the tech bros need to get out more.
I’m generally a believer in the principle of Occam’s Razor, which says that you should prefer simpler explanations to more complex ones. So I think it’s likely that the technology community (a) does believe this; (b) is culturally not attuned to listening to voices outside its own groupthink; and (c) found that this was the most complicated statement it could agree on.
Distracting attention
But it’s also worth noting that plenty of people have suggested more cynical explanations. The easiest way to summarise these is that invoking “the risk of extinction” is a kind of “look over there” strategy, distracting attention from the real short-run harms that AI could cause.
Fast Company quoted a tweet from the University of Washington law professor Ryan Calo:
“If AI threatens humanity, it’s by accelerating existing trends of wealth and income inequality, lack of integrity in information, and exploiting natural resources.”
Real harms
And Meredith Whittaker makes a related point in the article:
Whittaker believes that such statements focus the debate on long-term threats that future AI systems might pose, while distracting from discussion of the very real harms that current AI systems can cause: worker displacement, copyright infringement, and privacy violations, to name just a few. Whittaker points out that a significant body of research already shows the harms of AI in the near term—and the need for regulation.
At his blog, summarising his Bloomberg column, the economist Tyler Cowen suggested that, whatever the arguments about the quality of the statement, it was terrible politics. In one line, his argument is:
Sometimes publicity stunts backfire.
Narrow representation
He has several notes on this. The first is the risk of using words like “extinction”, which attract the attention of the security establishment.
The second, as discussed above, is that many of the people who have signed the statement are influential enough to agree an industry-wide code for managing the risks they perceive. (This isn’t a system like the finance sector, where you need someone else to turn off the music so that everyone stops dancing.)
The third is the narrowness of the representation—mostly California and Seattle, with a flavour from Toronto and the UK. If you’re trying to build a political campaign, you’d need to be a bit broader.
Then there’s the brevity of the statement itself, which raises the obvious question (“Is that all there is?”):
Perhaps this is a bold move, and it will help stimulate debate and generate ideas. But an alternative view is that the group could not agree on anything more. There is no accompanying white paper or set of policy recommendations.
Figuring it out
And finally, the very shortness of the statement, combined with the complete absence of supporting documents, undermines its credibility. In most areas where industry and technical experts agree that something is an issue, they can also agree—at the very least—on some first steps to take to do something about it:
If some well-known and very smart players in a given area think the world might end but make no recommendations about what to do about it, might you decide just to ignore them altogether? (“Get back to me when you’ve figured it out!”) What if a group of scientists announced that a large asteroid was headed toward Earth? I suspect they would have some very specific recommendations, on such issues as how to deflect the asteroid and prepare defenses.
In contrast, the earlier open letter ran to the best part of two pages, and also came with a white paper’s worth of policy statements, even if they were published slightly later.
But this time around, not so much. Here we have a big Chicken Licken-type statement (“the sky is falling down”), but we don’t get even another 22 words on where to start dealing with it. Chicken Licken? Well, it turns out that Chicken Licken’s situational analysis wasn’t good, and things didn’t turn out well for the poor creature, or for most of its friends.
