The second post from my Just Two Things newsletter on Artificial Intelligence.

Yesterday’s post here mentioned Google’s removal of its leading AI ethics researchers over an academic paper on which they were co-authors—and that this had turned a welcome spotlight on the subject. The paper is now available, and there’s a good summary by John Naughton in one of his Observer columns.
The paper provides a useful critical review of machine-learning language models (LMs) like GPT-3, which are trained on enormous amounts of text and are capable of producing plausible-looking prose. The amount of computation (and associated carbon emissions) involved in their construction has ballooned to insane levels, and so at some point it’s sensible to ask the question that is never asked in the tech industry: how much is enough?
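To make the “plausible-looking prose” point concrete, here is a minimal sketch, assuming Python and the Hugging Face transformers library; GPT-3 itself is only reachable through a paid API, so the much smaller, openly downloadable GPT-2 stands in here:

```python
# Minimal sketch: generating plausible-looking prose from a pretrained language model.
# GPT-3 is not openly downloadable, so the smaller GPT-2 stands in for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The ethical risks of ever-larger language models include"
result = generator(prompt, max_new_tokens=40, do_sample=True)

# The output reads fluently, but nothing guarantees it is accurate or unbiased:
print(result[0]["generated_text"])
```

The output is fluent; whether it is true or harmless is another matter, which is rather the paper’s point.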
Costs and risks
The paper itself is academic, reasonably concise, and not too technical, at least once you get through the acronyms at the front. The conclusion summarises the argument in this way:
We have identified a wide variety of costs and risks associated with the rush for ever larger LMs, including: environmental costs (borne typically by those not benefiting from the resulting technology); financial costs, which in turn erect barriers to entry, limiting who can contribute to this research area and which languages can benefit from the most advanced techniques; opportunity cost, as researchers pour effort away from directions requiring less resources; and the risk of substantial harms, including stereotyping, denigration, increases in extremist ideology, and wrongful arrest.
The speech of the privileged
The ethical issues of AI and the datasets used to train it have been reasonably well covered elsewhere. We know that language models replicate the biases of the text they learn from, and that this text is more likely to reflect the speech of those who are already privileged, and to further marginalise those who are not. The language models “run the risk of ‘value-lock’, where the LM-reliant technology reifies older, less-inclusive understandings.” There is a detailed discussion in the paper of the multiple layers involved in this process, and a memorable quote:
We recall again Birhane and Prabhu’s words (inspired by Ruha Benjamin): “Feeding AI systems on the world’s beauty, ugliness, and cruelty, but expecting it to reflect only the beauty is a fantasy.”
Documentation debt
The immense scale is also an issue. The paper notes the risk of “documentation debt”, where sheer size means that the datasets behind the machine learning are both undocumented and too large to be documented afterwards:
While documentation allows for potential accountability, undocumented training data perpetuates harm without recourse. Without documentation, one cannot try to understand training data characteristics in order to mitigate some of these attested issues, or even unknown ones.
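As an illustration only (the paper recommends documentation practices rather than any particular format, and these field names are my own, hypothetical ones), a minimal record of training-data characteristics might look something like this:

```python
# Hypothetical, minimal sketch of documenting a training dataset's characteristics,
# so that later audits have something to work from. Field names are illustrative only.
from dataclasses import dataclass

@dataclass
class DatasetDocumentation:
    name: str
    sources: list               # e.g. web crawls, forums, books
    collection_period: str      # when the text was gathered
    languages: list             # which languages are represented, and roughly in what share
    filtering_applied: str      # what was removed, and why
    known_gaps_and_biases: str  # who is over- or under-represented

doc = DatasetDocumentation(
    name="example-web-corpus",
    sources=["web crawl subset", "curated news sites"],
    collection_period="2016-2019",
    languages=["en (approx. 90%)", "other (approx. 10%)"],
    filtering_applied="keyword-based removal of adult content; deduplication",
    known_gaps_and_biases="over-represents English-speaking, internet-connected users",
)
print(doc)
```

The point is not the format but the fact that, at the scale of today’s language models, even this much is rarely written down.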
A one-billion-dollar training bill
The cost of training one of these big models has been less well covered. Exponential View linked to a trade industry article which said that in five years’ time a large language model could well cost $1 billion to train. If that seems like a lot, the sums suggest that the hardware alone needed to drive one of these big language models could easily be in the $150-170 million range.
And if that still seems high: some researchers who’d been trying separately to model the same costs tweeted that they’d reached a figure of a billion dollars—but then decided that the number seemed far too high, and so they fudged their numbers back down to $100 million. It was honest of them to own up.

Ecological issues
Of course, this price tag suggests to me that only certain people are going to play in this space—and that they will be under strong pressures to monetise their investment, which may mean that ethics vanish pretty quickly.
On environmental impact, The Register reported last year that the ecological impact of training an AI model had increased by a factor of 300,000 between 2012 and 2018. (This seems high to me, but I can’t get access to the original article to check it.) The same researchers estimated the carbon cost of training GPT-3 at 85 tonnes of CO2, or the equivalent of driving a car to the moon and back. The size of LMs continues to grow quickly, so more recent models are likely to have a much larger footprint.
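As a rough sanity check on the “car to the moon and back” comparison (my own back-of-envelope arithmetic, using assumed round figures of roughly 384,000 km to the Moon and about 0.11 kg of CO2 per kilometre for an average petrol car):

```latex
% Back-of-envelope check with assumed figures: ~384,000 km to the Moon, ~0.11 kg CO2/km for a typical car
2 \times 384{,}000\ \text{km} \times 0.11\ \text{kg CO}_2/\text{km} \approx 84{,}500\ \text{kg} \approx 85\ \text{tonnes of CO}_2
```

So the comparison is at least in the right ballpark for a single training run of a GPT-3-scale model.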

Ethical issues
As Naughton observes, succinctly: “Current machine-learning systems have ethical issues the way rats have fleas.”
What I took away from the paper, although its authors don’t say this, is that the public harm from these large language models seems to be spiralling, with little likelihood of comparable public benefit. And this now seems to be spilling over into activism. John Naughton, again, notes that there is now
“a student campaign – #recruitmenot – aimed at dissuading fellow students from joining tech companies. After all, if they wouldn’t work for a tobacco company or an arms manufacturer, why would they work for Google or Facebook?”
Incidentally, if you are wondering why the former Google researcher Margaret Mitchell is credited as Shmargaret Shmitchell on the paper, it is because Google prevented her from putting her real name to it. Enough said, probably.