AI Panic is Baby's First Colonialism

Western men are grappling with their first experience of colonialism in the only way they know how – assuming they're the first ones here!

By now, we’re all fairly familiar with the AI Doomer cult, along with its associated faces and talking points. Day after day, the AI hype cycle is consistently overrun by open letters, interviews, speculative fiction, and late-stage career crises, all concerned with a supposed AI “apocalypse”. I won’t aim here to debunk and dispel the central arguments held up by that movement’s biggest leaders or question the toxic ideological roots of the movement itself, nor will this be an indictment of the virtues of AI “safety” research versus AI “ethics” research. Instead, this is an opportunity to engage in a long-overdue contextualization: what, precisely, is it that fuels today’s explosive interest in preventing a so-called AI apocalypse? In a nutshell, I believe that some Western men are grappling with what is likely their first experience of colonialism, in the only way they know how: assuming they’re the first ones here!

Some background on AI & Colonialism

Framing AI through a lens of colonialism is not a novel idea at this point. We have long been aware of the historical relationships between technical or scientific progress and colonial power structures (as illustrated by "American Progress"), so the notion that the AI industry would (inadvertently or intentionally) oppress historically marginalized peoples should not be a surprise. What seems to have been a bit more unexpected for some people, however, is that AI would so quickly pose a perceived threat to white-collar professions and the creative arts – bastions of Western economy and culture.

It was clear from the start of the “Big Data” approach to AI that pursuing this technology at scale would only proceed through an extended campaign of extraction and exploitation. You may at this point be familiar with the contentious use of scraped data in large language models, copyrighted work in generative art models, obscene levels of carbon emissions from model training, and the embarrassingly underpaid data labellers who make all AI work possible in the first place.

Beyond this, our big tech companies are famously allergic to paying the bare minimum in taxes (listen to David Moscrop on Paris Marx's podcast, Tech Won't Save Us, at ~21:00), throw tantrums at even the hint of regulation, and have been the beneficiaries of a staggering volume of subsidies. Without this free capital and preferential treatment, AI research and development wouldn’t be anywhere near where it is today. In return, it seems that the tech giants are competing to be the first to topple our (already brittle and underfunded) systems of education, healthcare, and arts.

A large contributor to AI hype is a fear of falling behind. We all love making fun of influencers on LinkedIn, but we do also have to continue to recognize that companies like OpenAI actively encourage the panic. Tech companies want you to think that it might all be a “little too fast and a bit too cool and tbh not sure you can handle it bro”, so that regulation can be left to the technocrats while they warn you that what’s coming over the horizon will leave your world looking unrecognizable – from your professional life and your social life, all the way down to your leisure and creative pursuits.

Let’s sum it up! Massive, poorly regulated corporations used land (see https://www.thegreenwebfoundation.org/news/the-politics-of-data-centers/), capital (see https://www.theguardian.com/business/2021/may/31/silicon-six-tech-giants-accused-of-inflating-tax-payments-by-almost-100bn), and cultural artifacts, exploited labour, and circumvented local laws in order to grow rich and develop their products, all of which are now poised to disrupt traditional ways of living, siphon resources from other projects, put pressure on the most vulnerable members of our societies, and all but gut our modes of creative expression. I hate to say it, but this sounds like colonialism to me! (Colonialism isn't just when it happens to brown people! Go to https://en.wikipedia.org/wiki/Colonialism to start learning more about the various kinds of colonialism.)

OK doomer

What’s the point that I’m trying to make here? If I were to be charitable, I would say that the leaders of the AI Doomerism/Risk/Safety/whatever-you-want-to-call-it movement have correctly identified a problem with AI being a potential force of oppression, and some proponents even allude to fears of colonialism as part of their concerns. They just aren’t the first ones to see it, and their approach to contending with it seems very… amateur? When your Hintons, Russells, Bostroms, and Yudkowskys see Western culture (I'm borrowing some reactionary panic from Jordan Peterson here, e.g. https://www.youtube.com/watch?v=ziurppCPfEg) threatened by the tech world’s hubris, they fail to diagnose the actual problem! How can one even begin to discuss “existential risk” without working with communities who have historically faced oppression and colonialism? Of course they would react by centering themselves, over-intellectualizing (see LessWrong at your own risk), and developing their own literature! Working in an ideological vacuum and focusing on apocalypse scenarios sidelines addressing patterns of colonialism, which would require valuing the contributions of Indigenous peoples, Black people, people of colour worldwide, women and non-binary people, queer people, disabled people, people of different socioeconomic status, et cetera (h/t to Dr. Sasha Luccioni: "Me, waiting for Hinton to cite anyone who's not a white man in his talk 💀") – something the STEM world is not known for doing.

This infographic has been making the rounds on Twitter:

Let’s call a spade a spade, eh?

In chatting with Aviya Skowron, I think they hit the nail on the head when they characterized the AI Doomer thought process as, “Let me derive this from first principles.” The folks leading the movement have a child’s understanding of what oppression is, and an inability to admit that they are woefully unsuited to be the subject matter experts here. In future work I will draw a stronger link between common AI Doomerism arguments and White reactions to colonialism, but for now I’ll conclude with my own 22-word statement.

Addressing the issues posed by Artificial Intelligence will require turning to the readily available wealth of scholarship and expertise on resisting colonialism.