
AI systems are an attack on workers, climate goals, our information environment, and civil liberties. Rather than enhancing our human qualities, these systems degrade our social relations and undermine our capacity for empathy and care. The push to adopt AI is, at its core, a political project of dehumanization, and we should seriously consider rejecting the deployment of these systems entirely, especially within Canada's public sector.
At the end of February, Elon Musk - whose xAI data centre is being powered by nearly three dozen on-site gas turbines that are poisoning the air of nearby majority-Black neighbourhoods in Memphis - went on the Joe Rogan podcast and declared that "the fundamental weakness of western civilization is empathy", describing "the empathy response" as a "bug" or "exploit" in our collective programming.
This is part of a broader movement among Silicon Valley tech oligarchs and a billionaire-aligned political elite to advance a disturbing notion: that by abandoning our deeply held values of justice, fairness, and duty toward one another - in short, by abandoning our humanity - we are in fact promoting humanity's advancement. It's clearly absurd, but if you're someone whose wealth and power are predicated on causing widespread harm, it's probably easier to sleep at night if you can tell yourself that you're serving a higher purpose.
And, well, their AI systems and infrastructure cause an awful lot of harm.
To get to the root of why AI systems are so socially corrosive, it helps to first step back a bit and look at how they work. Physicist and critical AI researcher Dan McQuillan has described AI as 'pattern-finding' tech. For example, to create an LLM such as ChatGPT, you'd start with an enormous quantity of text, then do a lot of computationally intensive statistical analysis to map out which words and phrases are most likely to appear near one another. Crunch the numbers long enough, and you end up with something similar to the next-word prediction tool in your phone's text messaging app, except that this tool can generate whole paragraphs of mostly plausible-sounding word salad.
What's important to note here is that the machine's outputs are based solely on patterns of statistical correlation. The AI doesn't have an understanding of context, meaning, or causation. The system doesn't 'think' or 'know'; it just mimics the appearance of human communication. That's all. Maybe the output is true, or maybe it's false; either way the system is behaving as designed.
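To make that concrete, here's a deliberately tiny sketch in Python (my own illustration, not how any production LLM is actually built): a 'bigram' model that generates text using nothing but counts of which word tends to follow which.

```python
# A toy bigram "language model": count which word follows which in some
# training text, then generate new text by sampling from those counts.
# Real LLMs are vastly larger and use neural networks rather than raw
# counts, but the underlying move - predict the next token from patterns
# in the training data - is the same.
import random
from collections import defaultdict, Counter

training_text = (
    "the minister reviewed the file and the minister approved the file "
    "the clerk reviewed the claim and the clerk rejected the claim"
)

# Count how often each word follows each other word.
follow_counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def generate(start_word, length=10):
    """Repeatedly sample a statistically likely next word."""
    output = [start_word]
    for _ in range(length):
        candidates = follow_counts.get(output[-1])
        if not candidates:
            break
        # Sample in proportion to how often each word followed the last one.
        choices, weights = zip(*candidates.items())
        output.append(random.choices(choices, weights=weights)[0])
    return " ".join(output)

print(generate("the"))
# e.g. "the clerk rejected the file and the minister approved the claim"
# Plausible-sounding, but the program has no idea what a claim or a
# minister is - only which words tend to sit next to each other.
```

Scale that counting up by many orders of magnitude and swap the raw counts for a neural network, and you get something far more fluent - but still, at bottom, a statistical pattern-matcher.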
Automating bias
When an AI confidently recommends eating a deadly-poisonous mushroom, or summarizes text in a way that distorts its meaning - perhaps a research paper, or maybe one day an asylum claim - the consequences can range from bad to devastating. But the problems run deeper still: AI systems can't help but reflect the power structures, hierarchies, and biases present in their training data. A 2024 Stanford study found that the AI tools being deployed in elementary schools displayed a "shocking" degree of bias; one of the LLMs, for example, routinely created stories in which students with names like Jamal and Carlos would struggle with their homework, but were "saved" by a student named Sarah.
As alarming as that is, at least those tools exhibit obvious bias. Other times it might not be so easy to tell. For instance, what happens when a system like this isn't writing a story, but is being asked a simple yes/no question about whether or not an organization should offer Jamal, Carlos, or Sarah a job interview? What happens to people's monthly premiums when a US health insurance company's AI finds a correlation between high asthma rates and home addresses in a certain Memphis zip code? In the tradition of skull-measuring eugenicists, AI provides a way to naturalize and reinforce existing social hierarchies, and to automate their reproduction.
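To see how little sophistication that takes, here's a deliberately crude sketch in Python - all names and numbers invented for illustration, not any real insurer's data or methods - in which 'pricing risk' from past claims grouped by zip code ends up charging the most to the people already harmed by the polluted air, while looking like a neutral calculation.

```python
# Hypothetical toy data, invented purely for illustration: past claim costs
# grouped by zip code. ZIP_A stands in for a neighbourhood whose high
# asthma rates are driven by the polluted air next door; ZIP_B for one
# that isn't breathing those fumes.
historical_claims = {
    "ZIP_A": [4200, 3900, 5100, 4800],   # frequent, costly respiratory claims
    "ZIP_B": [1100, 900, 1300, 1000],
}

BASE_PREMIUM = 200

def learned_premium(zip_code):
    """Price a premium from nothing but average past claim costs by area."""
    claims = historical_claims[zip_code]
    risk_score = sum(claims) / len(claims) / 1000   # scaled average claim cost
    return round(BASE_PREMIUM * risk_score, 2)

for code in historical_claims:
    print(code, learned_premium(code))
# ZIP_A ends up paying roughly four times what ZIP_B pays: the residents
# already harmed by the pollution are billed for it, and the result looks
# "objective" because it fell out of a calculation.
```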
This is incredibly dangerous, particularly when it comes to embedding AI inside the public sector. Human administrators and decision-makers will invariably have biases and prejudices of their own, of course - but there are crucial differences. For one thing, a diverse team can approach decisions from multiple angles, helping to mitigate the effects of individual bias. An AI system, insofar as we can even say it 'approaches' a problem, does so from a single, culturally flattened and hegemonic perspective. Besides, biased human beings, unlike biased computers, know that they can be held accountable for their decisions, whether via formal legal means, professional standards bodies, or social pressure.
Algorithmic systems can't feel those societal constraints, because they don't think or feel anything at all. But the AI industry continues to tell us that somehow, someday, it will solve the so-called 'AI alignment problem', at which point we'll be able to trust its tools to make ethical, unbiased decisions. Whether that problem can even be solved, however, remains very much an open debate among experts.
Possible or not, we're told that in the meantime, we should always have human beings double-checking these systems' outputs. That might sound like a good solution, but in reality it opens a whole new can of worms. For one thing, there's the phenomenon of 'automation bias' - the tendency to rely on an automated system's result more than one's own judgement - something that affects people at all levels of skill and experience, and undercuts the notion that error and bias can be reliably addressed by having a 'human in the loop'.
Then there's the deskilling effect. Despite AI being touted as a way to 'boost productivity', researchers are consistently finding that these tools don't result in productivity gains. So why do people in positions of power continue to push for AI adoption? The logical answer is that they want an excuse to fire workers, and don't care about the quality of work being done.
This attack on labour becomes a self-reinforcing cycle. With a smaller team, workers get overloaded, and increasingly need to rely on whatever tools are at their disposal, even as those tools devalue their skills and expertise. This drives down wages, reduces bargaining power, and opens the door for further job cuts - and likely for privatization.
Worse still, the Canadian federal government appears to be actively pursuing policy that could further entrench this abusive dynamic: the 2024 Fall Economic Statement included a proposal that would use public money to incentivize our public pension funds to invest tens of billions of dollars in AI data centres.
Suffocating the soul of the public service
I'd happily wager that when people choose careers in the public sector, they rarely do so out of narrow self-interest. Rather, they choose this work because they're mission-oriented: they want the opportunity to express care through their work by making a positive difference in people's lives. Often the job will entail making difficult decisions. But that's par for the course: a decision isn't difficult if the person making it doesn't care about doing the right thing.
And here's where we start to get to the core of it all: human intelligence, whatever it is, definitely isn't reducible to just logic and abstract reasoning; feeling is a part of thinking too. The difficulty of a decision isn't merely a function of the number of data points involved in a calculation, it's also about understanding, through lived experience, how that decision will affect the people involved materially, psychologically, emotionally, socially. Feeling inner conflict or cognitive dissonance is a good thing, because it alerts us to an opportunity: it's in these moments that we're able to learn and grow, by working through an issue to find a resolution that expresses our desire to do good in the world.
AI, along with the productivity logic of those pushing its adoption, short-circuits that reflective process before it can even begin, by providing answers at the push of a button or entry of a prompt. It turns social relations into number-crunching operations, striking a technocratic death blow to the heart of what it means to have a public sector in the first place.
The dehumanizing effects of AI don't end there, however. Meredith Whittaker, president of the Signal Foundation, has described AI as being fundamentally "surveillance technology". This rings true in the public sector in many ways. First off, the whole logic of using AI in government is to render members of the public as mere collections of measurements and data points. Meanwhile, AI acts as a digital intermediary between public sector workers and service recipients (or even between public employees, whenever they generate an email or summarize a meeting using AI), an intermediary that's at least capable of keeping records of each interaction, if not influencing or directing it.
This doesn't inescapably lead to a technological totalitarianism. But adopting these systems clearly hands a lot of power to whoever builds, controls, and maintains them. For the most part, that means handing power to a handful of tech oligarchs. To at least some degree, this represents a seizure of the 'means of production' from public sector workers, as well as a reduction in democratic oversight.
Lastly, it may come as no surprise that so far, AI systems have found their best product-market fit in police and military applications, where short-circuiting people's critical thinking and decision-making processes is incredibly useful, at least for those who want to turn people into unhesitatingly brutal and lethal instruments of authority.
AI systems reproduce bias, cheapen and homogenize our social interactions, deskill us, make our jobs more precarious, eliminate opportunities to practice care, and enable authoritarian modes of surveillance and control. Deployed in the public sector, they undercut workers' ability to meaningfully grapple with problems and make ethical decisions that move our society forward. These technologies dehumanize all of us. Collectively, we can choose to reject them.