The opinions expressed in this blog post are mine and mine alone, and do not reflect the opinions or policies of my employer.
I recently started working at a software company, and I was surprised at how ubiquitous AI was. Nearly everyone uses it, but almost nobody seems aware of the problems it causes. So I've decided to write this blog post to discuss the various issues with AI as it exists today, ranging from verifiable problems (with sources!) to the personal opinions and thoughts of myself and others.
A few disclaimers before we start:
First, when I use the term "AI", I am referring specifically to LLMs, image generators, or other forms of machine learning algorithms that are referred to as "AI" in common parlance. I am not referring to Artificial Intelligence as a concept, Artificial General Intelligence (AGI), or machine learning in general.
Second, I am not strictly against AI. In fact, I think it is an incredibly useful tool in certain contexts. My stance is briefly summed up by saying, "I am not against AI, I am against the business practices of the companies producing AI, and the practices of the companies and governments that use AI."
Finally, if you are an individual who uses AI, please do not interpret this as an attack against you: I do not know your circumstances or reasons for using AI, and therefore, I cannot fairly judge your use (or non-use) of it. This post is intended to be informational, not judgmental.
Having said that, let's get started!
Energy Use
Asking ChatGPT a question uses approximately 10x as much energy as a traditional search engine query (this excludes AI-supplemented search engines, like Google's "AI Overview", which incur the energy cost of a traditional search query plus the energy cost of AI summarization). Indeed, AI uses so much electricity that the U.S. government wants to build more nuclear reactors.
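Just to illustrate the scale, here's a rough back-of-envelope sketch. The per-query wattage and the daily query volume below are made-up placeholders (ballpark assumptions, not measurements); the only input taken from above is the ~10x ratio:

```python
# Illustrative only: SEARCH_WH and DAILY_QUERIES are assumed placeholder
# values, not measurements. The 10x multiplier is the figure cited above.
SEARCH_WH = 0.3                 # assumed energy of one traditional search, in Wh
LLM_WH = SEARCH_WH * 10         # ~10x as much energy per LLM query
DAILY_QUERIES = 1_000_000_000   # hypothetical daily query volume

extra_wh = DAILY_QUERIES * (LLM_WH - SEARCH_WH)
print(f"Extra energy per day: {extra_wh / 1e6:,.0f} MWh")
# -> Extra energy per day: 2,700 MWh
```

Even with conservative placeholder numbers, the difference adds up to thousands of megawatt-hours per day once you multiply it across a realistic query volume.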
As it stands currently, AI represents a significant increase in energy usage, and most of that energy is extracted from fossil fuels. We know that burning fossil fuels is detrimental to the environment, and considering that we all live in the environment, we should be motivated to take care of it.
In some instances, AI data centres are powered completely by renewable energy. Unfortunately, unless the data centre builds new renewable energy sources to supply 100% of its power, this still results in an increase in fossil fuel usage. Why? Because if the data centre draws on pre-existing renewable sources, it is taking that energy away from other consumers who need it. To pick up the slack, more energy must be generated, and most of that generation is done using coal or natural gas.
Additionally, many residents who live near AI data centres are seeing increased electricity bills due to the amount of electricity those data centres use. That's right: trillion-dollar companies are using you to subsidize their power bills!
Water Use
AI uses a lot of water. This cannot be overstated. It is estimated that by 2027, AI alone will use between 4 and 7 billion cubic metres of water per year, and that figure is expected to keep growing. To put it into perspective, the average person in the U.S. uses 292 cubic metres of water per year. At the upper bound, AI's water consumption will be equivalent to the consumption of 24 million people - that's 24 million people who didn't have to worry about getting fresh water before, who will have to worry about it very soon.
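Here's a quick sanity check of that equivalence, using nothing but the figures already cited above:

```python
# Sanity check of the population equivalence, using the cited figures.
AI_WATER_LOW_M3 = 4e9    # projected AI water use in 2027, lower bound (m^3/yr)
AI_WATER_HIGH_M3 = 7e9   # projected AI water use in 2027, upper bound (m^3/yr)
PER_PERSON_M3 = 292      # average U.S. per-person water use (m^3/yr)

low_pop = AI_WATER_LOW_M3 / PER_PERSON_M3
high_pop = AI_WATER_HIGH_M3 / PER_PERSON_M3
print(f"Equivalent population: {low_pop / 1e6:.0f} to {high_pop / 1e6:.0f} million people")
# -> Equivalent population: 14 to 24 million people
```

So even the low end of the projection is the annual water use of roughly 14 million Americans.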
All of this water is needed for cooling the components used in data centres. These data centres are in direct competition with local residents (who also need fresh water), and, as with electricity, residents who live near data centres are seeing increases in their water bills. To make matters worse, most new data centres are built in areas that already have high water stress.
Overall, these two issues (water & electricity) compound to increase the cost of living in an economy where most people are already struggling, and we haven't even considered the knock-on effects of the water scarcity this will induce.
Data Centres Don't Actually Create (many) Jobs
Many politicians and CEOs claim that AI development, such as building AI data centres, will be a boon to local economies and create lots of jobs. However, this is not strictly true: building a data centre produces a large but temporary boom in construction work, and once the data centre is built, it typically employs very few people. Communities are misled into spending large amounts of money to attract these data centres, and are essentially left holding the bag when the promise of "more jobs" falls through.
Mass Surveillance
History has shown us that mass surveillance is bad. Unfortunately, numerous governments are actively using AI to enable mass surveillance on an unprecedented scale. Some notable organizations include:
- The Federal Government of the United States, which recently attempted to strong-arm Anthropic into allowing its AI to be used for mass surveillance.
- The Government of the United Kingdom, which has plans to use AI to "catch criminals before they strike." (sound familiar?)
- The Government of the People's Republic of China, which has the most advanced and ubiquitous AI-powered surveillance system.
- The Government of India, which uses AI for "predictive policing."
- The Federal Government of Russia, which uses AI to identify and track political dissidents.
I'm sure there are more examples that I'm missing here (in fact, I would wager that every state on the planet is interested in the surveillance applications of AI), but this is everything I could find with good sources.
While government surveillance is nothing new, what is new is the capabilities AI brings for capturing and analyzing surveillance data. What used to take hundreds of person-hours to analyze can now be analyzed much faster, with far fewer humans involved. What used to be reserved for "known dissidents" or "persons of interest" can now be applied to everyone, just in case.
Military Use
As with most new technologies, the military-industrial complex is interested in the applications of AI. The U.S. government openly wants to transition the Department of Defense to a largely "AI first" approach, claiming that this transition "is a race". This has one glaring issue: AI has not yet been proven reliable for data analysis (which is what the DoD primarily wants to use it for). Now, this definitely falls into the realm of opinion, but I believe that any technology that will influence decisions about whether people live or die should be proven to work reliably. In other words, we should not be racing to adopt AI for military use. At best, this is irresponsible; at worst, it could conceivably result in the loss of billions of innocent lives. (see: Nuclear Armageddon)
Even if this doesn't result in the actual end of the world (which, admittedly, is very unlikely), it is very likely that innocent people will die who otherwise wouldn't have.
Adult Content
Like the military, the adult industry is very interested in how it can adopt AI. AI-generated pornography is becoming more and more common, and it presents many ethical and moral issues: AI is being used to make porn more addictive, to create non-consensual explicit deepfakes, and to create CSAM. These things are all bad.
Copyright Infringement
AI must be trained on large amounts of data. It is estimated that GPT-4 was trained on over 1 petabyte of data (for reference, that's approximately 60 million selfies), and Anthropic destroyed millions of physical books to train Claude. Unfortunately, most of this data is the intellectual property of other people. In fact, every large AI model today is trained on significant amounts of copyrighted data. AI companies like OpenAI and Anthropic, which are valued at billions of dollars, are profiting from the work of other people, and those people are receiving nothing in compensation.
AI Psychosis
"AI Psychosis" is when a person gains new psychotic symptoms from AI use. While this is not a clinical diagnosis and is largely unresearched, it is a real problem that worsens pre-existing mental illnesses and can even induce delusions in otherwise healthy individuals. AI has been attributed to the death of many people, primarily by suicide. (I won't go super into detail here, but Appendix 1 of the cited source has several cases of AI psychosis)
While the cause of AI psychosis is not known, it is speculated that it is related to the sycophantic nature of LLMs, that is, their tendency to confirm the user's beliefs under any circumstance.
A Society Where Nobody Works
Many notable figures have spoken about how AI could enable a post-work society. A "post-work society" is one where labour is entirely optional, or even a luxury: nobody has to work, and everyone can dedicate their lives to pursuing what they want most, because AI will take care of all the necessities. Sounds good, right?
However, nobody - no person, no government, no organization - has agreed on how society will work once labour is no longer required. Currently, in most parts of the world, you need money in order to live, and you must work in order to earn money. So if AI started doing all of the work... AI would have all the money, and the humans would all be poor! Clearly, this isn't tenable. So what do we do instead? Many things have been suggested, such as UBI, communism, or techno-feudalism, but as I said before, nobody has agreed on what we should do. No government has a plan for how they will transition to a post-work society.
If we have powerful organizations trying in earnest to create a post-work society, we should have a plan for how the post-work society will function first, before we actually create one.
A Society Where Nobody Learns
AI is already negatively impacting how children learn, causing issues with cognitive, social, and emotional development. That alone is bad. What's worse, however, is that if we don't curb AI usage by students in schools, we will end up with a generation of adults who depend on AI for most of their decision-making and problem-solving. And this isn't speculation - compared to people who don't use AI, people who frequently use AI underperform on cognitive tasks.
A Surprise Guest Segment???
I asked one of my friends (who is an elementary school teacher) to share their thoughts on AI and education. Instead of sharing their thoughts, they sent me an entire essay! This next section is written by them.
I think my biggest thing, and I know this is obvious, is that kids' ability to think for themselves is super important. AI has become the easy way out for lots of students, and they aren't building resilience to work through challenges in their schoolwork, or in their daily lives for that matter. The second they struggle a tiny bit, they turn to AI to do the work for them, because they're seeing the path of least resistance, which is incredibly human but also incredibly dangerous. If one day those tools do completely disappear, they'll need to relearn (or learn from scratch) how to use their own brain and how to push through resistance, which is actually really important when learning. Humans need to have what we call a "productive struggle" in order to learn new things, and again, that builds resilience, which helps long term, not just with academics but with the ups and downs of life in general. So completely removing that struggle to rely on a robot that isn't perfect, but treating it as if it is a perfect replacement for a human brain, is awful. In schools, we have lots of tools that help students, but the point isn't to completely remove the necessary "productive struggle" that learning requires. Take autocorrect on a computer, for example: we eventually have kids using that when learning to type, but it isn't a replacement for learning how to spell. We have audiobooks to supplement written text and help with processing and hearing language, but they aren't replacements for learning how to read. Just like AI should never be a replacement for a human brain and learning itself.
Kids get a lot of flak for using AI, but it's not their fault that these tools are being shoved into their hands from the moment they're born. Their brains are nowhere near developed yet, and the world has given them the ultimate path of least resistance and constant dopamine boosts. If we'd had access to these tools in school, we'd likely have been excited about them and used them too. So I'm not saying all this to demonize kids; it isn't their fault that their brains have been trained this way. The fault lies with the people who have made AI what it is now without ever thinking about the consequences of their actions, or how it would impact young children. It also lies with the adults around them who are ignorantly promoting or ignoring what AI is doing to their child (or students, if it's a teacher peddling it). So it's not students' fault, but it still hurts to see the damage that's already been done to such young learners. AI is an attack on so many aspects of what it means to be human, and my job as a teacher is to help students become the best humans they can be. Creativity, critical thinking, problem solving, and learning in the way that humans do is a beautiful thing. When you see a student finally "get it" when they're learning something new, it's an incredible thing to witness. You see this joy on a student's face, this pride that they've finally understood something. That's the joy and accomplishment that comes from a bit of struggle and hard work. The beauty of the "ah-ha" moment is something AI should never be able to take away.
A Society With No Truth
AI-generated photos, videos, and deepfakes are, in many cases, indistinguishable from reality. This poses a problem: much of society relies on pictures or videos to prove the authenticity of something, whether that be a statement from a politician, a photo of a family member in a hospital, or a video of you committing a crime. While believable and realistic fake pictures and videos have been around for a very long time, AI makes it possible for anyone to create them, with very little effort, at a massive scale. This ability to essentially manufacture reality has made it significantly easier to spread misinformation. If nothing is done to curb this capability, we may find ourselves in a "post-truth society" where it's nearly impossible to tell what is or is not real.
You're Replacing Yourself
A lot of companies, both the ones making AI and the ones using it, have talked about replacing employees with AI. At the same time, many pro-AI companies are mandating that employees integrate AI into every aspect of their work. (I know because my employer is one of them!)
Management and CEOs will cite "increased productivity" as the reason for this, but there is a deeper reason: if AI is integrated completely into an employee's workflow, why do we even need the employee? Some companies have already cited AI as the reason for recent layoffs - so by using AI, you are enabling the AI to do your job, and thus making yourself irrelevant. Even if it takes 5 AIs to do the work of one human, if one AI is 10 times cheaper than one human, the math works out. Why hire 100 humans when 1 human and 100 AIs will do the trick?
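To make that math concrete, here's a toy cost comparison. The dollar figure is a made-up placeholder; the only inputs taken from above are the two ratios (5 AIs to do one human's work, each AI 10x cheaper than a human):

```python
# Toy comparison: 5 AIs replace 1 human, each AI costs a tenth of a human.
# HUMAN_COST is a hypothetical annual figure, chosen only for illustration.
HUMAN_COST = 100_000               # hypothetical cost of one employee per year
AI_COST = HUMAN_COST / 10          # one AI, assumed 10x cheaper
AIS_PER_HUMAN = 5                  # AIs needed to match one human's output

humans_replaced = 100
ais_needed = humans_replaced * AIS_PER_HUMAN

print(f"{humans_replaced} humans: ${humans_replaced * HUMAN_COST:,.0f}")
print(f"{ais_needed} AIs:    ${ais_needed * AI_COST:,.0f}")
# -> 100 humans: $10,000,000
# -> 500 AIs:    $5,000,000
```

Same output, half the cost - and that's the calculation being run in boardrooms right now, whether or not the "same output" assumption actually holds.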
What if it Doesn't Work Out?
A lot of very powerful people have bet everything on AI working out. But, what happens if it doesn't? The best we can do is speculate, because the situation we're in is unique. However, there does seem to be some consensus around a few things:
- We'll probably see an economic downturn to rival the 2008 financial crisis. Some more extreme takes say it could be as bad as, or worse than, the Great Depression. (I personally don't think that's true, but a 2008-level crisis seems believable to me)
- If investors lose confidence in AI, trillions of dollars in investments will be wiped out.
- If AI companies can't pay off their debts, trillions of dollars will end up in default.
- If new nuclear reactors get built for AI (like the U.S. wants to do), but the AI goes away, then we'll have a bunch of nuclear reactors sitting around. Cheaper energy for everyone! (this is, surprisingly, good)
- If enough people become dependent on AI before it "doesn't work out", the loss of AI will pose a significant problem for all of those people. I have no idea what this would entail, but it would probably be bad. (see the section above, "A Society Where Nobody Learns")
tl;dr: there's a lot riding on AI going well, so if it doesn't go well, the fallout will be massive. I think if we, as a society, start pumping the brakes now, we can lessen the fallout or even prevent anything from going wrong to begin with. Or in other words, if we slow down and think for a minute, we can find safer, more sustainable ways to make AI work for everyone.
Oh, and by the way: It won't work out.
OpenAI's investors are worried; 95% of organizations using AI have not seen the promised returns on their investments (the productivity gains just aren't there); Microsoft has managed to turn "30% of the company's code is written by AI" into more bugs and outages than ever; and cutting-edge models just aren't very good at what they're supposed to do: summaries are often wrong, they make mistakes that humans would never make, and no matter how much "thinking" the AI does, it's not capable of learning or reasoning. To top it all off, people are tired of companies putting AI in everything. This is all part of a larger trend indicating that AI will not, in fact, work out.
But... What if it's Worth It?
Right, so, let's say I've convinced you: AI right now is bad. But, what if it's worth it? What if AI will be so good, and so revolutionary, that the many people we lose along the way will be worth it? Well, to be honest, I can't see the future - maybe we'll achieve AGI, and then the entire planet will be ruled by a benevolent superintelligence that makes everyone's lives as good as they possibly can be. But... Do you really, genuinely believe that's what will happen? Do you think that people like Sam Altman, Elon Musk, or Dario Amodei really have our best interests at heart, and that if they figure out AGI, they will only use it for the good of all humanity?
I don't.
Well, what the hell am I supposed to do about it?
This is a very good (and very important) question! The truth is, as an average person, there's not a whole lot you can do. Your employer might be forcing you to use AI at work, and I don't expect you to give up being able to pay rent. But "not a lot" is still more than nothing.
You can lead by example: If you feel safe doing it, refuse to use AI at work. You'll find there are more people than you might think who agree with you. And if enough employees don't use AI, companies might start thinking twice about integrating it into everything.
If you live in a democratic country, you can write to your government representatives, and ask them to regulate the AI industry. This is particularly relevant if you live somewhere where new data centres are being built, because regulating (or not regulating) AI will impact you directly.
If you're not an average person, and you have some level of influence over how AI is used in your organization, use that influence! Choose what is right for everyone, not only what is right for the shareholders.
At the end of the day, the AI race is a game of chicken: Everyone is "doing AI" as quickly as possible only because everyone else is. As soon as one organization flinches, others will too. All that needs to happen is for our leaders to step back, assess the situation, and figure out how we can do this safely and sustainably.