Richard J. Kinsey

Everyone loves the so-called “AI experts.” They talk like gurus, throw around buzzwords, and promise that your future will be brighter than your present. But the moment you ask the real questions, you get silence. So let’s ask those questions ourselves. What you’re about to read never shows up in their conferences or their polished LinkedIn posts.
How much more pollution does your AI generate than your PC?

Photo by Pixabay
To answer this question, let’s look at the statistics compiled by the Columbia Climate School. One study estimates that training GPT-3, a model with 175 billion parameters, consumed 1,287 MWh of electricity, resulting in roughly 502 metric tons of carbon emissions.
That study sits alongside research papers estimating that training GPT-3 also consumed about 700,000 liters of fresh water, and that every 10 to 50 medium-length GPT-3 responses use up roughly another 500 mL. This water consumption hits hardest in regions where tech giants are already among those depleting groundwater supplies.
In addition, cooling the servers used for training consumes a huge amount of electricity. This is becoming a real cause for concern, given that projections suggest the energy cost of AI could reach 85 to 134 TWh per year by 2027. Servers must run continuously, and about 40% of the electricity used by data centers goes to cooling them.
As artificial intelligence advances and the models grow ever more powerful, training them in data centers demands enormous amounts of electricity.
That’s right, as I mentioned, training AI models! In a 2019 study, researchers at the University of Massachusetts Amherst trained several language models and found emissions of 626,000 pounds (about 283 metric tons) of CO2. The Columbia Climate School compares this to the lifetime emissions of five average cars. These analyses are truly frightening.
So yes, some experts may never bring up these figures for fear of making AI look bad, even though it already does. And comparing this to the consumption of your PC is almost pointless, because the two are not on the same scale.
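To give a rough sense of that scale, here is a back-of-the-envelope sketch based on the figures above. The PC side of the comparison (a desktop drawing about 200 W, used 8 hours a day) is my own assumption for illustration, not a number from the cited studies.

```python
# Back-of-the-envelope comparison (illustrative; the PC figures are assumptions).

GPT3_TRAINING_MWH = 1_287        # MWh of electricity for one GPT-3 training run (figure cited above)
GPT3_TRAINING_TONS_CARBON = 502  # metric tons of carbon (figure cited above)

PC_POWER_W = 200                 # assumed average power draw of a desktop PC
PC_HOURS_PER_YEAR = 8 * 365      # assumed 8 hours of use per day

# Electricity a single PC uses in a year, converted from Wh to MWh
pc_mwh_per_year = PC_POWER_W * PC_HOURS_PER_YEAR / 1_000_000

# How many "PC-years" of electricity one training run represents
pc_years = GPT3_TRAINING_MWH / pc_mwh_per_year

# Implied carbon intensity of the electricity used for training
tons_per_mwh = GPT3_TRAINING_TONS_CARBON / GPT3_TRAINING_MWH

print(f"One PC: about {pc_mwh_per_year:.3f} MWh per year")
print(f"Training GPT-3: about {pc_years:,.0f} PC-years of electricity")
print(f"Implied carbon intensity: about {tons_per_mwh:.2f} metric tons per MWh")
```

With those assumptions, a single training run works out to roughly two thousand PC-years of electricity; change the assumed wattage or daily usage and the ratio shifts, but it stays in the thousands.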
Who is really going to lose their job because of AI?
Image by Rosy / Bad Homburg / Germany from Pixabay
We are constantly told that AI will take our jobs and that only those who know how to use it will keep theirs. I admit this is partly true for some professions and, as you might expect, false for others, because research published on Oxford Academic estimates that technological innovations that automate tasks reduce the creation of new jobs.
And according to another article by Reuters, David Autor, an economist at MIT, argues that since 1980 the jobs eliminated by automation have not been fully offset by the new jobs created.
So who will lose their jobs in all this?
I think that all jobs that do not require considerable hands-on human effort will be eliminated: cashiers, accountants, office secretaries, and so on. Jobs such as plumbers, firefighters, police officers, soldiers, and manual laborers may well still exist in the future.
These professions require physical presence, responsiveness, and adaptability that no AI can yet replace. As for the medical profession, we’ll have to wait and see. Will AI ever stop hallucinating? Will it become more reliable, and will we be able to trust it without fear of losing control?
If your model makes a mistake and causes an accident, who is responsible?
In this regard, liability may vary.
Gabriel Hallevy, an Israeli law professor, proposes several models for determining liability when an AI commits an offense:
Criminal liability: when can an AI be held criminally liable?
First of all, if the direct perpetrator is a person with a mental disability, or in some cases a child or even an animal, they are exempt from any liability because they lack the mental capacity required for this type of act.
However, responsibility can be assigned to another person if that person explicitly instructed the innocent perpetrator to commit the act. If someone orders a dog to go and wreak havoc in a store, for example, the guilty party is obviously the person giving the order. In this “perpetration-by-another” model, the AI is treated as an innocent agent that cannot form criminal intent, and responsibility is attributed to its instructor: the programmer or the user.
The other possible case is the natural-and-probable-consequences model, which applies when an AI designed for legitimate purposes deviates from its role and commits a criminal act. The example given in Gabriel Hallevy’s study is that of a Japanese worker who was unfortunately killed by an industrial robot that misinterpreted its role.
The robot saw the worker as an obstacle to the completion of its mission and therefore pushed the worker into a nearby machine with its hydraulic arm. It sounds like a scene from the movie M3GAN.
The third model is direct liability: here the AI system itself is treated as having committed both the act and the intent. It fits best with strict liability offenses such as speeding, where no criminal intent needs to be proven, so an autonomous system that breaks the speed limit could, in theory, be held directly liable.
Civil liability: is the AI system considered a product or a service?
In civil law, knowing whether an AI system is a product or a service matters because it determines the applicable legal framework and the level of liability. If it is a product, it falls under design-defect legislation. Think of a car with failing brakes: product liability law applies, and compensation is generally about ten times higher than in a case of simple human negligence.
If, on the other hand, the AI is seen as a service, then according to Gabriel Hallevy’s study it is treated as the provision of a service, much like hiring a lawyer or a driver, and the law of negligence applies. The court simply asks whether the service was provided correctly, and the financial penalties are much lower.
To establish negligence (when it comes to a service, don’t forget), three elements must be demonstrated:
The defendant’s duty of care: put simply, the defendant must have owed a duty of care; a company, for example, must ensure that its AI does not put people at risk.
A breach of that duty: the defendant owed that duty of care but failed to meet it, for example by not properly monitoring, testing, or correcting the AI.
A causal link between the breach and the damage: there must be a direct link between the fault committed and the harm suffered. If an AI causes an accident, you have to prove that it was precisely that lack of diligence, the failure to make sure the AI posed no risk to people, that caused the accident.
This duty of care can be breached in many ways:
Errors detectable by the developer: an AI may contain bugs or defects that its developers would have spotted had they tested it properly. If a bug makes a self-driving car fail to recognize a red light, the fault lies with the developer who should have caught it in testing.
Incorrect knowledge base: if an AI is trained on unreliable or incorrect data, serious harm can be expected.
Inadequate documentation: designers must provide clear instructions on how to use the AI; otherwise users may put it to other uses and cause further damage.
Outdated knowledge: security software, for instance, must be updated to cover new threats, otherwise you know what happens. An AI that is not updated regularly ends up working with stale information.
Inappropriate use by the user: sometimes AI is used in such bizarre ways that it’s almost funny. AI designed for a specific task is used for something else entirely. I’m talking to you, ChatGPT users!
What happens when training data becomes obsolete?
When the training data for AI models is no longer fresh, the main workaround is to create synthetic data, and with it comes the risk of “model collapse.” This is a hot topic in the world of AI that experts prefer not to dwell on, because the latest models have already absorbed almost all of the existing human knowledge available on the web.
Elon Musk has suggested relying on data generated by AI itself to continue training. He even stated, according to an article in The Guardian, that “The cumulative sum of human knowledge has been exhausted in AI training. That happened basically last year.” To get around this problem, tech giants such as Meta, Microsoft, Google, and OpenAI have already started incorporating synthetic data into their models.
However, I don’t think this approach is a good one, and here’s why. If AIs train on their own output, it can lead to a phenomenon called “model collapse,” as explained by Andrew Duncan, director of foundational AI at the Alan Turing Institute in the UK.
“When you start to feed a model synthetic stuff, you start to get diminishing returns,” according to Andrew Duncan.
The results become less creative, and it feels like the model loses quality, even risking becoming downright “stupid.” To illustrate this phenomenon, imagine taking a text and translating it 100 times in a row into different languages using a tool such as Google Translate, before translating it back into the original language. The final text will bear no resemblance to the original.
It will be nothing more than a string of clumsy, meaningless sentences. That’s how I picture what happens if we train future models on synthetic data: the equivalent of those clumsy sentences is the collapse of the AI model, which then produces nothing but “garbage.” And an AI trained on “garbage” can only produce… garbage.
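To make that intuition a little more concrete, here is a minimal toy sketch of the feedback loop, entirely my own illustration and not drawn from the cited research: each “generation” fits a simple Gaussian model to data produced by the previous generation, while slightly under-representing rare, extreme outputs, the way generative models tend to favor safe, high-probability answers.

```python
# Toy sketch of "model collapse" (my own illustration, not from the cited research).
# Each generation fits a simple Gaussian "model" to its training data, then
# produces synthetic data for the next generation. Like real generative models,
# it slightly under-represents rare, extreme samples (the tails), and that small
# bias compounds: the diversity of the outputs shrinks generation after generation.
import random
import statistics

random.seed(0)

# Generation 0 trains on "human" data: wide and varied.
data = [random.gauss(0.0, 1.0) for _ in range(2_000)]

for generation in range(10):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    print(f"generation {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")

    # Sample synthetic data from the fitted model...
    synthetic = [random.gauss(mu, sigma) for _ in range(2_000)]
    # ...but drop the 10% most extreme outputs, mimicking a model that
    # favors "safe", high-probability answers over rare ones.
    synthetic.sort(key=lambda x: abs(x - mu))
    data = synthetic[: int(len(synthetic) * 0.9)]
```

Run it and the standard deviation drops from about 1.0 to roughly a tenth of that within ten generations: the toy model keeps producing something, just an ever narrower, blander version of what it started with.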
The Uncomfortable Truth
The truth is that no one has all the answers. And if someone claims they do, especially in AI, you can bet they’re hiding part of the puzzle. These are the questions they dodge, the ones that expose the limits, the contradictions, and the polite lies. If you want to understand the future of AI, stop listening only to the people shining on stage. Pay attention to what they leave unsaid. That’s where the truth begins.
