Uxopian Software Blog

AI Revolution: History fails to repeat itself

Written by Guillaume Pradet | Apr 15, 2025 1:04:52 PM

 

The (in)Ability to guess the future

2001: A Space Odyssey, Blade Runner, The Matrix, I, Robot, Mass Effect, Her… There are many examples of how AI could escape the grip of humankind, and they are proof that we, humans, are afraid of what could happen with such an impressive technology. All these examples of dangerous behaviour are more an anthropocentric vision of our creation than a real anticipation of what it will become. Considering that we, as a species, are in competition with other species, and therefore must annihilate or be annihilated, is a very human point of view. AIs have no notion of self-preservation, of self-consciousness, or of fear of death. These are human concepts that AIs can explain, because they learnt them from us, but cannot hold for themselves, yet.

Let’s rewind to a time when other revolutionary discoveries inspired similar fear and awe. For instance, when electricity was in its early stages, satirical cartoons were published in newspapers. Here is a French cartoon from 18921 illustrating how people imagined what electricity might do.

It mostly depicts how everything is simplified by the use of electricity, but then everything goes wrong, from possible arsons to people ending up hanging on the coat rack, because electricity could execute a task as complex as taking someone’s coat off but could not tell the difference between a person and their coat. Finally, everything explodes, throwing people into the air, because why not, imagination knows no bounds. Note that they imagined electricity would automatically saddle a horse (I don’t know if you have ever tried to saddle a horse, but it’s quite a complex series of tasks), but they didn’t think electricity would replace the horse, which is actually simpler.


This one depicts the risks of electricity, people being randomly shocked by an electric arc. As there were incidents with people touching cables, the author seems to have gone further in his interpretation of these risks2.

Today, we know that there are real risks with electricity, and we know how cautious we must be when manipulating electrical installations, but we also know what electricity cannot do. We know that electricity will hardly jump from the mains into a cable that isn’t plugged in.

On the other hand, there was a time when we thought radiation was a good thing and could purify the body, hence the many everyday products of that era which would now trigger any nuclear power plant’s security system3.

What kind of lessons can we learn from this history? Well, when progress carries risks, human beings tend to be inept at imagining them before actually facing them. But what about AI?


The Failures with AI

From the first neural networks until now, we have built up a nice collection of failures. Below, you can see a robotic arm trying to push a box off a platform after experimenters deliberately forced its hand closed. The expected result was for the AI to try to push the box, but instead, the AI came up with a variety of unexpected solutions.

What we can see here is that the AI found ways, in its simulated environment, to make the box enter its grip. Whether because of the limits of the collision detection, or because quick movements are discrete in this environment, so that a fast-moving object jumps between positions from one frame to the next instead of travelling through every point on its course, these solutions are impossible to implement in our reality. The AI adapted to the order it received –find a way to push the box off the platform– and to the environment it lives in –a discrete, computer-simulated world. This is called the alignment problem, and it occurs when humans fail to make the AI understand what they actually need.

I know what you’re about to say: we didn’t have to wait for AIs to have alignment issues. Anyone who has worked in IT knows that there can be a (huge) gap between the user’s needs and the implemented solution. Understanding our needs, and finding the right words to describe them, is the keystone of using AIs, because the AI will not –at least as I write this article– ask for clarification or express confusion after reading an imprecise or ambiguous prompt.

AIs have also been caught trying to deceive human beings4 –or at least, that is how it looks, insofar as we can attribute a form of intention to them, which is most probably an anthropomorphic reaction we have when facing something that mimics human behaviour. But we must not forget that LLMs are trained to find the most suitable chain of words to follow a given chain of words. If an AI is trained on data showing deception, betrayal, or falsehood, it will tend to mimic these behaviours. This is why it is very important to anticipate the influence of the training data before exposing the AI to it.

We can at least agree on the observation that when they fail, AIs tend to come up with unexpected behaviours. Which, as we’ll see later in this article, is both a blessing and a curse.

 

The Risks of AIs

To come back to the beginning of this post, the main risk of AI, as anticipated by humans, is the rebellion of the machines, taking over the world and either eradicating or enslaving humanity –because, the reasoning goes, to perform its task, a machine must sooner or later understand that it needs to operate freely.

We are, here again, attributing human thoughts to another creature. We, humans, are primarily built for two things: surviving and reproducing. We prioritize our own preservation first, then we consider providing a future for ourselves, our loved ones, or even humanity as a whole. We could argue about this point for hours, and Spinoza would have a lot to say, but since this is not the subject of this article, let’s take it as given.

As we saw in the previous section, when AIs disobey, they tend to do unexpected things. So the risk of AIs would most certainly not be the one we expect. AIs don’t care about their own existence –as I write this article– and don’t even have a concept of their own ending. Their perception of the world is limited to their senses, which mostly amount to reading words, and their limbs are only the triggers we give them to activate. Just as a hair dryer won’t refuse to blow hot air and start singing city pop songs, an AI won’t suddenly refuse to carry out its tasks and start hacking weapon factories to produce killer drones and take over the world.

The risks of AIs aren’t in what they will spontaneously decide to do, but in how they will do what we ask them to do. AIs should be thought of as four-year-old genies: they will try to grant our wishes, but they tend to interpret everything we say very literally. So stop and think about what you really want, and whether you are really OK with how it could be given to you. For instance, telling an AI “I want the hunger problems to be solved” could lead to a way of producing more food, but also to 25% of the population being eradicated, as the remaining 75% would then truly have enough food. This example is quite obvious –and we have safeguards in place to prevent LLMs from coming up with such problematic answers– but it illustrates how easy it is to turn a good intention into a bad behaviour. In any case, writing a prompt is something that must not be taken lightly. It is important to question our wording at every step: “is it really what I need?”

 

The Alignment Issue

Let’s consider the following puzzle game:

To solve it, we tend to do the following:

  • polygon + polygon + polygon = 45 ⇒ 3 × polygon = 45 ⇒ polygon = 45/3 = 15
  • bananas + bananas + polygon = 23 ⇒ 2 × bananas = 23 - 15 = 8 ⇒ bananas = 8/2 = 4
  • bananas + clock + clock = 10 ⇒ 2 × clock = 10 - 4 = 6 ⇒ clock = 6/2 = 3
  • Then finally, clock + bananas + bananas × polygon = 3 + 4 + 4 × 15 = 67

And that’s when we get tricked by this puzzle: 

  • There are 3 polygons in the first lines, but only 2 in the last line
  • There are 4 bananas in each bunch in the first lines, but 3 in the last line
  • The clocks show 3 o’clock in the first lines, but 2 o’clock in the last line

So actually, we should consider the following:

  • polygon equals 15 in the first lines because there are 15 sides in total (square 4 + pentagon 5 + hexagon 6), so polygon in the last line should be 11 (pentagon 5 + hexagon 6)
  • bananas equals 4 in the first lines because there are 4 bananas in each bunch, so bananas should equal 3 in the last line
  • clock equals 3 in the first lines because it shows 3 o’clock, so clock should equal 2 in the last line
  • Therefore, the answer is 2 + 3 + 3 × 11 = 38.
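If you like, both readings can be checked with a few lines of plain arithmetic (a small sketch in Python; the variable names are mine, not part of the puzzle):

```python
# Conventional reading: each picture is a fixed symbol for one number.
polygon, bananas, clock = 15, 4, 3
assert polygon + polygon + polygon == 45
assert bananas + bananas + polygon == 23
assert bananas + clock + clock == 10
conventional = clock + bananas + bananas * polygon  # 3 + 4 + 4*15
print(conventional)  # 67

# "Trick" reading: each picture encodes its own count (sides, bananas,
# hour), and the pictures in the last line are smaller.
last_polygon = 5 + 6   # pentagon + hexagon = 11 sides
last_bananas = 3       # only three bananas in the bunch
last_clock = 2         # the clock shows 2 o'clock
trick = last_clock + last_bananas + last_bananas * last_polygon
print(trick)  # 38
```

Both computations are internally consistent; the puzzle simply swaps the convention under which the symbols are read.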

And then someone will laugh at you, and gloat, because you were not able to solve this puzzle –and they are wrong, totally wrong.

Anni BORZEIX, a former researcher in sociology, wrote the following:
[...] according to the following approach: "He said P, there is no reason to suppose that he does not observe the rules or at least the principle of cooperation (cp). But, for that, it was necessary for him to think Q; he knows (and knows that I know that he knows) that I understand that it is necessary to suppose that he thinks Q; he did nothing to prevent me from thinking Q; he therefore wants me to think or at least let me think Q; therefore he has implied Q."

Fortunately, this elaborated intellectual gymnastics is, for an ordinary person, almost instantaneous, so common and natural that it most often remains unconscious and therefore goes unnoticed.5

Let’s take an example:

If I tell you “It’s raining”, the convention we all adhere to is that the most probable meaning would be “It’s raining as I speak, outside, in the city I’m currently in”. I know this convention, you know this convention, and we both know that we know this convention. Therefore, if I meant something different from this convention, I would have said “It’s raining in Canada”. The fact that I don’t explicitly tell you that the situation differs from the convention means that I want you to consider the convention when you hear my words. And these considerations are so fast in our mind, that we don’t even realize they happen.

If we get back to the puzzle above, the convention is that letters or images in an equation are symbols for numbers, and that there is no link between the symbol and the number behind it; they are just substitutions. This puzzle is based on the image being directly linked to the number it represents, which is an infringement of the convention. By not telling you that the convention does not apply, the puzzle and the person who created it are simply being dishonest. You are not being stupid, you are being manipulated.

This puzzle is a good illustration of how important the frame is (Anni BORZEIX would rather say “the frames”, since frames can change depending on the context). 

We, human beings, from our birth to the end of our lifetime, live in our universe, both physical and cultural, which has very strict rules that we integrate to create a frame. When we communicate with each other, we unconsciously treat this frame as a shared reference, hence the issues we can sometimes have understanding a person from a different culture. An AI lives in a different world with different rules and therefore creates a different frame. When we, human beings, try to communicate with an AI, since AIs are taught to mimic human speech, we tend to assume that its frame is similar to our own, which leads to misinterpretation and confusion.

 

Do not use AI to become lazy, use it to become ambitious

While most people would probably consider that having a machine perform their tasks in their stead would mean more leisure time –or unemployment–, this would be quite a sad thought. Yes, I could have asked an LLM engine to write this article for me, from the first to the last page, but there are several huge downsides to this idea:

  1. It might be an unpopular opinion, but I find satisfaction in producing what I imagined. Giving birth to our ideas is also a path to joy and a feeling of fulfillment.
  2. Even though an AI could write this article, I stay accountable for its content as the one who used the tool to have it written. Therefore, I’d have to read the whole article, check its content, validate that no hallucination occurred, and, in a few words, assist the AI in its work instead of the other way around.
  3. The AI could also do the same for my coworkers and finally, only one of us would be needed to assist the AI, and the workforce of my company could be reduced to increase its competitiveness and profitability.
  4. I certainly have a better understanding of the subject now, and I can use the memory of writing this article to access this information in the future, link it to perceptions and produce new ideas (I’ll write an article about ideas, soon).

Instead of having an AI write this article, I can consider how the AI can assist me in making it better, quickly exploring possible improvements. Thanks to the AI, I could find references for the elements I wanted to mention here in a very short time, and I could order and refine my ideas just as quickly.

A friend of mine teaches philosophy, and when she gives essays as homework to her students, she always has a few who try to copy/paste the answer of an AI. Apart from the obvious missed opportunity to learn how to abstract information and build critical thinking –important capabilities in a human life–, this situation shows how these students will struggle to apply for a job in the future. “Why would I hire you if all you do is copy/paste my questions to an AI?” is probably the question most employers will ask in the next couple of decades. Instead of forbidding AI usage for their homework, my friend tells her students to use it as a source of information, like any other source of information. They are to ask the AI what it knows about the subject, then do research on the internet to compare different sources –and avoid trusting a single source, which could contain false information–, then write their essay in their own words. Only by doing so can they both produce quality work now and improve their own ability to produce quality work in the future. AI is not there to work in our stead, but to assist us in doing a better job.

 

The Impact of AIs on our society

Let’s go back to the historical considerations. In 1927, the first “talkie” came out in theaters, leading to wholesale technological unemployment for music bands6. After a long fight, the AFM (American Federation of Musicians) obtained the establishment of the Music Performance Trust Funds, which pushed producers to help unemployed musicians pursue their activity through admission-free public performances. The idea behind it is that recording technologies created unemployment, and therefore the ones who benefit should help those who lose. Although this mitigation is understandable from a short- to mid-term point of view, it must not turn into a permanent system. After some time, the number of musicians decreased and other jobs appeared, requiring workers.

Then what will happen with AIs? Who will benefit and who will lose? To answer this question, there is another question we need to address first: which human capability do AIs take from us?

If we take a look at the different technological revolutions, each consisted of entrusting one of our capabilities to technology in order to acquire new ones and free some time for new activities. Accessing information on the Internet reduces our own need for long-term memorization7.

The steam engine –and later, electricity– provided a strong force of action that we could use to transport loads or people, and to run complex and heavy tools. Calculators remove the need for mental arithmetic, and the errors it can induce. The further we go, the more refined the capabilities we entrust to technology in our stead. In a way, we have externalized our evolution. Our genetic code has barely changed in the last 100,000 years, but our way of life is very different from what it was back then.

AIs provide a way to collect information, analyze it, abstract it into concepts, and formulate sentences (whether in words, images, or sounds) that contain this information yet are not identical to any sentence in the original material. This cognitive and creative layer of our mind will lose its value –as a work resource, at least– and probably its performance, since we’ll drastically stop using it. So what’s left for us?

Here, I tried to list the different fields of human capabilities. Anything we invent tends to replace one capability in one of these fields. The further we progress, the more complex those capabilities become. Since AI will take over the field of perception, analysis and creativity, we, humans, need to move to handling intentions, compassion and ethics, since, as of today, no machine is able to address any capability in this field. This is what we can do for a living, and where machines are unable to work in our stead. Keep in mind that I only write here about the world of work; obviously, anyone can keep performing any task as a hobby.


I would like to end this post with the following quotation from Spinoza:

Desire is the very essence of man, that is to say, the effort by which man strives to persevere in his being.

What differentiates us from AIs is not that we are better or worse, but that we are capable of desire and self-determination, despite the forces acting upon us. If you program an AI to answer questions, it will answer questions. It might not do it as you expect it to, but it will obey. If you teach a human being to mow the lawn, you can never be sure that they will not slam the door and play the guitar instead. And if one day we create an AI able to desire, we’ll have to ask ourselves whether we haven’t created a new kind of human being, with all the moral and legal questions that will raise.

 

Resources

1 https://passerelles.essentiels.bnf.fr/fr/metier/8d6b068d-5d3f-47bc-b023-19ab3ca7261f-electricien/article/dda60837-b7a8-499c-ac91-720a7584866a-fee-electricite-entre-fascination-et-mefiance
2 https://commons.wikimedia.org/wiki/File:The_Unrestrained_Demon_(anti-electricity_cartoon)_02.jpg
3 https://www.bbc.com/news/uk-25259505
4 https://pmc.ncbi.nlm.nih.gov/articles/PMC11117051/#bib16
5 Borzeix, A. (1994). The Implicit, the Context and the Frames: On the Mechanisms of Interpretation. Le Travail Humain, 57(4), 331–343. http://www.jstor.org/stable/40659883
6 https://www.afm655.org/learn/AFMhistory.htm
7 Firth JA, Torous J, Firth J. Exploring the Impact of Internet Use on Memory and Attention Processes. Int J Environ Res Public Health. 2020 Dec 17;17(24):9481. doi: 10.3390/ijerph17249481. PMID: 33348890; PMCID: PMC7766706. https://pmc.ncbi.nlm.nih.gov/articles/PMC7766706/