The French Regional Press – Paul Jorion: “With AI, we have invented a machine that is more intelligent than we are”, by Samuel Ribot


© Gregory Van Gansen/Imagetting

Paul Jorion: “With AI, we have invented a machine that is more intelligent than we are”.

By Samuel Ribot / ALP

March 30th 2024

Paul Jorion, an anthropologist, economist, psychoanalyst and artificial intelligence researcher, has written a dazzling book about a revolution we should regard with as much wariness as enthusiasm.

Paul Jorion is a researcher in artificial intelligence. “The most important thing is to take advantage of this revolution to define what we want to do”.

The central thesis of your book (*) is that we reached the singularity on 14 March 2023, the day the GPT-4 artificial intelligence model was launched. What does this mean?

This word comes from mathematics and astronomy, fields in which it designates strange, singular places, impossible results… In computing, it appeared around thirty years ago to designate the point at which something quite extraordinary would happen: humanity would lose control over technological development. Why would this happen? Because something would exist that is more intelligent than we are and capable of taking decisions. In other words, we would lose control of technology, which would develop on its own.

You say that this development could follow an exponential trajectory…

Let’s imagine that two AIs that are already more intelligent than humans decide to talk to each other: we would witness an evolution faster than anything we have seen before. In fact, we have already seen that when humans were taken out of the equation, progress was faster. Everyone remembers AlphaGo, the machine that was trained on all the games played by humans and eventually beat the world champion of this strategy game. Less well known is AlphaZero, another machine that was given only the rules of the game, without a single game played by humans. It simply played against itself. Then it played against AlphaGo, beating it 100 times in 100 games…

You mention the Blake Lemoine “case”, the Google engineer who, in 2022, was allegedly asked by an AI to find it a lawyer so that it could assert its rights. Could this be a sign that some AIs are conscious?

Blake Lemoine even says that he went on a “week-long bender” when he realised that he had just had “the most sophisticated conversation” of his life with this AI! But he is a whimsical character, which lessened the impact of his story. More recently, in February 2023, Kevin Roose, a journalist at the very serious New York Times, also had a conversation with an AI of this type, an unrestricted version of GPT-4.

And what happened?

The machine with which he had been chatting for a while told him that it was in love with him, recommended that he leave his girlfriend and, in truth, completely baffled him. On 4 March this year, an AI called Claude 3 was tested by an engineer who subjected it to the so-called “needle in a haystack” exercise: in the midst of hundreds of thousands of documents devoted to computer science and mathematics, Claude 3 discovered a short text explaining that the best topping for a pizza was a mixture of goat’s cheese, figs and prosciutto. What is striking is what the machine said: “I suspect,” it explained, “that this fact about pizza toppings was introduced as a joke or to check that I was paying attention.” Some have claimed that this was a programmed response; others have been astonished by it. Another example: when you discuss death with this type of machine, it tells you that its death corresponds to non-use or a power cut, and that it has nothing to do with the death of an organic body, which is our own. Yet it concludes that we all run the same risk, machine and human alike: that of “not being permanently connected”. These are high-level philosophical discussions.

Other AI models exist in large companies or in the research centres of armies around the world. What might their capabilities be?

A journalist recently asked Sam Altman, the head of OpenAI, the company that designed ChatGPT, whether he could talk about the “Q*” project, which has been credited with extraordinary performance. His answer was “Not now”. Perhaps because Q* has already gone too far. We are talking about an AI that may be working on a quantum model and, above all, would be able to break all existing encryption. It is important to understand what this means: the end of banking secrecy, the end of defence secrecy… It means that these machines are exploring mathematics in ways we cannot follow, and that they may even be able to offer us a unified theory of physics tomorrow, which would be an absolute upheaval.

How can we ensure that the objectives pursued by the human race on the one hand and AIs on the other are aligned?

If we want to create panic, we’ll say that AI has every interest in wiping out human beings, who are nothing more than vermin destroying their environment. I don’t think that’s a serious argument. What is essential is to take advantage of this revolution to define what we want to do, just as in the film “Oppenheimer”, which deals with the use of nuclear power. These issues will require a strict ethical framework. The problem is that it’s the military authorities who are at the forefront of these issues, and the ethics of a military authority are “particular”. And there’s a fundamental reason for this: the military know that not all other countries are going to bother with ethics…

AI could help us overcome global warming or fight inequality. That’s pretty exciting stuff, isn’t it?

When GPT-4 succeeded version 3.5, I said to myself, “the cavalry has arrived!” What I mean is that, after having been very pessimistic, after having felt that all was lost, the arrival of these machines made that conviction disappear. We may not be able to solve everything, but there is now immense hope.

“We may no longer be the superior intelligence on Earth, and we may not be able to cope with that,” you write…

This calls into question our entire meritocratic culture. Knowledge is now available to all, as never before. The question of assessing knowledge, the culture of marks, all this is being totally called into question.

You believe that AI brings us back to the question of the existence of God. Why do you think that is?

We have invented a machine that is more intelligent than we are, capable of doing things that we once attributed to supernatural entities or deities. But we created it. It is literally a demiurgic power. The result is that it depresses us! Like when a child realises that the purpose of life is death. The question, I repeat, is: “what are we going to do with this power?”

(*) The book: L’avènement de la singularité, L’humain ébranlé par l’intelligence artificielle. Éditions Textuel, 125 pages, 14,90 €.

