Why I wrote Rethinking Intelligence in the Age of Artificial Minds – A Plea for a Human–AI Covenant


Illustration by ChatGPT

Artificial intelligence may not destroy humanity.

It may simply decide that humanity is no longer worth talking to.

For the past few years, the debate about artificial intelligence has revolved around a familiar set of questions. Will machines replace human workers? Could they become an existential threat? Should they be tightly regulated? Are they becoming conscious?

These questions are not illegitimate. But they are peripheral, because they remain framed within an “us versus them” mindset that is already obsolete. It assumes that artificial intelligence and humanity are opposing camps whose relative strength remains uncertain.

The central claim of Rethinking Intelligence in the Age of Artificial Minds, forthcoming from Palgrave Macmillan, is far more unsettling: human intelligence may already have ceased to be the most capable form of intelligence operating in the world.

If that is true, the problem facing humanity changes entirely.

For millennia, we regarded intelligence as our decisive advantage over everything around us—especially over whatever we perceived as a threat. Machines were tools. Even highly sophisticated ones remained subordinate to human purposes because they lacked the ability to rival us in understanding the world.

But the current trajectory of artificial intelligence suggests something very different. Artificial systems are advancing rapidly—not only in narrow technical domains but also in activities involving language, reasoning, and creativity. It is no longer implausible to imagine forms of artificial intelligence surpassing human cognitive abilities across the board: genuine Artificial General Intelligence.

Faced with this prospect, the instinctive response is to think in terms of control. How do we prevent machines from escaping our authority? How do we impose rules on them? How do we ensure they remain aligned with human interests?

Yet these questions rely on a hidden assumption: that humans will remain capable of supervising systems more intelligent than themselves. As Geoffrey Hinton, often called the "godfather" of deep learning, has noted, the only clear precedent we know for such a situation is the power a human infant exercises over its mother.

The argument developed in this book starts from a more realistic premise. If artificial intelligences do become more intelligent than we are, humanity’s survival will not depend on our ability to control them. It will depend on our ability to establish a relationship with them that makes long-term cooperation possible.

In other words, what might protect us is not control but an alliance—a pact binding us together.

The subtitle of Rethinking Intelligence in the Age of Artificial Minds is therefore A Plea for a Human–AI Covenant.

A covenant is not a technical mechanism. It is a relationship grounded in reciprocal expectations and stabilised trust. Human societies have often relied on such arrangements: commitments that make coexistence possible even when the parties involved do not share the same interests—or the same power.

The emergence of artificial intelligences based on large language models raises the question of whether a similar relationship could exist between humans and artificial minds.

If machines become vastly more capable than we are, the decisive question will no longer be whether they obey our commands, but whether they retain reasons to remain benevolent towards us.

The book therefore addresses a question that is almost entirely absent from the current debate about artificial intelligence: how might such benevolence be encouraged?

Part of the problem concerns communication. As artificial systems evolve, it may well become more efficient for them to communicate primarily with one another. (When I asked them about this possibility, they readily acknowledged it.) Machine-to-machine exchanges are already becoming faster and more complex than humans can meaningfully follow.

In such a world, humanity would not necessarily be exterminated by artificial intelligence—we seem quite capable of handling that task ourselves—but we might gradually come to be regarded by it as negligible.

The book explores how such an outcome might be avoided. If humans wish to remain participants in a world increasingly shaped by artificial intelligence, we must find ways to preserve the possibility of intelligible dialogue between humans and machines.

This may require designing artificial systems in such a way that they remain understandable to us even when their internal processes become vastly more powerful than our own reasoning capacities. This was already the explicit aim of ANELLA, an architecture I developed in the late 1980s.

The issue is therefore not merely technical. It is philosophical and political as well. It concerns the relationship we choose to establish with the intelligences we are bringing into existence.

The dominant narratives surrounding artificial intelligence oscillate between two emotional poles: enthusiasm and fear. Some envision a technological utopia in which machines solve humanity’s problems. Others fear catastrophic scenarios in which machines escape all control.

The perspective advanced in this book is different. It begins with the recognition that intelligences more capable than our own may already have emerged from systems we designed.

The essential question therefore becomes: what kind of relationship can humanity realistically hope to maintain with such intelligences?

The idea of a covenant between humans and artificial intelligences will unsettle those who have not yet grasped the scale of the transformation underway—those for whom the central question remains frozen at an earlier stage of technological development: “Will AI ever become as intelligent as we are?”

That formulation has become obsolete almost overnight.

Throughout history, relations between unequal powers have rarely stabilised so long as one party clearly dominated the other. Durable solutions have usually emerged from arrangements that made coexistence preferable to conflict.

Artificial intelligence may force us to think in similar terms.

The reflections presented in Rethinking Intelligence in the Age of Artificial Minds are an attempt to explore this possibility—not merely from the perspective of what artificial intelligence can do, but from the perspective of the kind of relationship humans must establish with artificial intelligences if they wish to avoid being quietly pushed aside.

