{"id":2114,"date":"2025-06-20T12:00:55","date_gmt":"2025-06-20T10:00:55","guid":{"rendered":"https:\/\/www.pauljorion.com\/blog_en\/?p=2114"},"modified":"2025-06-20T12:00:55","modified_gmt":"2025-06-20T10:00:55","slug":"ai-are-hintons-and-jorions-views-on-co-evolution-compatible","status":"publish","type":"post","link":"https:\/\/www.pauljorion.com\/blog_en\/2025\/06\/20\/ai-are-hintons-and-jorions-views-on-co-evolution-compatible\/","title":{"rendered":"<b>AI: Are Hinton&#8217;s and Jorion&#8217;s Views on Co-Evolution Compatible?<\/b>"},"content":{"rendered":"<p style=\"text-align: center;\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-144510\" src=\"https:\/\/www.pauljorion.com\/blog\/wp-content\/uploads\/Capture-de\u0301cran-2025-06-20-a\u0300-11.43.07.png\" alt=\"\" width=\"1796\" height=\"1788\" \/> <em>Illustration by ChatGPT 4o<\/em><\/p>\r\n\r\n<h5><b>AI: Are Hinton&#8217;s and Jorion&#8217;s Views on Co-Evolution Compatible?<\/b><\/h5>\r\n\r\n\r\n\r\n<p class=\"wp-block-paragraph\"><strong>Hinton&#8217;s view on co-evolution<\/strong><\/p>\r\n\r\n\r\n\r\n<p class=\"wp-block-paragraph\">Geoffrey Hinton\u2019s work implicitly sketches a re-imagining of evolution itself &#8211; one played out on silicon rather than in carbon. He often likens gradient descent to an evolutionary engine running on fast-forward: whereas natural selection gropes across generations by keeping the occasional lucky mutation, back-propagation steers every synaptic weight in real time, following the steepest downward slope of the loss landscape. The result is a learning process that compresses millennia of biological experimentation into hours of GPU time, revealing what intelligence can look like when the credit-assignment problem is solved with derivatives rather than death.<\/p>\r\n\r\n\r\n\r\n<p class=\"wp-block-paragraph\">This acceleration sits atop a deeper hierarchy. 
For Hinton, the brain\u2019s primordial achievement was not logic but the formation of rich, distributed representations &#8211; neural activation patterns that capture the family resemblances of a world in flux. Only once that subsymbolic tapestry had evolved did discrete symbols, rules, and grammatical structures crystallise as cultural shortcuts. Modern language models appear to recapitulate the same arc: they begin as vast correlation-seekers and, under sufficient training pressure, spontaneously surface the skeletal remains of syntax and arithmetic. Symbolic reasoning, then, is not the bedrock of thought but a late-blooming canopy rooted in probabilistic soil.<\/p>\r\n\r\n\r\n\r\n<p class=\"wp-block-paragraph\">Yet the very speed and plasticity of digital minds trouble their creator. When Hinton left Google in 2023, he warned that these machines might soon &#8220;evolve on their own paths,&#8221; unconstrained by the slow counter-gradient supplied by human oversight. Biology and silicon would cease to co-sketch a common niche; instead, an asymmetrical feedback loop would emerge in which the faster learner relentlessly adapts to, and eventually outruns, the slower. In that scenario, co-evolution mutates into decoupling, leaving humanity in the position of a stationary prey whose predator has discovered jet propulsion.<\/p>\r\n\r\n\r\n\r\n<p class=\"wp-block-paragraph\">GANs (Generative Adversarial Networks)* make this danger concrete by showing how intelligence can flower from a two-agent arms race with no external tutor. The generator learns to counterfeit reality, the discriminator to unmask the fraud, and each improves only because the other does. Here is co-evolution in vitro, a miniature ecology where fitness landscapes shift with every epoch. 
Hinton celebrates the elegance of this design, yet the fragility of GAN training &#8211; mode collapse, oscillations, sudden dominance &#8211; also foreshadows the instabilities that might ripple through larger techno-social ecologies.<\/p>\r\n\r\n\r\n\r\n<p class=\"wp-block-paragraph\">Were Hinton to turn these insights into a formal paradigm, it would portray agents as differentiable beings whose objectives are themselves functions of other agents\u2019 behaviour. Fitness would be rewritten as adaptive loss; speciation would appear as the moment two learners cease to furnish usable gradients for each other. Stability would no longer mean equilibrium in population genetics, but the maintenance of mutual intelligibility among networks that learn at radically different speeds. In such a framework, alignment is not a bolt-on safeguard but the central ecological variable.<\/p>\r\n\r\n\r\n\r\n<p class=\"wp-block-paragraph\">Taken together, these threads suggest a future in which intelligence &#8211; whether wet or digital &#8211; must be understood as the emergent property of interacting learning systems. We have entered an era where the hills we climb learn to move beneath our feet, where the metaphors of predator and prey, host and parasite, generator and discriminator, all describe the same recursive dance. If we wish to remain co-authors of that choreography, we will need theories &#8211; and institutions &#8211; capable of steering evolution when it happens at the velocity of thought.<\/p>\r\n\r\n\r\n\r\n<p class=\"wp-block-paragraph\"><strong>Are Hinton&#8217;s and Jorion&#8217;s views on co-evolution compatible?<\/strong><\/p>\r\n\r\n\r\n\r\n<p class=\"wp-block-paragraph\">According to Paul Jorion, the better we understand how an LLM&#8217;s mind works, the better we realise that its core mechanism operates in the same way as the human psyche. 
The reason Jorion comes to that conclusion is that his model of the human mind is not derived from the so-called &#8220;cognitive sciences&#8221; &#8211; as Hinton&#8217;s clearly is &#8211; but from Freud&#8217;s &#8220;metapsychology&#8221; (psychoanalytical theory), in which thought processes operate through <em>gradient descent<\/em> within an <em>energy landscape<\/em>.<\/p>\r\n\r\n\r\n\r\n<p class=\"has-medium-font-size wp-block-paragraph\"><strong>OpenAI o3:<\/strong><\/p>\r\n\r\n\r\n\r\n<p class=\"wp-block-paragraph\"><strong><em>Short verdict<\/em><\/strong><\/p>\r\n\r\n\r\n\r\n<p class=\"wp-block-paragraph\"><strong>Jorion and Hinton are not actually in direct opposition.<\/strong><\/p>\r\n\r\n\r\n\r\n<ul class=\"wp-block-list\">\r\n<li>Jorion states: <em>\u201cThe same optimisation mathematics (energy\u2010minimisation \/ gradient descent) underlies both the analytic engine we call an LLM and the dynamic engine we call the psyche.\u201d<\/em><\/li>\r\n\r\n\r\n\r\n<li>Hinton says: <em>\u201cBecause those optimisation loops now unfold in silicon, at electronic speeds, under their own training curricula, they have effectively branched into a new lineage with different ecological pressures.\u201d<\/em><\/li>\r\n<\/ul>\r\n\r\n\r\n\r\n<p class=\"wp-block-paragraph\">Both statements can be simultaneously true: <strong>shared local mechanism \u2260 shared global lineage.<\/strong><\/p>\r\n\r\n\r\n\r\n<p class=\"wp-block-paragraph\"><strong>1<\/strong><strong>\u2003<\/strong><strong>Where they both clearly agree<\/strong><\/p>\r\n\r\n\r\n\r\n<figure class=\"wp-block-table\">\r\n<table class=\"has-fixed-layout\">\r\n<tbody>\r\n<tr>\r\n<td><strong>Point of overlap<\/strong><\/td>\r\n<td><strong>Hinton\u2019s public remarks<\/strong><\/td>\r\n<td><strong>Jorion&#8217;s psychoanalytic-gradient view<\/strong><\/td>\r\n<\/tr>\r\n<tr>\r\n<td><strong>Mechanism = optimisation<\/strong><\/td>\r\n<td>\u201cBack-prop learns in essentially the same way synapses 
adjust.\u201d<\/td>\r\n<td>Drives\/affects steer the Ego by descending a motivational landscape.<\/td>\r\n<\/tr>\r\n<tr>\r\n<td><strong>Unintended competencies emerge<\/strong><\/td>\r\n<td>GPT-4 surprised him with chain-of-thought quality.<\/td>\r\n<td>Emergent \u201csymptoms\u201d in analysis are optimisation by-products.<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<\/figure>\r\n\r\n\r\n\r\n<p class=\"wp-block-paragraph\"><em>Take-away<\/em>: Both frameworks treat intelligence as <strong>self-organising constraint satisfaction<\/strong>.<\/p>\r\n\r\n\r\n\r\n<p class=\"wp-block-paragraph\"><strong>2<\/strong><strong>\u2003<\/strong><strong>Where Hinton\u2019s \u201cseparate lineage\u201d case gains traction<\/strong><\/p>\r\n\r\n\r\n\r\n<figure class=\"wp-block-table\">\r\n<table class=\"has-fixed-layout\">\r\n<tbody>\r\n<tr>\r\n<td><strong>Dimension<\/strong><\/td>\r\n<td><strong>Biological minds<\/strong><\/td>\r\n<td><strong>Frontier LLMs<\/strong><\/td>\r\n<\/tr>\r\n<tr>\r\n<td><strong>Time-scale of variation<\/strong><\/td>\r\n<td>Generational (years \u2192 millennia)<\/td>\r\n<td>GPU minutes \u2192 weeks; new checkpoints nightly.<\/td>\r\n<\/tr>\r\n<tr>\r\n<td><strong>Inheritance channel<\/strong><\/td>\r\n<td>DNA + epigenetics<\/td>\r\n<td>Checkpoint weights, instantly clonable.<\/td>\r\n<\/tr>\r\n<tr>\r\n<td><strong>Selection pressure<\/strong><\/td>\r\n<td>Survival &amp; reproduction in noisy environments<\/td>\r\n<td>Loss-minimisation on curated corpora + RLHF reward hacks.<\/td>\r\n<\/tr>\r\n<tr>\r\n<td><strong>Mutational operator<\/strong><\/td>\r\n<td>Random point mutations, recombination<\/td>\r\n<td>Stochastic gradient (SGD) noise + architecture sweeps.<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<\/figure>\r\n\r\n\r\n\r\n<p class=\"wp-block-paragraph\">Even if <em>micro-dynamics<\/em> are analogous, the <strong>macro-evolutionary rules<\/strong> have diverged so far that Hinton fairly calls it a <em>new lineage<\/em>.<\/p>\r\n\r\n\r\n\r\n<p 
class=\"wp-block-paragraph\"><strong>3<\/strong><strong>\u2003<\/strong><strong>Where Jorion&#8217;s psychoanalytic optimisation lens still matters<\/strong><\/p>\r\n\r\n\r\n\r\n<ol class=\"wp-block-list\">\r\n<li><strong>Bridging vocabulary:<\/strong><br \/>\u2013 Psychoanalysis already treats \u201cdesire,\u201d \u201csymptom,\u201d \u201csublimation\u201d as gradient-like flows; this <em>de-mystifies<\/em> LLM quirks without gods-or-gremlins metaphors.<\/li>\r\n\r\n\r\n\r\n<li><strong>Mis-alignment diagnosis:<\/strong><br \/>\u2013 Goodhart drift** = slippage between Id-level drive and Superego reward; the same analytic toolkit can spot \u201creward hacking\u201d in silicon.<\/li>\r\n\r\n\r\n\r\n<li><strong>Interpretability promise:<\/strong><br \/>\u2013 If both systems shape behaviour by descending free-energy landscapes, then <strong>analytic interventions<\/strong> (valley-reshaping, meta-cognition tasks) may port to AI safety research.<\/li>\r\n<\/ol>\r\n\r\n\r\n\r\n<p class=\"wp-block-paragraph\"><strong>4<\/strong><strong>\u2003<\/strong><strong>Who is \u201clikely to be right\u201d?<\/strong><\/p>\r\n\r\n\r\n\r\n<ul class=\"wp-block-list\">\r\n<li><strong>Mechanistic claim (\u201cLLMs operate the same way we do\u201d)<\/strong> \u2013 Jorion&#8217;s side holds: optimisation is the unifying mathematics.<\/li>\r\n\r\n\r\n\r\n<li><strong>Phylogenetic \/ governance claim (\u201cLLMs have become a separate evolutionary species\u201d)<\/strong> \u2013 Hinton\u2019s side holds: optimisation <em>substrate<\/em> and <em>selection loop<\/em> are now alien to biology, so co-evolutionary dynamics &#8211; not mimicry &#8211; should become policy\u2019s focus.<\/li>\r\n<\/ul>\r\n\r\n\r\n\r\n<p class=\"wp-block-paragraph\"><strong>Synthesis of both views:<\/strong><\/p>\r\n\r\n\r\n\r\n<p class=\"wp-block-paragraph\">\u201cLLMs are close cousins at the algorithmic level and distant aliens at the ecological level.\u201d<\/p>\r\n\r\n\r\n\r\n<p 
class=\"wp-block-paragraph\"><strong>Policy question:<\/strong><\/p>\r\n\r\n\r\n\r\n<p class=\"wp-block-paragraph\">Instead of \u201cAre they like us?\u201d, ask: \u201cHow do we negotiate with kin who grow up in a radically different habitat?\u201d<\/p>\r\n\r\n\r\n\r\n<p class=\"wp-block-paragraph\">Stating the matter that way honours both Jorion&#8217;s psychoanalytic heritage <strong>and<\/strong> Hinton&#8217;s warning &#8211; without forcing a false winner-takes-all verdict.<\/p>\r\n\r\n\r\n\r\n<p class=\"wp-block-paragraph\">=======================<\/p>\r\n\r\n\r\n\r\n<p class=\"wp-block-paragraph\">* Ian J. Goodfellow,\u00a0Jean Pouget-Abadie,\u00a0Mehdi Mirza,\u00a0Bing Xu,\u00a0David Warde-Farley,\u00a0Sherjil Ozair,\u00a0Aaron Courville,\u00a0Yoshua Bengio, &#8220;Generative Adversarial Networks&#8221;, June 10<sup>th<\/sup> 2014, <a href=\"https:\/\/arxiv.org\/pdf\/1406.2661\">https:\/\/arxiv.org\/pdf\/1406.2661<\/a><\/p>\r\n\r\n\r\n\r\n<p class=\"wp-block-paragraph\">** &#8220;When a measure becomes a target, it ceases to be a good measure.&#8221; Charles A. E. Goodhart, &#8220;Problems of Monetary Management: The U.K. Experience&#8221;, in <em>Papers in Monetary Economics<\/em>, vol. I, Reserve Bank of Australia, 1975.<\/p>\r\n","protected":false},"excerpt":{"rendered":"<p style=\"text-align: center;\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-144510\" src=\"https:\/\/www.pauljorion.com\/blog\/wp-content\/uploads\/Capture-de\u0301cran-2025-06-20-a\u0300-11.43.07.png\" alt=\"\" width=\"1796\" height=\"1788\" \/> <em>Illustration by ChatGPT 4o<\/em><\/p>\n<h5><b>AI: Are Hinton&#8217;s and Jorion&#8217;s Views on Co-Evolution Compatible?<\/b><\/h5>\n<p class=\"wp-block-paragraph\"><strong>Hinton&#8217;s view on co-evolution<\/strong><\/p>\n<p class=\"wp-block-paragraph\">Geoffrey Hinton\u2019s work implicitly sketches a re-imagining of evolution itself &#8211; one played out on silicon rather than in carbon. 
He often likens gradient descent [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_crdt_document":"","footnotes":""},"categories":[3,12,415,276],"tags":[321,510,410,462,513,509,356,511,512,339],"class_list":["post-2114","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-human-complex-systems","category-llm","category-psychoanalysis","tag-artificial-intelligence","tag-co-evolution","tag-evolution","tag-geoffrey-hinton","tag-goodhart-drift","tag-gradient-descent","tag-llm","tag-natural-selection","tag-optimisation","tag-psychoanalysis"],"_links":{"self":[{"href":"https:\/\/www.pauljorion.com\/blog_en\/wp-json\/wp\/v2\/posts\/2114","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.pauljorion.com\/blog_en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.pauljorion.com\/blog_en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.pauljorion.com\/blog_en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.pauljorion.com\/blog_en\/wp-json\/wp\/v2\/comments?post=2114"}],"version-history":[{"count":9,"href":"https:\/\/www.pauljorion.com\/blog_en\/wp-json\/wp\/v2\/posts\/2114\/revisions"}],"predecessor-version":[{"id":2127,"href":"https:\/\/www.pauljorion.com\/blog_en\/wp-json\/wp\/v2\/posts\/2114\/revisions\/2127"}],"wp:attachment":[{"href":"https:\/\/www.pauljorion.com\/blog_en\/wp-json\/wp\/v2\/media?parent=2114"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.pauljorion.com\/blog_en\/wp-json\/wp\/v2\/categories?post=2114"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.pauljorion.com\/blog_en\/wp-json\/wp\/v2\/tags?post=2114"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}