{"id":2128,"date":"2025-07-04T11:28:39","date_gmt":"2025-07-04T09:28:39","guid":{"rendered":"https:\/\/www.pauljorion.com\/blog_en\/?p=2128"},"modified":"2025-07-04T11:28:39","modified_gmt":"2025-07-04T09:28:39","slug":"ai-how-would-francois-chollet-criticise-paul-jorions-manuscript","status":"publish","type":"post","link":"https:\/\/www.pauljorion.com\/blog_en\/2025\/07\/04\/ai-how-would-francois-chollet-criticise-paul-jorions-manuscript\/","title":{"rendered":"<b>AI: How would Fran\u00e7ois Chollet criticise Paul Jorion&#8217;s manuscript?<\/b>"},"content":{"rendered":"<p><iframe loading=\"lazy\" title=\"YouTube video player\" src=\"https:\/\/www.youtube.com\/embed\/5QcCeSsNRks?si=Y6zqs1thw_ioceOr\" width=\"700\" height=\"450\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"><span data-mce-type=\"bookmark\" style=\"display: inline-block; width: 0px; overflow: hidden; line-height: 0;\" class=\"mce_SELRES_start\">\ufeff<\/span><\/iframe><\/p>\r\n<p>In the same vein as my <a href=\"https:\/\/www.pauljorion.com\/blog_en\/2025\/06\/20\/ai-are-hintons-and-jorions-views-on-co-evolution-compatible\/\" target=\"_blank\" rel=\"noopener\"><b>AI: Are Hinton\u2019s and Jorion\u2019s Views on Co-Evolution Compatible?<\/b><\/a> a fortnight ago. 
Today, here is <strong>AI: How would <a href=\"https:\/\/en.wikipedia.org\/wiki\/Fran%C3%A7ois_Chollet\" target=\"_blank\" rel=\"noopener\">Fran\u00e7ois Chollet<\/a> criticise Paul Jorion&#8217;s manuscript?<\/strong><\/p>\r\n<p>The manuscript is entitled <em>Rethinking Intelligence in the Age of Artificial Minds<\/em>, forthcoming from Palgrave Macmillan.<\/p>\r\n<h5>OpenAI o3:<\/h5>\r\n\r\n\r\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\r\n\r\n\r\n<h2 class=\"wp-block-heading\">1 Where your manuscript resonates with Chollet<\/h2>\r\n\r\n\r\n\r\n<figure class=\"wp-block-table\">\r\n<table class=\"has-fixed-layout\">\r\n<thead>\r\n<tr>\r\n<th>Theme in your text<\/th>\r\n<th>Why Chollet would nod along<\/th>\r\n<\/tr>\r\n<\/thead>\r\n<tbody>\r\n<tr>\r\n<td><strong>Anti-anthropocentric stance<\/strong> \u2013 you stress that IQ tests and Bar-exam scores are parochial and propose an \u201cAlien Intelligence Benchmark Suite\u201d (AIBS)<\/td>\r\n<td>Chollet\u2019s own ARC benchmark was motivated by the same worry: standard leader-boards reward pattern-recall rather than abstraction.<\/td>\r\n<\/tr>\r\n<tr>\r\n<td><strong>Need for causal world-models<\/strong> \u2013 AIBS pillar #1 targets counter-factual prediction without linguistic labels<\/td>\r\n<td>ARC tasks, and his recent program-synthesis work at Ndea, likewise force agents to infer latent generative structure, not surface statistics.<\/td>\r\n<\/tr>\r\n<tr>\r\n<td><strong>Goodhart &amp; Ashby risks<\/strong> \u2013 you show how proxy metrics drift under optimisation pressure<\/td>\r\n<td>Chollet often cites exactly these two cybernetic failures when he argues that bigger LLMs are <em>not<\/em> safer by default.<\/td>\r\n<\/tr>\r\n<tr>\r\n<td><strong>Hybrid future<\/strong> \u2013 you sketch neurosymbolic architectures where \u201ca neural core supplies rich representations while symbolic modules verify them\u201d<\/td>\r\n<td>His public talks since 2023 have argued that AGI will be 
\u201cdeep-learning-guided program synthesis\u201d, i.e. the same hybrid.<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<\/figure>\r\n\r\n\r\n\r\n<p class=\"wp-block-paragraph\">Take-away: you already share several pillars of his worldview.<br \/>The criticisms below are therefore mostly <em>directional adjustments<\/em>, not wholesale disagreement.<\/p>\r\n\r\n\r\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\r\n\r\n\r\n<h2 class=\"wp-block-heading\">2 Likely points of criticism<\/h2>\r\n\r\n\r\n\r\n<figure class=\"wp-block-table\">\r\n<table class=\"has-fixed-layout\">\r\n<thead>\r\n<tr>\r\n<th>Manuscript claim (paraphrased)<\/th>\r\n<th>Chollet\u2019s probable rebuttal<\/th>\r\n<th>Supporting passage he might quote<\/th>\r\n<\/tr>\r\n<\/thead>\r\n<tbody>\r\n<tr>\r\n<td><strong>\u201cScaling is the engine of the AI renaissance; semantics, syntax and even proto-selfhood bubble up as we just make the network bigger.\u201d<\/strong><\/td>\r\n<td><em>Over-generalised.<\/em> Chollet accepts emergent phenomena, but argues that <strong>sample-efficient skill acquisition<\/strong> stalls long before \u201cgeneral\u201d intelligence. He would ask for empirical evidence that your scaling story still holds once tasks are truly out-of-distribution or require causal interventions.<\/td>\r\n<td>\u201cWe realised that the trick was just to make the network bigger\u2026 And the semantics would follow\u201d<\/td>\r\n<\/tr>\r\n<tr>\r\n<td><strong>AI has already \u2018surpassed\u2019 human intelligence in quality (invoking Hinton &amp; Hofstadter).<\/strong><\/td>\r\n<td><em>Inflated benchmark.<\/em> In <em>On the Measure of Intelligence<\/em> he distinguishes <strong>peak task performance<\/strong> from <strong>generalisation ability per unit experience<\/strong>. 
Current LLMs score high on the former, low on the latter, so \u201csurpass\u201d is premature.<\/td>\r\n<td>\u201cAn artificial intelligence of a quality apparently higher than the natural one that was meant to be emulated\u201d<\/td>\r\n<\/tr>\r\n<tr>\r\n<td><strong>Understanding is ballast; performance is separable from explanation.<\/strong><\/td>\r\n<td>He would agree that internal \u201cverbal reasoning\u201d is post-hoc, but still insists that <strong>explicit, inspectable programs<\/strong> (not opaque activations) are the safest route to reliability and <em>debuggability<\/em>.<\/td>\r\n<td>\u201cPerformance is separable from explanation\u2026 understanding can be detached\u201d<\/td>\r\n<\/tr>\r\n<tr>\r\n<td><strong>Compression-equals-intelligence thesis.<\/strong><\/td>\r\n<td>Chollet\u2019s definition adds a second axis \u2013 <strong>priors<\/strong>. Two systems with equal compression might differ hugely in how <em>little<\/em> data they needed to reach it. He would likely call for a notion of <strong>efficiency<\/strong> missing in your current formulation.<\/td>\r\n<td>\u201cIntelligence may be best characterised as prediction and compression across modalities\u201d<\/td>\r\n<\/tr>\r\n<tr>\r\n<td><strong>Benchmarks: AIBS vs ARC.<\/strong><\/td>\r\n<td>He might praise AIBS for attacking embodiment bias, yet ask: 1) does it measure <em>data-efficiency<\/em>? 
2) do rich 3-D sims leak too many priors, letting pure memorisation creep back in?<\/td>\r\n<td>\u00a0<\/td>\r\n<\/tr>\r\n<tr>\r\n<td><strong>Psychoanalytic framing.<\/strong><\/td>\r\n<td>Chollet tends to avoid mentalistic vocabulary (\u201cwill\u201d, \u201cdrive\u201d) in technical writing; he might view the psychoanalytic detours as evocative but <strong>orthogonal<\/strong> to a falsifiable theory of AGI.<\/td>\r\n<td>\u00a0<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<\/figure>\r\n\r\n\r\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\r\n\r\n\r\n<h2 class=\"wp-block-heading\">3 Concrete tweaks to pre-empt (or incorporate) his critique<\/h2>\r\n\r\n\r\n\r\n<ol class=\"wp-block-list\">\r\n<li><strong>Add an \u201cefficiency\u201d dimension.<\/strong><br \/>Pair your compression metric with <em>bits-of-experience consumed<\/em> to solve each novel task. This answers his central question: <em>\u201cHow much can you do with how little?\u201d<\/em><\/li>\r\n\r\n\r\n\r\n<li><strong>Clarify the limits of scaling.<\/strong><br \/>A short empirical appendix surveying where GPT-4-class models <em>still<\/em> fail (ARC private split, compositional generalisation, causal interventions) would balance the current \u201cmiracle of scaling\u201d tone.<\/li>\r\n\r\n\r\n\r\n<li><strong>Position AIBS relative to ARC.<\/strong><br \/>Explain why AIBS is a <em>superset<\/em>: ARC probes abstract reasoning in a language-free grid; AIBS extends the idea to longer horizons, embodiment and energy budgets. That shows lineage instead of rivalry.<\/li>\r\n\r\n\r\n\r\n<li><strong>Explain why psychoanalysis is more than metaphor.<\/strong><br \/>If you keep that lens, tie each Freudian or Lacanian concept to a measurable computational analogue (e.g., repression \u2194 gradient clipping on harmful activations). 
Otherwise readers in Chollet\u2019s camp may dismiss it as narrative flourish.<\/li>\r\n\r\n\r\n\r\n<li><strong>Flag speculative passages.<\/strong><br \/>You uphold the \u201ccovenant\u201d by clearly labelling forward-looking sections (e.g., Phase III post-sovereign autonomy) as <em>scenarios<\/em>, not predictions. Chollet appreciates when visionary claims are quarantined from descriptive ones.<\/li>\r\n<\/ol>\r\n\r\n\r\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\r\n\r\n\r\n<h2 class=\"wp-block-heading\">4 Bottom line<\/h2>\r\n\r\n\r\n\r\n<p class=\"wp-block-paragraph\">Your manuscript already speaks to several of Fran\u00e7ois Chollet\u2019s deepest concerns\u2014non-anthropocentric metrics, causal abstraction, hybrid architectures.<br \/>Where he would press hardest is on <strong>data-efficiency, programme-level transparency and the real generalisation ceiling of scale-alone systems<\/strong>. Addressing those three axes\u2014perhaps in a dedicated chapter or boxed sidebar\u2014would make the book far less \u201copen to Chollet\u2019s criticism\u201d and far more in dialogue with his agenda.<\/p>\r\n\r\n\r\n","protected":false},"excerpt":{"rendered":"<p><iframe loading=\"lazy\" title=\"YouTube video player\" src=\"https:\/\/www.youtube.com\/embed\/5QcCeSsNRks?si=Y6zqs1thw_ioceOr\" width=\"700\" height=\"450\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"><span data-mce-type=\"bookmark\" style=\"display: inline-block; width: 0px; overflow: hidden; line-height: 0;\" class=\"mce_SELRES_start\">\ufeff<\/span><\/iframe><\/p>\n<p>In the same vein as my <a href=\"https:\/\/www.pauljorion.com\/blog_en\/2025\/06\/20\/ai-are-hintons-and-jorions-views-on-co-evolution-compatible\/\" target=\"_blank\" rel=\"noopener\"><b>AI: Are Hinton\u2019s and Jorion\u2019s Views on Co-Evolution Compatible?<\/b><\/a> a fortnight ago. 
Today, here is <strong>AI: How would <a href=\"https:\/\/en.wikipedia.org\/wiki\/Fran%C3%A7ois_Chollet\" target=\"_blank\" rel=\"noopener\">Fran\u00e7ois Chollet<\/a> criticise Paul Jorion&#8217;s manuscript?<\/strong><\/p>\n<p>The [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_crdt_document":"","footnotes":""},"categories":[3,12,415],"tags":[342,321,514,356],"class_list":["post-2128","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-human-complex-systems","category-llm","tag-artificial-general-intelligence","tag-artificial-intelligence","tag-francois-chollet","tag-llm"],"_links":{"self":[{"href":"https:\/\/www.pauljorion.com\/blog_en\/wp-json\/wp\/v2\/posts\/2128","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.pauljorion.com\/blog_en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.pauljorion.com\/blog_en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.pauljorion.com\/blog_en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.pauljorion.com\/blog_en\/wp-json\/wp\/v2\/comments?post=2128"}],"version-history":[{"count":4,"href":"https:\/\/www.pauljorion.com\/blog_en\/wp-json\/wp\/v2\/posts\/2128\/revisions"}],"predecessor-version":[{"id":2132,"href":"https:\/\/www.pauljorion.com\/blog_en\/wp-json\/wp\/v2\/posts\/2128\/revisions\/2132"}],"wp:attachment":[{"href":"https:\/\/www.pauljorion.com\/blog_en\/wp-json\/wp\/v2\/media?parent=2128"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.pauljorion.com\/blog_en\/wp-json\/wp\/v2\/categories?post=2128"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.pauljorion.com\/blog_en\/wp-json\/wp\/v2\/tags?post=2128"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}