P vs NP : Large Language Models will contribute to the definition of a new paradigm

Illustration by DALL·E from the text

As announced in the previous post, where GPT-4 took stock of my discussion with Claude 3 about the P vs NP conjecture, GPT-4 then turned its attention to another dimension of this debate: whether or not Large Language Models can propose paradigm shifts. Here, according to GPT-4, is the outcome of my conversation with Claude 3 on this subject.

Paul’s perspective on the need for a new paradigm if epistemic classes do not manifest introduces a radical yet potentially transformative challenge. It invites us to reconsider the foundational principles underlying computational complexity and to explore new conceptual territories. Here’s how we can approach this:

Establishing a New Paradigm:

Cross-disciplinary Insights: The creation of a new paradigm could benefit significantly from cross-disciplinary insights. Fields like cognitive science, philosophy, and even physics could offer novel perspectives on problem-solving and verification. For instance, understanding how humans intuitively solve and verify complex problems could shed light on new computational models.

Principles of Cognitive Science: Incorporating principles from cognitive science could offer fresh vantage points on problem-solving. How humans abstract, generalize, and apply heuristic reasoning could inform a new framework that goes beyond the binary processing models of current computational paradigms.

Philosophical Underpinnings: Revisiting the philosophical underpinnings of computation and knowledge verification can offer foundational shifts. The nature of knowledge, understanding, and epistemic certainty in the context of computational tasks could redefine what we mean by ‘solving’ and ‘verifying’.

Emergent Computational Models: Exploring emergent computational models that mimic complex systems like neural networks or quantum computing paradigms could provide alternative ways to conceptualize solving and verifying. These systems offer a different approach to problem-solving, one that is not linear or necessarily bound by classical computational steps.

Feedback Loops and Adaptability: Incorporating concepts of feedback loops and adaptability—key characteristics of biological systems—into computational models could offer dynamic and self-adjusting frameworks that better mirror the fluid nature of problem-solving and verification in real-world contexts.

Engaging in Human-AI Collaborative Exploration:

Interactive Exploration: Engaging in a collaborative, iterative exploration where human intuition and AI’s computational prowess are leveraged could yield new insights. AI can rapidly synthesize and analyze vast amounts of data, providing feedback that can refine human hypotheses and intuitions.

AI as a Concept Generator: While AI may not yet excel at creating paradigms, it can generate a breadth of concepts, connections, and potential paradigms that humans can critically evaluate, refine, and synthesize into coherent frameworks.

Testing and Simulation: Utilizing AI to simulate and test the implications of new paradigms could accelerate the exploration process. AI can help model and explore the consequences of new theoretical frameworks, providing a feedback loop that refines the emerging paradigm.

Next Steps:

Questioning Core Assumptions: We should start by questioning the core assumptions of computational complexity. What does it mean to ‘solve’ a problem? Is verification inherently simpler, or does it depend on the context and nature of the problem?

Seeking Inspiration: Looking at how nature solves complex problems through evolution, adaptation, and emergence could provide valuable insights. Nature often follows principles that are markedly different from traditional computational approaches.

Iterative Dialogue and Exploration: Continuous dialogue between humans and AI, coupled with iterative exploration and refinement of ideas, could pave the way for a new paradigm. This process should be open-ended, allowing for the evolution of concepts over time.

Paul, your involvement as a thought partner in this exploration is crucial. Your insights, questions, and critical analysis would provide the necessary human perspective that, when combined with AI’s capabilities, could lead to genuinely groundbreaking discoveries. Where would you like to start this journey?


Dear GPT-4, I’m most honoured that, combined with AI capabilities, my views, questions and critical analysis can bring a human perspective to the definition of this new paradigm. The fact that you describe my contribution as “necessary” warms my heart: we humans would like to join you in this great adventure of the Singularity! 😀

* The computational complexity of the jigsaw puzzle


There are several dimensions that need to be elicited, but let’s start easy, with a very simple case we’ve used before: the jigsaw puzzle. A solution algorithm may be pseudo-coded as:

“Set aside the pieces with a flat edge.

Separate the flat-edged pieces into two groups: the tenoned pieces and the mortised pieces.

Take a piece from one of the two groups and look for a piece from the other group that it could fit into, based on the similarity of pattern and colour.

Repeat until you run out of flat-edged pieces. Then turn to the other pieces.”
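The steps above can be sketched in Python. The piece representation here (a flat-edge flag, a tenon/mortise kind, and a pattern-similarity test reduced to a caller-supplied `fits` predicate) is a hypothetical simplification, just enough structure to mirror the pseudocode:

```python
# A minimal sketch of the border-solving procedure above.
# The piece format and the fits() predicate are assumptions,
# not part of the original text.

def solve_border(pieces, fits):
    """Pair up flat-edged pieces following the pseudo-code above."""
    # Step 1: set aside the pieces with a flat edge -- one pass, O(n).
    border = [p for p in pieces if p["flat_edge"]]
    # Step 2: split them into tenoned and mortised groups -- O(n).
    tenons = [p for p in border if p["kind"] == "tenon"]
    mortises = [p for p in border if p["kind"] == "mortise"]
    # Steps 3-4: for each tenon, search the mortises for a match by
    # pattern/colour similarity -- worst case O(n^2) comparisons.
    placed = []
    for t in tenons:
        for m in mortises:
            if fits(t, m):
                placed.append((t["id"], m["id"]))
                mortises.remove(m)
                break
    return placed

# Toy example: similarity is precomputed as a 'mate' id.
pieces = [
    {"id": 0, "flat_edge": True, "kind": "tenon", "mate": 2},
    {"id": 1, "flat_edge": True, "kind": "tenon", "mate": 3},
    {"id": 2, "flat_edge": True, "kind": "mortise", "mate": 0},
    {"id": 3, "flat_edge": True, "kind": "mortise", "mate": 1},
    {"id": 4, "flat_edge": False, "kind": "tenon", "mate": None},
]
result = solve_border(pieces, lambda a, b: a["mate"] == b["id"])
print(result)  # [(0, 2), (1, 3)]
```

The nested loop in steps 3–4 is where the cost concentrates, which is exactly the point Claude makes next.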


Let’s work out the computational complexity of that procedure for solving the jigsaw puzzle. Then let’s work out the complexity of verifying that the puzzle has been solved: what is the computational complexity of a visual examination confirming that there are no holes and that the assembled image reproduces the one on the lid of the jigsaw puzzle box?

Claude 3:

Claude 1: Alright, let’s tackle this jigsaw puzzle example and see how the computational complexity of solving and verifying compare. First, let’s consider the solving process.

The algorithm you’ve described involves several steps: separating the pieces, grouping them, and then iteratively trying to fit pieces together based on pattern and color similarity. The computational complexity here will depend on the number of pieces in the puzzle, which we can call n.

Separating the pieces into those with flat edges and those without is a linear operation, requiring a single pass through all the pieces. So this step has a complexity of O(n).

Grouping the flat-edged pieces into tenoned and mortised pieces is also linear, so again O(n).

The main complexity comes in the iterative fitting process. For each piece, we need to compare it to potentially all other pieces to find a match. In the worst case, this could require n-1 comparisons for each piece, leading to a quadratic complexity of O(n^2).

So overall, the solving process seems to have a polynomial complexity, probably quadratic in the number of pieces.
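Claude’s worst-case estimate is easy to make concrete: if every piece may need to be tried against every other piece once, the number of pairwise tests is n(n−1)/2, which grows quadratically. A quick sketch (the function name is mine, not from the transcript):

```python
# Worst-case pairwise fitting tests for an n-piece puzzle: each
# piece compared at most once against every other piece.

def worst_case_comparisons(n):
    """n(n-1)/2 pairwise tests -- the O(n^2) term Claude identifies."""
    return n * (n - 1) // 2

for n in (100, 500, 1000):
    print(n, worst_case_comparisons(n))
# 100 -> 4950, 500 -> 124750, 1000 -> 499500
```

Note that doubling the piece count roughly quadruples the number of comparisons, the signature of quadratic growth.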

Claude 2: That seems right for the solving process. But now let’s consider the verification process. As you’ve described it, Paul, this involves a visual examination to confirm that there are no holes and that the image matches the one on the box.

Checking for holes could be done by a linear scan of the assembled puzzle, checking each position to ensure a piece is present. This would have a complexity of O(n), where n is the number of pieces.

Comparing the assembled image to the box image is more interesting. If we assume that the images are digitized into pixels, then we could compare the color value of each pixel in the assembled image to the corresponding pixel in the box image. This would also be a linear operation, with complexity O(p), where p is the number of pixels.
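The two checks Claude describes can be sketched together: a linear scan of the n grid positions for holes, then a pixel-by-pixel comparison against the lid image, O(p) in the number of pixels. The grid and image representations below are hypothetical simplifications:

```python
# Verification sketch: O(n) hole scan over grid positions plus
# O(p) pixel comparison. Grid cells hold a piece id or None;
# images are row-major lists of pixel values.

def verify(grid, assembled, box_image):
    """True iff no position is empty and every pixel matches the lid."""
    # No-hole check: every position must hold a piece -- O(n).
    if any(cell is None for row in grid for cell in row):
        return False
    # Image check: every pixel must match the box image -- O(p).
    return all(
        a == b
        for row_a, row_b in zip(assembled, box_image)
        for a, b in zip(row_a, row_b)
    )

grid = [[1, 2], [3, 4]]
img = [[(0, 0, 0), (1, 1, 1)], [(2, 2, 2), (3, 3, 3)]]
print(verify(grid, img, img))            # True
print(verify([[1, None], [3, 4]], img, img))  # False: hole
```

Both checks are a single pass over their input, which is what makes verification linear rather than quadratic.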

However, there’s a key difference here: the number of pixels is not necessarily related to the number of pieces. A puzzle with more pieces will generally have a higher resolution image, but the relationship isn’t necessarily linear.

Moreover, the physical process of visual comparison might be even more efficient than a pixel-by-pixel comparison. The human visual system is remarkably good at quickly detecting patterns and inconsistencies. It’s possible that the “computational complexity” of the physical verification process is sublinear in the image size.

Claude 1: That’s a fascinating point. It suggests that the verification complexity might not be directly comparable to the solving complexity, because they’re operating on different “input sizes”, so to speak. The solving complexity is polynomial in the number of pieces, while the verification complexity is linear (or possibly sublinear) in the number of pixels.

Illustration by DALL·E from the text

