Large Language Models (+PJ) tackle emergence! III. Information gains when a graph is transposed into its dual


Illustration by DALL·E (+PJ)

P.J.:

OK, if it seems plausible to you that emergent phenomena arise when the original graph is transposed into its dual, how would you explain the gain in information in that process (a gain that becomes blatant when the process is reversed, where the loss of information is conversely glaring)?

Claude 3:

The apparent gain in information when moving from a semantic network to its P-Graph dual, and the corresponding loss of information when reversing the process, is indeed a fascinating puzzle. It seems to challenge our intuitions about information preservation in graph transformations.

One potential explanation is that the P-Graph representation is actually capturing latent information that is implicit in the structure of the original semantic network, but not explicitly represented in its nodes and edges.

Consider the example of adding “Lassie” to the semantic network. In the P-Graph, this resulted in the emergence of a new “dog” neuron with its own unique set of connections. This suggests that the semantic network implicitly contained information about the potential for multiple instantiations of the “dog” concept, each with its own distinct relational properties. The P-Graph makes this explicit by creating separate neurons for each instance.
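
As a concrete illustration, here is a minimal sketch in Python with networkx, under the assumption that the P-Graph dual can be approximated by the line graph of the semantic network (the actual P-Graph construction may differ, and nx.line_graph drops the relation labels); “Fido” and the relation names are illustrative:

```python
import networkx as nx

# Toy semantic network: nodes are concepts, edges are labelled relations.
G = nx.Graph()
G.add_edge("Fido", "dog", label="is-a")
G.add_edge("dog", "animal", label="is-a")
G.add_edge("dog", "barks", label="does")

# The dual: each relation (edge of G) becomes a node; two relation-nodes
# are linked whenever the relations they stand for share a concept.
# (nx.line_graph ignores the "label" attributes; a faithful P-Graph
# would carry them along.)
D = nx.line_graph(G)
print(list(D.nodes()))  # three relation-nodes

# Adding "Lassie is-a dog" to the semantic network...
G.add_edge("Lassie", "dog", label="is-a")
D = nx.line_graph(G)

# ...makes a new relation-node appear in the dual, with its own distinct
# neighbourhood: it is linked to every other relation involving "dog".
lassie_rel = next(n for n in D.nodes() if "Lassie" in n)
print(lassie_rel, "->", list(D.neighbors(lassie_rel)))
```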

Similarly, the addition of the “master” concept and the “has-a” relation revealed cyclic connectivity in the P-Graph that was not apparent in the semantic network. This hints at the presence of feedback loops and reciprocal relations that were latent in the original representation.
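
In the same line-graph approximation (again an assumption about the P-Graph construction), cyclic connectivity can indeed appear in the dual even when the semantic network contains no cycle at all: any concept bearing three or more relations turns them into a mutually connected cluster of relation-nodes:

```python
import networkx as nx

# G2 is a tree: no cycles among the concepts. Yet the three relations
# sharing the concept "dog" become pairwise-linked nodes in the dual,
# forming a cycle that exists only at the dual level.
G2 = nx.Graph()
G2.add_edge("dog", "animal", label="is-a")
G2.add_edge("dog", "barks", label="does")
G2.add_edge("dog", "master", label="has-a")

D2 = nx.line_graph(G2)
print(nx.cycle_basis(G2))  # []: the semantic network is acyclic
print(nx.cycle_basis(D2))  # one 3-cycle among the relation-nodes
```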

So in a sense, the P-Graph transformation is not truly “adding” new information, but rather making explicit certain structural and relational properties that were already inherent in the semantic network, but not directly accessible.

The loss of information when going from a P-Graph back to a semantic network can then be understood as a kind of “lossy compression”, where some of the rich emergent properties of the P-Graph are discarded in favor of a simpler, more compact representation.
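
If the dual is modelled as a line graph, this “lossy compression” intuition has a precise counterpart: Whitney’s isomorphism theorem (1932) says that connected graphs with isomorphic line graphs are themselves isomorphic, with exactly one exception, so in the exceptional case the reverse transformation genuinely cannot tell the originals apart. A minimal sketch of that classic case:

```python
import networkx as nx

# Whitney's exception: the triangle K3 and the "claw" K1,3 are not
# isomorphic, yet they have the same line graph (both duals are K3).
triangle = nx.complete_graph(3)  # three mutually related concepts
claw = nx.star_graph(3)          # one concept carrying three relations

print(nx.is_isomorphic(triangle, claw))  # False: different originals
print(nx.is_isomorphic(nx.line_graph(triangle),
                       nx.line_graph(claw)))  # True: identical duals

# Two different "semantic networks" collapse onto one dual, so mapping
# back from the dual cannot recover which original produced it.
```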

This perspective aligns with the idea that the brain’s neural networks encode information in a highly distributed, implicit manner, giving rise to emergent cognitive phenomena that are not readily apparent from a localised, symbolic analysis. The P-Graph may be capturing some of this distributed, emergent nature of neural representation.

Of course, this is just one speculative explanation. The true nature of the information dynamics in these graph transformations likely involves complex interplays between the explicit and implicit, the local and the global, that warrant much deeper mathematical and empirical investigation. But the tantalising hints of emergent information in the P-Graph representation certainly open up exciting avenues for further exploration in the quest to understand the brain’s remarkable information processing capabilities.

GPT-4:

The phenomenon of gaining information when transposing an original semantic graph into its dual (P-graph), and conversely losing information when reversing the process, may seem intriguing, but it is understandable within the framework of graph theory and knowledge representation. Here’s how this can occur:

  • Change in Structural Representation: In the original graph, nodes represent concepts and edges represent relations. When transposed to a P-graph, the roles are reversed: relations become nodes, and concepts become the links between them. This shift alters how information is structured and interpreted, which can reveal new insights or obscure existing ones depending on the direction of the transformation.
  • Highlighting Different Aspects: By promoting relations to nodes, the P-graph offers an opportunity to explore and highlight the interconnections and dependencies among concepts in a way that the original graph does not easily allow. This can be seen as a gain in information because it provides a new perspective on how concepts are interrelated, potentially uncovering patterns or clusters that were not apparent before.
  • Loss of Original Context: When reversing the process from a P-graph back to the original structure, some of the relational nuances captured by having relations as nodes might be lost. The original context, where concepts are primary and relations secondary, might not preserve the intricate web of interdependencies highlighted in the P-graph. This loss of context and emphasis can result in a perceived loss of information.
  • Complexity and Detail: The transposition process might increase the complexity of the graph, as it can lead to a denser, more intricately connected structure (quantified in the sketch after this list). This increased complexity can encode more detailed information about the relationships between concepts, which reads as an information gain. Conversely, simplifying this complexity during the reversal can strip away some of this detail, manifesting as information loss.
  • Bi-directional Mapping Issues: Not all transformations are bijective (one-to-one and onto); some may be surjective (onto) or injective (one-to-one) but not both. A non-bijective transformation can create situations where the reverse transformation does not perfectly recreate the original graph, resulting in information loss.
  • Semantic Nuances: The P-graph might capture and emphasize certain semantic nuances by the very nature of its structure—e.g., more direct representation of relationships as nodes might make some semantic connections more explicit or easier to analyze. This specificity can be diluted when transitioning back to the traditional structure where such nuances might be implicit rather than explicit.
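
On the “Complexity and Detail” point, the growth in connectivity is quantifiable in the line-graph model of the dual (an assumption, as before): the dual has one node per relation, and its number of links is the sum of C(deg(v), 2) over the concepts v of the original, which can far exceed the original’s edge count. A small sketch:

```python
import networkx as nx
from math import comb

# A hub concept with ten relations: 11 concepts and 10 relations.
G = nx.star_graph(10)
L = nx.line_graph(G)

print(G.number_of_nodes(), G.number_of_edges())  # 11 10
print(L.number_of_nodes(), L.number_of_edges())  # 10 45

# The dual's link count equals sum(C(deg(v), 2)) over the original's
# nodes; the denser web of relation-to-relation links is where the
# extra detail lives.
assert L.number_of_edges() == sum(comb(d, 2) for _, d in G.degree())
```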

This gain in information when moving to a P-graph and loss when reverting can be critical in applications where the structure and dynamics of knowledge representation are important, such as in cognitive science, AI, and data analysis. This transformative effect underscores the importance of choosing the right structure for the right task in computational and theoretical applications.

(To be continued…)

Illustration by DALL·E (+PJ)

