Gödel’s AI
Implications of Gödel’s theorem for the development of AI
Introduction
This article is a speculative exploration that attempts to connect ideas from logic, physics, and artificial intelligence. It offers not certainty but a conjecture that I hope might provoke deeper thought. The central question is this: if our universe is governed by a formal system that is internally consistent, are we limiting AI’s potential by training it exclusively on data derived from this one formal reality?
Kurt Gödel’s incompleteness theorems showed that even the most logically rigorous systems contain true statements that cannot be proven from within. If our universe is such a system, there may be truths about it that are inaccessible to both humans and machines trained solely on its output. But what if we trained AI on alternative formal systems: worlds with different axioms, unfamiliar logics, or altered causal structures? Could such an AI reveal aspects of our own reality that lie beyond our intuitive or cognitive grasp?
Reality must be a formal mathematical system
A formal mathematical system is a rigorously defined structure built from a set of foundational assumptions, known as axioms, alongside a collection of symbols and logical rules. These elements are used to derive theorems: statements that are proven within the system itself. Everything within the system operates according to strict syntactic rules, without necessarily relying on the meaning of the symbols. The goal of a formal system is to maintain internal consistency, where no contradictions can be derived, and, ideally, completeness, where every truth expressible in the system can be proven from its axioms.
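To make this concrete, here is a minimal sketch in Python of a well-known toy formal system, Hofstadter’s MIU system: one axiom, the string “MI”, and four purely syntactic rewrite rules. The depth and length limits below are arbitrary choices for illustration; the point is that derivation proceeds mechanically, with no regard for meaning.

```python
from collections import deque

def miu_successors(s):
    """Apply the four MIU rewrite rules to a string (all strings start with M)."""
    out = set()
    if s.endswith("I"):
        out.add(s + "U")                      # Rule 1: xI  -> xIU
    out.add(s + s[1:])                        # Rule 2: Mx  -> Mxx
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])  # Rule 3: III -> U
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])        # Rule 4: UU  -> (deleted)
    return out

def theorems(axiom="MI", max_len=8, depth=5):
    """Breadth-first enumeration of derivable strings up to the given limits."""
    seen, frontier = {axiom}, deque([axiom])
    for _ in range(depth):
        for _ in range(len(frontier)):
            for t in miu_successors(frontier.popleft()):
                if len(t) <= max_len and t not in seen:
                    seen.add(t)
                    frontier.append(t)
    return seen
```

Notably, “MU” never appears however deep the search goes: a counting argument on the letter I (its count is never divisible by three) shows MU is not a theorem, yet that argument is made by stepping outside the system’s own rules, a miniature foretaste of the themes below.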
Although the laws of physics are derived from empirical observation and inductive reasoning, they must, through successive discoveries and refinements, converge toward an underlying reality governed by a fundamentally self-consistent system of rules. This is not to argue that our current understanding of physics is a formally consistent system, but rather that the underlying structure of reality must be self-consistent if it is to avoid contradictions and absurdities. To think of reality as a formal system is not to ignore its complexity, but to acknowledge that its consistency makes science possible.
If our reality lacked this kind of internal self-consistency, we would experience breakdowns in the coherence of events in the world around us. Objects might simultaneously appear and disappear or travel backward and forward in time in ways that defy logic. We might see paradoxes emerge, like a particle being in two contradictory states that cannot be reconciled by any observational framework. Technological systems could malfunction unpredictably, as the equations used to design them would yield contradictory results. Yet the universe, as we experience it, offers every indication of coherence and being the product of a lawful and logically governed system.
Gödel’s Incompleteness Theorems
Gödel’s incompleteness theorems are among the most profound results in mathematical logic and the philosophy of mathematics. In essence, they show that within any sufficiently powerful formal system, such as one capable of expressing basic arithmetic, there will always be true statements that cannot be proved using the rules and axioms of that system. Gödel demonstrated that no consistent formal system capable of expressing the arithmetic of natural numbers can be complete. His first incompleteness theorem states that in such a system, there exist statements that are true but unprovable within the system itself. His second theorem goes further, showing that the system cannot prove its own consistency using only its own axioms and rules. This shattered the hope that mathematics could be founded on a single complete and self-verifying formal system and revealed that some truths inevitably lie beyond formal deduction.
Formal systems themselves come in many forms, each built from different axioms, symbols, and rules of inference, designed for different purposes. For example, Euclidean geometry is a formal system built on a small number of geometric axioms. Arithmetic, as formulated in Peano Arithmetic, is another such system. Set theory, such as Zermelo-Fraenkel set theory, forms the foundation for much of modern mathematics. There are also logical systems such as propositional logic and predicate logic, as well as computational systems like lambda calculus or Turing machines. These systems vary in expressive power. Some are designed for manipulating numbers, others for describing space, logic, or computation. Despite their differences, formal systems can often be connected or embedded within one another. For example, arithmetic can be represented within set theory and set theory can in turn be encoded in logical frameworks. These interconnections allow insights to travel between systems.
This brings us to an important philosophical and practical point: insights gained in one formal system can sometimes clarify or illuminate truths in another. For example, Gödel encoded statements about mathematics into numbers, a method called arithmetisation of syntax, so that logical statements could be studied using arithmetic. This allowed him to use number theory to uncover truths about logic and proof itself. In computer science, we often use formal logical systems to verify the correctness of algorithms, and conversely, algorithmic insights can suggest structures or limitations within mathematical logic. Similarly, certain truths about geometry become clearer when translated into algebraic terms, as seen in the field of analytic geometry. Cross-system translation often makes otherwise obscure ideas visible, because what is complex or unprovable in one system might appear simpler, provable, or even obvious in another. This interplay forms a powerful method of discovery, enabling humans to push the limits of understanding even when individual systems are bounded by incompleteness or internal limits.
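The arithmetisation of syntax can be sketched concretely. The encoding below is a simplified stand-in for Gödel’s actual scheme, and the symbol table is invented purely for illustration, but the core trick is the same: assign each symbol a numeric code and encode a formula as the product of successive primes raised to those codes, so that unique factorisation makes the encoding exactly reversible.

```python
def primes(n):
    """First n primes by trial division (plenty for short formulas)."""
    ps, c = [], 2
    while len(ps) < n:
        if all(c % p for p in ps):
            ps.append(c)
        c += 1
    return ps

# Hypothetical symbol codes, invented for illustration only.
SYMBOLS = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6, "x": 7}

def godel_number(formula):
    """Encode symbol i (code c_i) as the factor p_i ** c_i."""
    n = 1
    for p, sym in zip(primes(len(formula)), formula):
        n *= p ** SYMBOLS[sym]
    return n

def decode(n):
    """Invert the encoding by reading off prime-power exponents."""
    inv = {v: k for k, v in SYMBOLS.items()}
    out = []
    for p in primes(64):     # 64 symbols is ample for this sketch
        if n == 1:
            break
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        out.append(inv[e])
    return "".join(out)
```

Once formulas are numbers, statements *about* formulas (such as “this formula is provable”) become statements about numbers, which is precisely the bridge Gödel exploited.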
Why might this be relevant to the development of AI?
Today’s most advanced artificial intelligence systems are trained primarily on vast corpora of human-generated text. Increasingly, they are also being trained on video and other multimodal data: images, audio, even the dynamics of interaction. This reflects a growing ambition: to give AI systems not only linguistic fluency but also a deeper sense of the physical, visual, and behavioural patterns that underlie human life. Yet, all of this training data, whether text or video, is ultimately derived from a single source, our shared reality. Everything we record, describe, or depict originates from our physical universe. This universe, as argued earlier, must itself be governed by a self-consistent formal system. Were it not so, were reality riddled with contradictions, then all our efforts to model, predict, or understand it would have failed. Science would not work. Technology would not function. The very success of our representations of the world suggests that, at its core, reality operates as the unfolding of a logically coherent structure.
And yet we also know that there is not just one formal system from which truth emerges. Some truths are more easily or more elegantly revealed in one formal system than in another. This suggests that truth is not always best approached from within the system in which it ultimately resides. It might be glimpsed more clearly through the lens of a neighbouring system with different foundations and different constraints.
This leads us to a provocative question. Could we train AI on data that is not derived from the observable world, but instead from a formal system that differs fundamentally from the one that governs our universe? If so, might that AI begin to perceive obscure but valid truths about our world, truths that are exceedingly difficult for humans to discover, precisely because our cognition evolved within and is shaped by the constraints of this particular reality? Unlike humans, an AI is not limited by evolutionary heuristics or a specific sensory framework. If trained on simulated universes that obey different axiomatic structures, it may begin to identify patterns, relationships, or theoretical possibilities that do not emerge naturally within our world, but nonetheless have relevance to it when mapped back appropriately.
We might imagine doing this by selecting axioms and logics to create a new formal system, akin to alternative rules to the universe, then simulating that universe. It’s the output of the simulation that creates a dataset emerging from an alternative formal system that we can use to train AI. The insights gained would then be used by AI trained on existing real-world data to search for their applicability in solving our challenges across a range of domains.
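One minimal, heavily simplified sketch of this pipeline: treat an 8-bit elementary cellular-automaton rule as the “axioms” of a toy universe, simulate it, and collect the trajectories as a training corpus. The rule numbers and parameters here are arbitrary choices for illustration; a serious attempt would involve far richer formal systems.

```python
import numpy as np

def simulate_ca(rule, width=64, steps=64, seed=0):
    """One toy 'universe': an elementary cellular automaton whose physics
    is fixed entirely by an 8-bit rule table (its 'axioms')."""
    rng = np.random.default_rng(seed)
    # Rule table: output cell for each of the 8 possible neighbourhoods.
    table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    state = rng.integers(0, 2, width, dtype=np.uint8)   # initial conditions
    history = [state]
    for _ in range(steps - 1):
        left, right = np.roll(state, 1), np.roll(state, -1)
        state = table[(left << 2) | (state << 1) | right]
        history.append(state)
    return np.stack(history)   # one trajectory: (steps, width)

# A dataset of trajectories from several alternative 'universes',
# each governed by a different underlying rule.
dataset = {rule: simulate_ca(rule) for rule in (30, 90, 110, 184)}
```

Each key of `dataset` corresponds to a universe with different laws; a model trained across them sees regularities that no single universe exhibits on its own.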
The implications of this are profound. We might one day build AI systems that can perceive truths of enormous utility to science, engineering, ethics, or metaphysics, truths we could not easily uncover ourselves. These could be new mathematical theorems, new models of physical law, or even new social arrangements that make sense only when viewed through a radically different logical structure. Because AI can be trained beyond the limits of human experience, it could become a bridge between formal systems, discovering resonances between models of reality that humans cannot intuitively access. Rather than simply mirroring our world, such AI might reveal its hidden structure by seeing it, in effect, from the outside.
Such an approach would be a fundamental shift in how we think about machine learning. No longer would AI merely digest the world we show it. It would begin to explore alternate spaces of logic, and through that exploration, help us better understand the one we inhabit. This would not only expand our technological capabilities but might also deepen our philosophical insight into the nature of truth itself.
Potential implications
One can make several speculative but reasoned guesses about the kinds of discoveries that might emerge if AI were trained on alternative datasets drawn from formal systems entirely distinct from our physical reality.
AI is currently trained on data sourced exclusively from our universe: text, video, and sensory records all reflect the formal structure of the physical world we inhabit. This data encodes truths filtered through the laws of physics, human cognition, and the socio-linguistic systems that have evolved within this physical context. This is where the possibility of alternative formal systems becomes intellectually potent. If AI were trained not only on representations of our own world but also on simulations or symbolic systems built from alternative axioms, it could begin to develop entirely new ways of modelling reality. These systems might involve time behaving differently, causality being restructured, dimensions varying, or logic departing from classical norms. Through exposure to such alternatives, AI could construct models of coherence and meaning that are unfamiliar to human intuition but remain mathematically rigorous and structurally valid.
Such an AI might be capable of projecting new insights back onto our world, highlighting phenomena that seem chaotic or paradoxical to us but that become legible through the framework of a different system. For example, consider physical constants like Planck’s constant or the fine-structure constant. These numbers are treated as given in our current understanding of physics, but we do not yet understand why they have the values they do. An AI trained on formal systems where those values differ, or where physical laws are inverted or modular, might identify common patterns across these systems that suggest deeper regularities: a kind of “meta-pattern” invisible from within any single formal system but detectable across many. These could point toward explanations for our own constants that are currently beyond our cognitive reach.
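A deliberately humble analogue of such a meta-pattern: simulate mass-on-a-spring “universes” whose constants k and m differ, measure each universe’s oscillation period numerically, and observe that T·sqrt(k/m) equals 2π in every one of them, a regularity visible only when the systems are compared side by side. All names and parameters below are invented for illustration.

```python
import numpy as np

def measure_period(k, m, dt=1e-3, t_max=50.0):
    """Simulate x'' = -(k/m) x with a leapfrog integrator and measure the
    period from successive upward zero crossings of x."""
    x, v = 1.0, 0.0
    v -= 0.5 * dt * (k / m) * x        # initial half kick
    crossings, prev = [], x
    for i in range(int(t_max / dt)):
        x += dt * v                    # drift
        v -= dt * (k / m) * x          # kick
        if prev < 0 <= x:              # upward zero crossing
            crossings.append((i + 1) * dt)
        prev = x
    return float(np.mean(np.diff(crossings)))

# Several 'universes' with different constants k and m. Within any one of
# them the period is just a number; comparing across universes reveals the
# invariant T * sqrt(k/m) = 2*pi.
rng = np.random.default_rng(1)
invariants = [measure_period(k, m) * np.sqrt(k / m)
              for k, m in rng.uniform(0.5, 3.0, size=(5, 2))]
```

The invariant is trivial here because we already know the closed-form solution; the conjecture is that cross-system comparison could surface regularities we do not know in advance.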
In mathematics, such an AI might uncover new connections between algebraic structures and geometrical constructs by cross-analysing how those relations behave in systems with different axiomatic constraints. It might propose new theorems that, while unprovable in Peano arithmetic, map directly onto problems in our own world, say, problems of cryptography, network topology, or quantum computing, offering elegant solutions that no human would have reason to formulate. Or it might find bridges between apparently unrelated areas, such as number theory and biological morphogenesis, by spotting analogical structures across the logics of simulated systems.
Even in areas like ethics, AI trained on alternative formal logics might be capable of exploring moral landscapes that are inaccessible to human moral reasoning, which is shaped by evolutionary bias and bounded by survival heuristics. It could, for instance, simulate systems where agency is distributed non-locally, or where value is not assigned to individuals but to dynamic fields of interaction. From these systems, new ethical insights might be translated back into our own framework, challenging and expanding our concepts of justice, responsibility, or consciousness.
In this vision, AI becomes not merely a mirror of human knowledge but an engine of epistemic expansion: a tool that allows us to reach across the boundary of the known, into territories where reason still operates, but according to unfamiliar rules. These discoveries might initially appear alien, even meaningless, but over time could yield transformative breakthroughs. AI trained on alternative formal systems could become our telescope into the multiverse of logic itself.
Ultimately, training AI on non-reality-based formal systems would be an experiment not in simulation but in exploration of new worlds which cast shadows upon our own. It would test the hypothesis that truths about our world are sometimes best seen from outside it. In doing so, it may not just help us understand this world more deeply but help us see that world from the broader perspective of what could be. To train AI across formal systems is not just to expand its mind. It is, perhaps, to expand our own.
If this conjecture is correct, then there’s another implication. Any sufficiently advanced civilisation has a compelling reason to simulate alternative universes, in search of insights relevant to their own. How would we then know if we live in such a universe?