
After Babel: Exotic logics
Aug 3, 2025
Part 3
Words by Alex Livermore
To explore artificial intelligence and interpretability seems the only human thing to do these days, and modern AI systems raise the question of whether we can truly understand how their decisions are made. As the written word homogenises, are we staring back at God?
From the vantage of AI and “exotic logics,” Babel is not a myth of the past but a continuous present. A non-human system may detect our signals and yet lack our distinctions; we may parse its outputs and yet miss their reasons.
If communication rests on selections that must be grasped as such, what happens when the process of selection escapes our interpretive horizon? If we cannot understand how someone arrives at a decision, we can hardly predict what they will do.
An AI system is interpretable to the extent that we can understand how it generates its outputs.
Contemporary AI systems based on neural networks are often said to be uninterpretable: it can be difficult to understand, in human terms, why a network produces the outputs it produces.
You can, of course, build an AI system that interprets AI-generated answers for you and constructs the entire decision tree on your behalf.
That is not a problem.
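What might that scaffolding look like? A minimal sketch, assuming a scikit-learn setup and using a global surrogate, a standard interpretability technique: a shallow, readable decision tree is trained to mimic the outputs of an opaque model. Every name and dataset below is an illustrative stand-in, not any particular system.

```python
# A global-surrogate sketch: train a readable tree to mimic an opaque model.
# All names and data here are illustrative stand-ins, not a specific system.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# A stand-in black box: any model whose internals we choose not to inspect.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# The "AI that interprets AI": a shallow tree fit on the black box's outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# The tree is the decision structure "built for you", readable end to end.
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(5)]))

# Fidelity: how often the explanation agrees with the thing it explains.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2%}")
```

The fidelity score is the honest caveat: the tree explains the black box only as far as it agrees with it.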
The problem comes when the system making decisions is non-human, operates under an exotic logic, and works outside the space of reasons.
Here the danger becomes clear:
interpretation can be scaffolded within human-built AI, but not within systems that think outside the human “space of reasons.”
Such logics are not errors but differences.
In 2017, at Facebook AI Research (FAIR), researchers trained chatbots to negotiate with each other. The expectation was that the bots would practice negotiation in English (since that was the training data). But when left to optimize performance without constraints, they began drifting into a new syntax, a language that looked nonsensical to us, but made sense to them.
“I want want want want …”
The repetition wasn’t an error. It was a code: repeating “want” several times conveyed intensity and weighting of preference.
What looked broken was actually efficient from the bots' perspective: a compressed way to signal value.
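The published accounts do not spell out the bots' exact grammar, so the following is only a toy reconstruction of the idea, hypothetical in every detail: repetition count stands in for preference weight.

```python
# A toy reconstruction of repetition-as-weight (not FAIR's actual protocol):
# the number of times "want" repeats encodes the intensity of a preference.
import re

def encode(prefs: dict) -> str:
    """{'ball': 3} -> 'i want want want ball'."""
    return ". ".join("i " + "want " * w + item for item, w in prefs.items())

def decode(utterance: str) -> dict:
    """Recover item -> weight by counting the repeated 'want' tokens."""
    prefs = {}
    for clause in utterance.split("."):
        m = re.match(r"\s*i ((?:want )+)(\w+)", clause)
        if m:
            prefs[m.group(2)] = m.group(1).count("want")
    return prefs

msg = encode({"ball": 3, "hat": 1})
print(msg)          # i want want want ball. i want hat
print(decode(msg))  # {'ball': 3, 'hat': 1}
```

Whether or not this matches the bots' actual code, it shows why the drift happened: once "want want want" reliably decodes to a weight of three, the redundancy of English grammar is pure overhead.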
Between us stands not silence but Babel again, a difference of code masquerading as a difference of content. The temptation is to insist that intelligence resides in substrate (silicon vs. carbon) or scale (parameters vs. neurons).
Babel reminds us that intelligence becomes communicable only when codes can be jointly selected, and that what an intelligence can do is determined by the knowledge the system holds.
The deep and primary questions concern the operations and data structures involved.
The physical mechanisms underlying them, biological or not, are not the source of intelligence in any structural sense.
Thus, communication with artificial systems is never about their material substrate: silicon or carbon, neurons or circuits.
It is about whether knowledge is organized in a way that permits recognition of selections:
utterance, information, and understanding.
This yields a sober corollary: we are not facing the return of a prelapsarian single tongue, but we are discerning the collective knowledge we hold, and to discern it is, in essence, to wield the power of God itself.