True story! It’s 1979. I am nine years old, and another little boy says to me, in math class, “You’re so smart!”
But he means the bad kind. I can tell by the scowl, the waggling head, and the falling tone of voice.
This is the first I’ve heard that there is a bad kind of smart!
My fourth grade class, not having the Internet, does not know the word “geek,” so he is improvising. And just to prove him right, I am not anxious about his opinion—I barely know the kid, not exactly my type—but instead I am consumed wondering how he pulled off this effortless alchemy of meaning.
How did he transmute a compliment into an insult, and how did I understand?
Sure, it was the body language, somehow. But how? No one ever taught us to adjust the meaning of “smart” for scowls and head waggles. It’s certainly not in the dictionary, which every nine-year-old knows is where all the meanings live. So how did we both read those tea leaves the same way, to arrive at the same meaning?
Guilty as charged, I was just starting to learn about computers, and the holy grail was “artificial intelligence,” in which a computer would think like a human, in some way that no one could quite articulate. So I was fascinated watching myself think and wondering how a computer could think. It couldn’t really, could it?
The smart people (the good kind) thought they would program a computer with a dictionary of all the words, and then it would understand by logically fitting all the meanings together. But how do you write a dictionary without… words… for something that doesn’t yet know the words? I pondered that all the time, watching myself think, trying to see where my words were kept and what their meanings were written in.
I never found it.
Thirteen years later, studying machine learning at Stanford University in 1992, I thought of something even better.
And it is not about computers. It is about communication itself, about interpreting and contextualizing and projecting, about reading tea leaves and inkblots, even oracles.
It is the story of how symbols get their meanings.
Awareness Recursion Theory (ART)
People understand each other by asking,
If I were that person, in that person’s situation, doing what that person is doing, what would I have to be experiencing?
That’s it! Work out the implications of that—all of them—and you have the entire theory of symbols, meaning, and everything I claimed above.
If you’re underwhelmed, think of deriving Calculus from 1+1=2. Just because I can say it in one sentence, and the starting point is obvious, that does not make it easy or unremarkable. It is quite the opposite, on both counts.
We build working models in our heads of how others behave, and we match them to our own experiences and behavior. We work backwards from their behavior to guess their experience, and we compare to how we would behave if we were having that experience.
The twist? Their experience includes observing us! There’s an entire model of us inside them, and soon we start posing for ourselves in that mirror. If we’re not careful, we’ll forget about them and just interact with our own reflection. It’s alright, happens all the time.
This does not stop at Level Two. If I can fake to the left and then go right—to fool my model of you—then I can fake faking to the left and really go left—to fool my model of you, in your model of me, in my model of you. We can go as deep as we care to, models within models, each pair a mirror, each mirror an opportunity to spy and a desire to look good.
This mutual awareness, running nested models of each other, is “Awareness Recursion.”
Who’s underwhelmed now? 😆
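For the programmers in the room, here is one way to picture the nesting. A minimal sketch with invented names, not a spec:

```python
# A minimal sketch of the nesting: my model of you contains your model of me,
# which contains my model of you, down to some chosen depth.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgentModel:
    name: str                                  # who this is a model of
    depth: int                                 # how many mirrors deep it goes
    model_of_other: Optional["AgentModel"] = None

def build_mutual_models(me: str, them: str, depth: int) -> AgentModel:
    """My model of them, containing their model of me, containing ..."""
    if depth == 0:
        return AgentModel(name=them, depth=0)
    return AgentModel(
        name=them,
        depth=depth,
        model_of_other=build_mutual_models(them, me, depth - 1),
    )

# "My model of you, in your model of me, in my model of you."
my_model_of_you = build_mutual_models("me", "you", depth=3)
```

The point is only the shape: a model inside a model inside a model, each level necessarily blurrier than the last, since we never get to see inside anyone's head.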
I haven’t even thrown words into the mix yet. They don’t change anything. They just make it all more abstract. “Why would I do what he did?” becomes “Why would I say a word in that tone of voice with that head waggle, and btw the word was ‘smart’?” See how small a role the dictionary plays? Until you read a book, that is, in which the whole world is made of words. (Or is it? Who’s the author? What are his beliefs? Etc.)
And we see everyone as an inkblot, because our models aren’t very good. We never get to see inside anyone else’s head, so we have to build them out of bits and pieces of ourselves and feel for every bit of intuition we can muster, wringing every drop of a trace of a hint from those subconscious neural connections. Understanding is never a science, always an ART. Some of us are mildly good at it, others not so much.
Let’s walk through it
I hear my classmate say, “You’re so smart.” The recursion begins:
Why would I say those words, in his position? That is, what would be my inner state if I said that outwardly? I would be complimenting someone. But as I run that through my model of a classmate, there is a huge mismatch: they wouldn’t look or sound like that while complimenting me. Something is off.
What inner state would I have if I scowled like that and waggled my head and dropped my tone? I would be insulting someone. I feed that intention back into my model of him, and it fits. He’s insulting me with the word “smart”!
I could conclude that he misunderstands the word “smart,” that he doesn’t know it is a compliment. I could politely suggest that he pick a real insult instead, maybe “ugly.” Imagine the first generation of “ART-ificial intelligence” naïvely doing this! LOL, not good enough.
A real human easily sees that a classmate is unlikely to be wrong about such a common word: “he misunderstands the word” is a far worse match than “he is using the word intentionally in some other way.” Could his inner state be sarcasm? Nope, that would mean he’s calling me dumb, which is not even close to a vulnerability for me, in fourth grade math class. He would have to be really bad at insults.
See how we’re escalating here? I just constructed his model of what I would feel insecure about, which requires his awareness of how I think others see me. That means I built a model of his model of my model of other fourth graders’ models of me. And I ran his sarcasm through it, and it was a bad match. He would know that everyone knows math is totally my thing and that I’m aware of this—obviously—so to call me dumb like that is just dumb. Even for him. So it can’t be sarcasm.
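If it helps to see that loop spelled out, here is a minimal sketch with made-up names, assuming we can list some candidate inner states and simulate what each one would look like from the outside:

```python
# A sketch of the loop in the walkthrough: propose a candidate inner state,
# run it forward through my model of him to predict his outward behavior,
# and keep the candidate whose prediction best matches what I actually saw.

def infer_inner_state(observed, candidates, my_model_of_him, mismatch):
    """Abductive guess: which inner state best explains the observed behavior?"""
    best_state, best_score = None, float("inf")
    for state in candidates:
        predicted = my_model_of_him(state)      # what he would do if he felt this
        score = mismatch(predicted, observed)   # how far off is that prediction?
        if score < best_score:
            best_state, best_score = state, score
    return best_state

# Toy run: both hypotheses predict the same words, but different faces and tones.
observed = {"words": "you're so smart", "face": "scowl", "tone": "falling"}

def my_model_of_him(state):
    return {
        "complimenting": {"words": "you're so smart", "face": "smile", "tone": "warm"},
        "insulting":     {"words": "you're so smart", "face": "scowl", "tone": "falling"},
    }[state]

mismatch = lambda predicted, seen: sum(predicted[k] != seen[k] for k in predicted)

print(infer_inner_state(observed, ["complimenting", "insulting"], my_model_of_him, mismatch))
# -> insulting
```

The hard part, of course, is where the candidate states and the model of him come from; that is the whole ART story.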
That’s enough detail, so I’ll skip to the end:
He feels insecure about math (his inner state) and has probably just heard me praised by the teacher for giving some know-it-all answer about cross-multiplication (my outer state), so his model of me contains a hideous reflection of himself. That’s projection: guessing my inner state of being proud about math and then applying it to his model of me looking at him.
Plus, with fourth grade on the cusp of coolness, he is probably noticing that I don’t have that, and it’s got something to do with being good at math instead of baseball (“smart”). None of us fourth graders are able to articulate this, but it is brewing all around and beginning to creep into all of our inner states, culminating in the word “geek” two years later.
If only he knew, I couldn’t have cared less. His reflection was fine, just another kid who hated math and would always be better than me at baseball.
Where this leads
A non-exhaustive list of things that become clear if you ponder ART long enough:
Body language is not a primitive form of words. Words are an abstract form of body language.
Words are the vehicle; meaning is the payload. They are two separate things, though many pairings are so common that we see them as one.
Intelligent communication includes improvising new meanings through introspection and awareness of the other, which is a process, not a dictionary or a data set.
We can only communicate with things we can relate to, on the level that we relate to them.
Culture is not the things most people believe. It is the things most people believe most people believe.
Propaganda can succeed by changing beliefs about what others believe, even without changing anyone’s own beliefs. This is a top reason why they gaslight us.
This process can be programmed into machines, but it requires a radically different approach than anything we’re doing now. Mammals seem to share some of this programming, as do certain other animals. Our pets are bred for it.
Mirror neurons seem to be involved. I first heard about them maybe a decade after formulating ART, but I’ve never read up on them. The little bit I know resembles the training algorithms we would end up running if we built a neural network based on ART. The training would run constantly, by the way, not as an up-front process.
Speaking of math
All of this can be expressed as a mathematical framework for systems of inputs and outputs that can “put themselves in each other’s shoes.”
A simple case:
G(f(y)) - y = d,
where
y is “his” outer state,
f is my inversion of his model,
f(y) is then my guess of his inner state,
G is my model of myself,
G(f(y)) is what I think I would do in his situation,
d is how I think my behavior would differ from his.
Eigenstates of functional composition are what we end up looking at, in great detail. When d is zero, the composition of G and f gives you back whatever you put into it; d measures how far we are from that eigenstate. It’s not very interesting with only two functions, but that is just the beginning.
Ironically, I don’t know enough math to take this very far.
But a lot could be learned from simple computer simulations. I made one in grad school, in which a genetic population of robots evolved a common language of pixel patterns interpreted as ice cream flavors. Yeah, you had to be there. Maybe I’ll find an interesting way to share it.
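In the meantime, here is a toy version of the formula to poke at. The states are plain numbers and the functions are stand-ins I made up, not models of anyone in particular; a minimal sketch, nothing more:

```python
# A toy numeric version of G(f(y)) - y = d.
# States are plain numbers; f and G are invented stand-ins, not real models.

def f(y):
    """My inversion of his model: guess his inner state from his outer state."""
    return (y - 1.0) / 2.0            # pretend his behavior is inner_state * 2 + 1

def G(inner_state):
    """My model of myself: what I would do, given that inner state."""
    return inner_state * 2.0 + 1.5    # I behave a little differently than he does

def d(y):
    """How far the pair (G, f) is from giving back what was put in."""
    return G(f(y)) - y

for y in [0.0, 1.0, 5.0]:
    print("outer state", y, "-> d =", round(d(y), 2))

# Every d comes out +0.5: my self-model and my inversion of his model disagree
# by a constant. If they lined up, d would be 0 for every y, and every outer
# state would be an eigenstate (a fixed point) of the composition of G and f.
```

Swap in richer states, learned models for f and G, and more agents, and you have the beginnings of that kind of simulation.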
Manifesto
Awareness Recursion Theory is how communication works, how symbols acquire meaning, how body language speaks, how animal sounds emote: what would be my inner state if I were using those words, making those moves, uttering those sounds? Invert the models, theirs and ours, compare and contrast. Meaning does not come from a dictionary, though it’s a convenient place to save definitions for next time.
With ART, AI might have a shot, rising from its own ashes as Artificial Inkblots. LLMs (ChatGPT et al.) are excellent at language but terrible at intelligence. Use them for linguistic heavy lifting—reductions, filters, mashups—always double-check their claims, and never play Twenty Questions with them! It’s not what they do.
This is much more than game theory. It is the theory of the players conceiving of the game and having a reason to play it, giving meaning to its tokens, even “playing games” with whether or not they are playing—committing, withdrawing, buying time, pulling out, “Oh, is it my turn? I didn’t hear you.” To win the game, we need to be the last mirror, to have the deeper strategy, and be right about it! Or go random, which cannot be modeled.
ART underpins not only communication, symbolic meaning, and body language, but also cultural messaging, propaganda, gaslighting, and projection—mutual nested inversions in real time. It is how the market stays irrational, a bill becomes a law, and the law becomes an ass.
The death knell of traditional AI is that the meaning was never in the words. It is carried by the words, but parsing words to deduce meaning is like parsing cargo ships to deduce cargo. There are strong patterns we can lean on, but we’ll never be certain what’s inside the boxes. We can only second-guess the context, and we are easily fooled. We have to open the boxes.
We have to look in the mirrors, and there has to be someone looking back. Can a machine ever be that someone? Yes, I think it can. No, it will not have a soul or a consciousness. But it can know its own model and be fluent at embedding it in others. It will not seem human, but it will be genuinely smart (the good kind).
I’ve been watching Awareness Recursion in every crack and cranny of life for over thirty years. It is practical and entertaining. It illuminates manipulative behavior, in person or en masse. Try looking through its lens. Watch out for mirrors!
The good kind of smart
… is choosing CrowdHealth!
Join the crowd-funded community whose members believe that you believe that they will pay your large medical bills. And you believe it too. And everyone knows that everyone knows. Because it’s written down. In words. That they wrote knowing you would read them…