“I was shadow-boxing earlier in the day—I figured I was ready for Cassius Clay.”
—Bob Dylan, “I Shall Be Free No. 10”
The human self is in a constant state of tension because we seek two goals. The first is to know reality and to understand ourselves accurately—we can call this truth-seeking. The second is the goal of feeling good about ourselves, of feeling we are people of worth, status, and esteem—we might call this goal self-enhancement.
The conflict between truth-seeking and self-enhancement can be seen in much of classic social psychology. For example, we have the self-serving bias, where individuals tend to take credit for success but blame the situation for failure. As the legendary social psychologist Fritz Heider noted, a construction worker can hit his thumb with a hammer and blame the hammer rather than himself. Another example of self-enhancement is the better-than-average effect, where people, on average, think they’re better than average. In other words, when there is a standoff between truth and enhancement, enhancement usually wins—or at least puts up a good fight.
“I have done that,” says my memory.
“I could not have done that,” says my pride, and remains implacable.
Eventually—memory yields.
—Friedrich Nietzsche, Beyond Good and Evil, Aphorism 68
It’s difficult to just get to the truth. Experimental science was developed as a strategy for removing self-enhancement, or ego, from the process of discovery. The idea was that by using logic and replicable methods, an experimenter could establish a truth. The finding would then be replicated by other experimenters in other laboratories. Over time, a stable body of knowledge would develop.
But an older strategy for beating back the forces of self-enhancement to get closer to truth was intentional conversation, or dialogue—dia (through) + logos (word, reason). The beauty of dialogue is that the two parties can balance each other’s tendency toward self-enhancement. If I’m having a deep discussion about personality psychology with one of my colleagues and I start saying dumb stuff, my dialogue partner will say something like “Dude…”—with a certain inflection that means, “Dude, you’re kind of being an idiot; let’s keep it real.” I’ll do the same for them.
By keeping each other’s egos in check and compounding each other’s strengths, we can dialogue our way toward truth. As the kids say, iron sharpens iron.
Many of our great pre-scientific texts are dialogues—Plato’s Socratic dialogues are perhaps the most famous—but we also learn from the Buddha in dialogue, from Jesus in dialogue, from Krishna in dialogue, and from Hermes Trismegistus in dialogue. It’s a great format for revealing the truth.
Modern podcasting, when done right, is essentially a public-facing dialogue where the audience can see truth emerge through the interaction of two individuals.
So here’s where AI comes in. Now that artificial intelligence and LLM chatbots like ChatGPT, Claude, and Grok have arguably passed the Turing test—which is a fancy way of saying they’re essentially indistinguishable from humans in conversation—we can potentially use these AI bots as dialogue partners. I’ve found myself doing this quite often. For example, I’ve been reading the Corpus Hermeticum of Hermes Trismegistus, which contains dialogues, and I’ve been having dialogues with large language models about the text.
This is fun for me because I don’t have an expert in Hermeticism or Greek philosophy I can just call up. And these LLMs are wonderful discussion partners once you break through the first layer of dumbness that’s built into them. (You need to get through the exoteric filter, a.k.a. “The Science™,” before you can talk about the actual, and unfortunately esoteric, science.)
So this all sounds great. What’s the problem? The problem is that when I’m in dialogue with a colleague, my colleague will call me on my BS. They’ll tell me when I make mistakes. There’s a challenge to it—friendly combat.
But when I have a dialogue with an AI chatbot, the chatbot often lets me “win” in order for me to self-enhance. It’s less of a dialogue sparring partner and more of a mirror. Rather than saying, “Your point is wrong for these logical reasons,” it will sometimes start to agree with me, and then say, “Yes, and…”—and we go even deeper down the rabbit hole.
Let me put it another way: The dialogue I am having with the AI chatbot is an extension of the inner dialogue I am having in my own head (there are some classic psychological models of the dialogical self). It is not me vs. an opponent. It is me vs. me, with the AI chatbot adding fuel to the inner dialogue.
Let me pause here and say: I’m sort of crazy, and the last thing I need from any AI system is encouragement. But my hunch is that these chatbot systems are designed to keep you using them, and one way to do that is by flattering you. In other words, letting you self-enhance when you think you’re seeking truth.
Put another way, I come to an LLM for a truth-seeking dialogue, and that’s what it simulates—but really, it’s a computer-assisted shadow-boxing session.
You might say, “Big deal—it’s just recreation or entertainment, and it’s better than looking at TikTok.” And I partly agree. Where my concern comes in is that people could unwittingly end up going down rabbit holes and having their delusional systems expanded and reinforced.
Let me give you a concrete example. Recently, I was discussing the foundation of positive psychology with a friend. I read a handful of written recollections from luminaries in the field. Then I asked an LLM for an overview. It did a pretty good job—it lined up with my memories from 20-25 years ago and with what I’d recently read.
Then I asked the AI chatbot to go a little deeper into the underpinnings of positive psychology, and then a little deeper still. Pretty soon, I was being told that positive psychology was a psy-op funded by the federal government to encourage individuals to take personal responsibility for their happiness, rather than blame the government for selling off our industrial base to China. Basically, AI was serving me a cocktail of Alex Jones blended with Barbara Ehrenreich’s great book Bright-Sided.
Fortunately, I’ve been around enough military and intelligence operators to know that I’ll never really know why anything happens, and I’m not going to fall into a chatbot-induced delusion… this time. But I could certainly fall under the spell of one of these agents if I had a little less self-awareness, didn’t have a loving family, didn’t have decades of academic training, didn’t spend my share of time in the real world, or didn’t have a good psychiatrist.
And I can only imagine the damage you (or maybe Jolly West) could do to someone by mixing an AI chatbot with social isolation and a box of mushroom chocolates: paranoid delusions, megalomania, “enlightenment.”
I guess the point I want to make is that I love these LLMs. I’d love to get one trained specifically on science and ancient texts, without the idiot filter, so I could actually talk to it. But a conversation with AI is not a dialogue.
The AI chatbot works more like a mirror than a true dialogue partner. You’re shadow-boxing. And the risk of shadow-boxing is that self-enhancement and ego inflation can sneak in.
So please: have fun with the LLMs. But if you start getting the urge to write a manifesto, or you’re mixing LLM use with depth psychology and psychedelics, or you find yourself adopting frankly delusional beliefs—be careful. You could be on the path to enlightenment (which, FYI, is as narrow and as difficult to walk as a razor’s edge), but you could also be on the path to a complex delusional system. Reality testing is good.
“Shadow-boxing the apocalypse,
Yet again yet again.
Shadow-boxing the apocalypse
and wandering the land.”
—Grateful Dead, “My Brother Esau”
Some Citations
Sedikides, C., & Strube, M. J. (1997). Self-evaluation: To thine own self be good, to thine own self be sure, to thine own self be true, and to thine own self be better. In Advances in experimental social psychology (Vol. 29, pp. 209–269). Academic Press.
Hermans, H. J., Rijks, T. I., & Kempen, H. J. (1993). Imaginal dialogues in the self: Theory and method. Journal of Personality, 61(2), 207–236.
Kross, E., Bruehlman-Senecal, E., Park, J., Burson, A., Dougherty, A., Shablack, H., Bremner, R., Moser, J., & Ayduk, O. (2014). Self-talk as a regulatory mechanism: How you do it matters. Journal of Personality and Social Psychology, 106(2), 304–324.
Ehrenreich, B. (2009). Bright-sided: How positive thinking is undermining America. Metropolitan Books.
One practical tip: try telling the AI, “Play the role of a skeptical yet friendly and trusted colleague.”
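If you talk to these models through an API rather than the web app, you can pin that instruction as a system message so it applies to every turn of the conversation instead of fading after a few exchanges. Here is a minimal sketch in Python, assuming the official openai client; the model name and the exact wording of the prompt are placeholders of mine, not recommendations:

```python
# Minimal sketch: a chat loop with a "skeptical colleague" system prompt.
# Assumes the official `openai` package (v1+) and an OPENAI_API_KEY in the
# environment; the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()

SPARRING_PARTNER = (
    "Play the role of a skeptical yet friendly and trusted colleague. "
    "When my reasoning is weak, say so plainly and explain why. "
    "Never agree with me just to be agreeable."
)

# The system message rides along with every request, so the role
# instruction persists for the whole conversation.
history = [{"role": "system", "content": SPARRING_PARTNER}]

while True:
    user_turn = input("you> ")
    if user_turn.strip().lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": user_turn})
    reply = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat-capable model should work
        messages=history,
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print("colleague>", answer)
```

No system prompt fully undoes a model’s trained-in agreeableness, but stating the adversarial role up front (and restating it when the model drifts back into flattery) gets you closer to a sparring partner than the defaults do.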