
Linguistic Illusionists
AI doesn’t understand a damn thing.
Editor’s Note: A year ago, we talked about the rise of the so-called EQ internet, and now we are excited to share a perspective from a founder who is actually building the damn thing.
AI has a problem: it can sound intelligent, but it doesn’t understand a damn thing, not even with so-called “reasoning.” We’ve spent years marveling at the fluency of Large Language Models (LLMs), watching them churn out perfectly structured paragraphs, draft polite emails, and even mimic human warmth. But here’s the reality—LLMs are linguistic illusionists, dressing up statistical predictions as understanding. And that’s where the hype stops.
The Great AI Deception
People keep asking, “When will LLMs become empathetic?” The answer is simple: never. Not as long as we’re relying on black-box models trained to predict words, not meaning. LLMs don’t feel, they don’t reason (unless you think humans reason by telling themselves to think step-by-step, meticulously walking through every step in their heads for thirty seconds, and only then forming a response), and they certainly don’t care. They spit out responses based on probability, not insight. You can ask them for life advice, and they’ll generate something that sounds right—but without any real understanding of your situation, your motives, or your emotional state.
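The “probability, not insight” point can be made concrete with a toy sketch. The snippet below is a deliberately crude stand-in for an LLM’s next-token prediction: a bigram model that picks each next word purely from co-occurrence counts in a tiny made-up corpus. It can produce fluent-looking phrases, yet nothing in it represents meaning. (The corpus, function names, and parameters here are illustrative inventions, not anyone’s real model or API.)

```python
import random
from collections import defaultdict

# A tiny made-up corpus. The "model" is nothing but counted word pairs.
corpus = ("i understand your frustration . "
          "i understand your concern . "
          "i hear your frustration .").split()

# Record which words follow which: the entire "knowledge" of the model.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=5, seed=0):
    """Emit words by sampling the recorded successors of the previous
    word -- pure frequency, zero understanding of what is being said."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("i"))  # fluent-sounding, meaning-free
```

Real LLMs replace the word-pair table with a neural network over billions of tokens, but the objective is the same shape: predict the next token from statistics of past text. Scaling the table up makes the illusion more convincing; it doesn’t add a layer that knows why the words matter.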
Let’s be clear: AI-generated empathy isn’t empathy. It’s a script. It’s a formula. And the moment you need real understanding, real nuance, real depth, these systems crumble. That’s because empathy isn’t about mirroring the right words—it’s about knowing why those words matter.
The Missing Layer: Real Emotional Intelligence
If we want AI to be truly useful in high-stakes environments—hiring, leadership, relationships, mental health—it needs more than linguistic gymnastics. It needs real emotional intelligence (EQ). That doesn’t mean programming in "compassionate" responses or tweaking outputs to sound more human. It means AI must be able to interpret personality, motivation, psychological states, and behavior over time.
LLMs can’t do that. They don’t track human patterns, they don’t learn from long-term interactions, and they certainly don’t recognize why someone might be saying one thing while meaning another. That’s what EQ-driven AI solves. Not by generating better generic responses, but by tailoring interactions to the individual—based on psychology, not word probability.
Why This Matters Everywhere
Without EQ, AI is useless in the places where human understanding actually matters. HR tech powered by LLMs? That’s just glorified keyword matching, completely missing whether a candidate fits a team’s culture or will thrive in a company’s work environment. AI-powered therapy chatbots? They can parrot self-help advice, but they can’t detect when someone is on the verge of burnout or spiraling into a depressive episode. AI in customer service? Sure, it can say "We understand your frustration," but it doesn’t understand anything.
The world doesn’t need more artificially polite chatbots. It needs AI that actually understands people—that can read between the lines, identify underlying motivations, and adapt dynamically. Otherwise, we’re just building fancier parrots that sound good but know nothing.
The Future: AI That Gets You
The next wave of AI isn’t about making LLMs sound more human—it’s about making AI think more human. That means moving beyond black-box predictions and into explainable, psychology-based models that process human emotions, intent, and long-term behavioral patterns. It means AI that doesn’t just summarize data but can tell you why someone is likely to succeed in a role, why a team is struggling, why a customer is about to churn.
We run events every week. If you want to participate, inquire about membership here. If you want to keep up with the perspectives that we hear, you can subscribe to our weekly newsletter.