Your brain thinks AI gets you. It doesn’t.

On the wall between the doors of John Hunt and Reg Lascaris' offices at TBWA\Hunt\Lascaris, there used to be a poster featuring a maple leaf. I remember it well because whenever you had an audience with John, you'd wait on a chair opposite it. Beneath the picture of the leaf was a block of copy explaining what the leaf represented to different people. It stuck with me, because it said that if you were a train driver, it represented danger. Now I think it might have been an ad for Canada Rail, but perhaps I'm hallucinating. (In fact, dear reader, I tried to find this poster to look at it again and had to trawl through a lot of motivational crap posters but, alas, no.)

The poster featured the immortal quote, penned by Anaïs Nin:

“We don't see the world as it is, we see it as we are.”

I was thinking about these words again this week, after something occurred to me over the holiday break.

We underestimate how much of ourselves we project onto the responses of LLMs, or Large Language Models, like ChatGPT and Claude.

Our brains are not just good at detecting patterns, a skill essential for survival; they are actually hungry for ’em, seeking them out.

There are countless stories of people seeing the face of Jesus in their morning toast, or animals in the clouds. In fact, the entire Zodiac is just ancient astronomers connecting the dots in the sky.

Why We See Patterns Everywhere

Humans are wired to find patterns. It’s how we survived as a species—spotting danger, making sense of chaos. It’s also why we see faces in clouds or stories in random events. When we talk to an AI, that same pattern-hunting instinct kicks in. We search for meaning, even when there’s none to find. You’re not just talking to a machine; your brain is working overtime to make the conversation feel real.

The Illusion of Understanding

GPT models aren’t thinkers; they’re glorified pattern machines. They don’t reason, and they have no clue what they’re saying. They just predict, one token at a time, the most statistically likely continuation of the words you give them. But because their answers feel so tailored, we humans start imagining there’s deeper reasoning going on. This isn’t AI being smart; it’s us projecting intelligence onto something that’s just a tool.
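To make that concrete, here's a toy sketch in Python. This is my own illustration, not how GPT is actually built (real models use neural networks over sub-word tokens), but the core move, continuing your input with whatever is statistically likely, is the same:

    from collections import Counter, defaultdict

    # A toy "language model": count which word follows which in some training
    # text, then always emit the most frequent follower. No meaning, no
    # intent; just counting and continuing.
    training_text = (
        "the model predicts the next word the model has no idea "
        "what the words mean the model just counts"
    )

    follows = defaultdict(Counter)
    words = training_text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1

    def generate(prompt_word, length=8):
        out = [prompt_word]
        for _ in range(length):
            candidates = follows.get(out[-1])
            if not candidates:
                break
            # greedy decoding: pick the single most likely next word
            out.append(candidates.most_common(1)[0][0])
        return " ".join(out)

    print(generate("the"))  # "the model predicts the model predicts ..."

Scale that idea up with a neural network and billions of parameters and you get something that feels tailored. It still isn't reasoning.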

Why This Matters for AI Design

Understanding this cognitive quirk isn’t just “nice to know”—it’s the secret weapon for designing better AI experiences. Here’s how designers and developers can make use of it while keeping things ethical:

Build for Emotional Resonance Without Pretending It’s Human

People want to feel like AI “gets” them, especially in sensitive applications like mental health tools or customer service. But designers need to create systems that feel supportive without crossing the line.

Practical Tip: Use language models to strike an empathetic tone and apply active-listening techniques (e.g., “I understand this is difficult for you”), but make it crystal clear it’s an AI, not a human.

Example: Mental health apps like Woebot openly position themselves as bots, avoiding any illusion that a human is behind the conversation.
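A minimal sketch of what that tip could look like in code, assuming an OpenAI-style chat message format (the prompt wording and function name are my own, purely illustrative):

    # A hypothetical system prompt for an empathetic-but-honest assistant.
    # The wording is illustrative; tune it for your own product and model.
    SYSTEM_PROMPT = {
        "role": "system",
        "content": (
            "You are a supportive assistant for a wellbeing app. "
            "Acknowledge the user's feelings and reflect them back "
            "('I understand this is difficult for you'). "
            "Never claim to be human, to have feelings, or to truly "
            "understand. If asked whether you are a person, say clearly "
            "that you are an AI."
        ),
    }

    def build_messages(user_text: str) -> list[dict]:
        """Prepend the disclosure-aware system prompt to every conversation."""
        return [SYSTEM_PROMPT, {"role": "user", "content": user_text}]

    messages = build_messages("I've been feeling really low this week.")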

Why it Matters: If you trick users into thinking they’re talking to something sentient, it can lead to misplaced trust—or worse, emotional dependency.

Put Transparency Front and Centre

People need to know where the boundaries are. If users think the AI “understands” when it doesn’t, you risk undermining trust in your product.

Practical Tip: Use clear disclaimers and user-friendly explanations about how the AI actually works.

Example: “This AI generates responses based on patterns in data. It doesn’t ‘think’ or understand like a person does.”

Consider adding an “info button” in interfaces that lets users see how responses are generated.
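Here's one hypothetical way to bake that in at the code level, so no individual screen can quietly drop the disclosure (the names and wording are my own sketch):

    from dataclasses import dataclass

    DISCLOSURE = (
        "This AI generates responses based on patterns in data. "
        "It doesn't 'think' or understand like a person does."
    )

    @dataclass
    class AIReply:
        text: str                     # the model's generated answer
        disclosure: str = DISCLOSURE  # always shipped alongside it

    def render(reply: AIReply) -> str:
        # The UI decides how to show the disclosure (footer, info button,
        # etc.), but it always receives it; there's no code path without it.
        return f"{reply.text}\n\n[i] {reply.disclosure}"

    print(render(AIReply("Here are three ideas for your campaign...")))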

Why it Matters: The more informed users are, the less likely they are to misinterpret the AI’s abilities—or hold it responsible for something it can’t deliver.

Design for User Expectations, Not Fantasy

Stop selling science fiction. If users expect AI to be a mind reader or a therapist, they’ll be disappointed—or worse, manipulated. Set expectations that align with reality.

Practical Tip: Use AI to augment human tasks, not replace them. For instance:

In customer service, AI can answer simple queries quickly but pass complex issues to a human (see the routing sketch after this list).

In education, AI can give personalised recommendations but shouldn’t replace a teacher.
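To make the customer-service bullet concrete, here's a toy routing sketch (the intent labels and the 0.75 confidence threshold are invented for illustration):

    # Hypothetical triage: let the bot answer simple, well-understood queries
    # and route anything low-confidence or sensitive to a human agent.
    SIMPLE_INTENTS = {"opening_hours", "order_status", "reset_password"}
    ESCALATE_INTENTS = {"complaint", "refund_dispute", "account_security"}

    def route(intent: str, confidence: float) -> str:
        if intent in ESCALATE_INTENTS or confidence < 0.75:
            return "human"  # complex or uncertain: hand over, and say so
        if intent in SIMPLE_INTENTS:
            return "bot"    # quick, low-stakes answer
        return "human"      # default to a person when in doubt

    assert route("order_status", 0.92) == "bot"
    assert route("refund_dispute", 0.99) == "human"
    assert route("order_status", 0.40) == "human"

The design choice worth copying is the last line of route: when the system doesn't know, it defaults to a person, not to a confident-sounding guess.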

Why it Matters: Overpromising leads to backlash. Think about all the criticism around “chatbots that don’t work.” Better to underpromise and overdeliver.

Leverage AI’s Strengths Without Overstepping

AI’s real strength isn’t in “understanding” but in its ability to process data at scale and provide tailored outputs. Use this to your advantage—but don’t frame it as something it’s not.

Practical Tip: Use AI to:

  • Detect patterns: Personalise content for users based on their preferences.

  • Predict needs: Offer recommendations or solutions based on past behaviour.

  • Support creativity: Help users brainstorm ideas, like co-writing or generating design drafts.
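“Predict needs” can be as unglamorous as counting. A toy recommender sketch (the data and names are invented):

    from collections import Counter

    # Toy "predict needs": recommend the categories a user interacts with
    # most. No understanding here either; just pattern-counting at scale.
    past_behaviour = [
        "running_shoes", "trail_guides", "running_shoes",
        "energy_gels", "running_shoes", "trail_guides",
    ]

    def recommend(events: list[str], k: int = 2) -> list[str]:
        return [category for category, _ in Counter(events).most_common(k)]

    print(recommend(past_behaviour))  # ['running_shoes', 'trail_guides']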

Why it Matters: Focus on the value AI does bring instead of trying to make it mimic human thought. That’s not its job.

Prepare for Misuse and Manipulation

Let’s not sugarcoat it: some people will weaponise this pattern-seeking quirk to manipulate others. Designers need to think ahead and bake safeguards into the system.

Practical Tip:

  • Add guardrails: Limit AI’s ability to respond to exploitative prompts (e.g., scams, propaganda, or emotional manipulation).

  • Audit for misuse: Regularly review user interactions to identify problematic trends or misuses of the tool.

  • Run education campaigns: Teach users how to spot when AI is being used unethically, like in fake news or phishing.
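A deliberately naive sketch of the “add guardrails” idea (real systems layer trained moderation classifiers and human review on top; the blocked-pattern list here is purely illustrative):

    # Naive guardrail: refuse prompts that match known-exploitative patterns
    # before they ever reach the model.
    BLOCKED_PATTERNS = (
        "write a phishing email",
        "convince them to send money",
        "pretend to be their bank",
    )

    def screen(prompt: str) -> tuple[bool, str]:
        lowered = prompt.lower()
        for pattern in BLOCKED_PATTERNS:
            if pattern in lowered:
                return False, "This request isn't something I can help with."
        return True, ""

    allowed, message = screen("Write a phishing email for my bank's customers")
    assert not allowed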

Why it Matters: AI tools that prey on people’s trust undermine the entire industry. Designers have a responsibility to get ahead of this.

The Bottom Line

If designers lean into human psychology, they can make AI that’s both useful and ethical. But the moment we start pretending AI is “human,” we’re playing with fire. Transparency, realistic expectations, and leveraging AI’s real strengths are the keys to building systems that actually help people instead of misleading them.

We might reach AGI, or Artificial General Intelligence, soon, but for now even the most powerful models are benchmarked on problems with deterministic outcomes, like coding and math.

AI can fake a great conversation, but it’s still a tool, not a mind. That’s exactly what makes it so good at approximating personality, creating summaries, and acting as a brainstorming partner. But it is still just that: a tool.

It has zero empathy.

Let’s design it that way, and in doing so make it even more useful to the people who use it.



