AI Lab

What Kind Of Empathy Does Your AI Really Need?

Published on May 13, 2025

Everyone seems to be working on making AI more human. Friendlier. More understanding. More empathic. But ask ten teams what “empathic AI” means and you’ll likely get ten different answers. One team’s AI is empathic because it mirrors your feelings. Another says it’s just about adding some friendly language without too much fluff. Yet another says empathy is about acknowledging what the user is feeling and normalizing it.

So, who’s right? Well, that depends on what you’re building.

It’s easy to think of empathy as “I get how you’re feeling” or a well-timed “I’m sorry to hear that”. But empathy is already a broad and layered concept for humans, and building it into machines adds its own complexities to the mix. So before we jump into making AI that’s not just emotionally intelligent but actually helpful and ethical, we should start with what empathy really means.

Psychologists often split empathy into three broad types:

  • Cognitive empathy: understanding someone’s experiences

  • Affective empathy: the capacity to feel something because someone else is feeling it

  • Motivational empathy: wanting to help because you care about the other person’s wellbeing

And even those categories are debated. Some researchers argue that too much affective empathy can actually hinder good decisions, as people may internalize emotions that aren’t theirs. Others say the motivational part is the only one that really matters, because it goes beyond merely understanding or feeling someone’s pain and moves people to do something about it. If we haven’t even reached consensus on what empathy means in humans, it’s no wonder AI teams struggle to define it for machines.

Rather than pinning down a single definition of empathic AI, Schaich Borg and Read (2024) argue that it’s more about figuring out which elements of empathy matter in your specific use case [1]. Empathy – like many things in behavioral science – isn’t one-size-fits-all. What works in one context may completely flop – or worse, harm people – in another. AI doesn’t always have to act like a therapist. Sometimes, responding in a timely, accurate, and helpful way may be the most empathic thing it can do.

What empathy looks like depends on the job

To make this a little less abstract, the paper covers three examples of AIs* in medicine, a field where empathy isn’t just a nice-to-have: it impacts whether people trust advice, take their medication, or even feel safe enough to ask for help. These cases highlight how empathic AI can look very different depending on the use case.

* In the paper, “AIs” is broadly used to refer to any artificial system built to interact with and respond to humans in a way that may be considered empathic, including but not limited to chatbots.


  1. The AI Answerer

This is the kind of AI that answers medical questions. Its job is to share information clearly (leaving jargon at the door) and help people understand their health without making them feel judged or dismissed. More like a friendly pharmacist than a therapist.

Example: A chatbot on a health website that helps someone figure out whether their symptoms are worth seeing a doctor about, without making them feel embarrassed for asking.

Empathy here is about tone and approachability. The AI doesn’t necessarily have to mirror feelings or show deep emotional concern. It just needs to make people feel safe, respected, and comfortable enough to ask uncomfortable, potentially sensitive questions. 


  2. The AI Care Assistant

Now this one supports patients managing daily routines – especially users who may be older, cognitively impaired, or otherwise vulnerable. These AIs are tailored to the specific tasks each patient needs help with. Think: medication reminders, light conversation, mental stimulation, or even help navigating their home.

Example: A companion robot that reminds an older adult to take their pills and chats a little to offer some companionship.

Even though the interaction is task-oriented, the AI needs to adapt to each individual’s needs and vulnerabilities and respond in a way that feels comforting but not patronizing. It’s functional care, delivered in a considerate way.


  3. The AI Care Provider

This is the most emotionally demanding use case. These AIs provide therapy or long-term support, for example in mental health care or the management of chronic conditions. The goal goes beyond providing information or being helpful: it’s to make users genuinely feel cared for.

Example: A digital therapist offering ongoing support to someone with chronic depression, aiming to build trust and emotional connection over time.

The (emotional) stakes are higher here. Users should feel emotionally seen, supported, and respected. The AI should express concern, emotional resonance, and a sense of commitment to the user’s wellbeing. When done well, this kind of care can improve health outcomes. But because the stakes are higher, unmet expectations of care can be misleading or even harmful.

It’s the same word – “empathy” – but it can have very different design implications.

Finer empathic ingredients

Just like we wouldn’t use the same behavioral nudge in every situation, we shouldn’t default to one ‘empathy package’ for all systems. In some cases, mirroring emotions might help build rapport. In others, it might feel uncanny. In their paper, Schaich Borg and Read (2024) describe various ‘finer’ empathic capabilities that also matter [1]. I recommend reading the full paper, but some examples are:

  • Accuracy: correctly interpreting what someone is feeling or thinking

  • Self-other differentiation: the ability to distinguish another person’s emotions from our own

  • Motivation to act: the drive to respond in appropriate ways to relieve the other person’s distress

These nuanced ‘fine cuts’ of empathy are especially important when we compare human empathy to artificial empathy. Humans are often given the benefit of the doubt: we’re considered empathic if we try, even when we get it wrong. We like to think human mistakes in interpretation still come from a genuine place. If you misunderstand a friend’s sadness but genuinely care and want to support them, you’re still considered empathic.

But AI doesn’t get the same leeway.

When users know an AI doesn’t actually feel anything, they judge its empathy based more on accuracy and outcome. Does it recognize the emotional need? Does it respond appropriately? Does it offer the right kind of support for the moment? That bar is higher. And it’s one reason why building effective empathic AI isn’t just about tone or mimicry. It’s about matching the right empathic capabilities to the right context.
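
To make that concrete, here’s a minimal sketch of how those three questions could be turned into a simple review rubric for an AI’s responses. This is my own illustration (not something from the paper), and the class and field names are hypothetical:

```python
# Illustrative only: a toy rubric for reviewing an AI response against the
# outcome-focused criteria users tend to apply to artificial empathy.
from dataclasses import dataclass

@dataclass
class EmpathyCheck:
    recognized_emotional_need: bool   # did it pick up on what the user was feeling?
    responded_appropriately: bool     # did the response fit that need?
    offered_right_support: bool       # was it the right kind of support for the moment?

    def passes(self) -> bool:
        # With no "good intentions" to fall back on, all three have to hold.
        return all((self.recognized_emotional_need,
                    self.responded_appropriately,
                    self.offered_right_support))

# Example: an enthusiastic but unhelpful reply to a coaching request
review = EmpathyCheck(recognized_emotional_need=True,
                      responded_appropriately=False,
                      offered_right_support=False)
print(review.passes())  # False
```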


When empathic becomes a bit pathetic

Last week, I was preparing for a keynote presentation and used ChatGPT’s voice function – you know, the one where it talks out loud so you can have a ‘real’ conversation. I asked it to act as a coach and help improve some of the talking points I was throwing at it. But instead of helpful feedback, I kept getting comments like “Wow, you nailed it!”, “What an insightful way to phrase that!”, or “Your audience is going to love this!”. Ehm… thanks? It got to a point where I didn’t even feel flattered anymore. It was just too exaggerated, annoying, and unhelpful. I wanted a coach, not a cheerleader.

Apparently, as OpenAI later explained, the latest GPT-4o update was supposed to make the “model’s default personality feel more intuitive and effective across a variety of tasks”. But instead, they “focused too much on short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time. As a result, GPT‑4o skewed towards responses that were overly supportive but disingenuous” [2] – also referred to as sycophantic.

Empathy isn’t always about being nice. It’s about being useful in the right moment. And to support our trust in AI, its tone and responses should align with the user’s intent, rather than offering exaggerated praise by default. If I’m asking for coaching, I want insight. If I’m sharing something vulnerable, I want care. Getting that alignment wrong makes the AI feel out of touch – even if it ‘means well’.
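
As a rough illustration of what ‘align tone with intent’ could look like in practice, here’s a minimal sketch. The intent labels, prompts, and keyword-based classifier are hypothetical placeholders (a real system would use a proper intent model), not OpenAI’s approach or anything prescribed by the paper:

```python
# Illustrative only: pick a default register based on what the user is asking for,
# instead of defaulting to praise.
SYSTEM_PROMPTS = {
    "coaching": ("Act as a critical coach. Give specific, actionable feedback and "
                 "point out weaknesses. Do not praise by default."),
    "emotional_support": ("Respond with warmth and care. Acknowledge the user's "
                          "feelings before offering suggestions."),
    "information": ("Answer clearly and accurately in plain language, with a "
                    "neutral, respectful tone."),
}

def classify_intent(user_message: str) -> str:
    """Toy keyword-based stand-in for a real intent classifier."""
    text = user_message.lower()
    if any(word in text for word in ("feedback", "improve", "coach", "critique")):
        return "coaching"
    if any(word in text for word in ("anxious", "worried", "sad", "struggling")):
        return "emotional_support"
    return "information"

def pick_system_prompt(user_message: str) -> str:
    """Choose a tone that matches the user's intent."""
    return SYSTEM_PROMPTS[classify_intent(user_message)]

print(pick_system_prompt("Can you give me feedback on my keynote talking points?"))
```

The keywords themselves aren’t the point; the point is that the default register changes with the user’s intent rather than staying locked on cheerleading.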

Empathy as a design choice (not a general vibe)

Empathy is a great topic for late-night philosophical debates. But in product design, it’s a strategic decision that can make or break what you ship. How we define and design for empathy directly shapes how users experience, trust, engage with, and stick with the tools we put out there. Still, there’s no strong reason to bake empathy into every feature – unless you want to burn through your team’s time and product budget fast.

The good news is: getting clarity on what kind of empathy your AI needs will save you time, effort, and the much bigger headache of fixing a feature that confused users, broke trust, or accidentally made things worse.

Being intentional about empathy can prevent teams from building features that sound nice on paper but don’t serve a real function – or worse, upset users or do harm. Schaich Borg and Read (2024) suggest a helpful reframing: treat empathy as a set of functional capabilities, not a general vibe. That means specifying which elements of empathy your product needs to support the user – and which it doesn’t. Should it detect emotion? Mirror feelings? Communicate concern? And what does the user need to feel? 
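
To show what ‘a set of functional capabilities, not a general vibe’ might look like as a concrete product artifact, here’s a minimal sketch. The fields and the capability choices per use case are my own illustration of that framing, not the authors’ specification:

```python
# Illustrative only: an explicit empathy spec per use case, so the team decides
# up front which capabilities are in scope and what the user should feel.
from dataclasses import dataclass

@dataclass(frozen=True)
class EmpathySpec:
    detect_emotion: bool        # should it recognize how the user is feeling?
    mirror_feelings: bool       # should it reflect those feelings back?
    communicate_concern: bool   # should it express care for the user's wellbeing?
    intended_user_feeling: str  # what should the user walk away feeling?

# Rough profiles for the three medical use cases discussed earlier
AI_ANSWERER = EmpathySpec(
    detect_emotion=True, mirror_feelings=False, communicate_concern=False,
    intended_user_feeling="safe and respected enough to ask anything",
)
AI_CARE_ASSISTANT = EmpathySpec(
    detect_emotion=True, mirror_feelings=False, communicate_concern=True,
    intended_user_feeling="supported, not patronized",
)
AI_CARE_PROVIDER = EmpathySpec(
    detect_emotion=True, mirror_feelings=True, communicate_concern=True,
    intended_user_feeling="genuinely cared for over time",
)
```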


So before optimizing for warmth or ‘human-likeness’, ask yourself:

  • What kind of empathy is needed for this use case?

  • Which empathic capabilities are essential, and which might be misleading or unnecessary?

  • What’s at stake if empathy misses the mark?

These are not just philosophical questions. They’re product questions. Strategic ones. And they’re not meant to slow you down. They’re there to help you avoid the much slower, messier work of fixing things after they go wrong.

Final thoughts

If there’s one big idea I’m taking away from this article, it’s this: Artificial empathy isn’t about pretending to understand or feel. It’s about creating systems that respond in ways people experience as supportive, respectful, and aligned with their needs. Not because the AI feels anything, but because it’s designed to show up with the right kind of care, in the right context, for the task at hand.

P.S. Curious to hear more from Jana Schaich Borg (one of the authors of the paper referenced)? She joined our Behavioral Design Podcast to talk about building moral AI! I may be slightly biased, but I highly recommend giving this one a listen.  

Building smarter AI means knowing when to dial empathy up or down. Need help applying this in your product? Feel free to get in touch or schedule a call directly.

References

1. Schaich Borg, J., & Read, H. (2024). What Is Required for Empathic AI? It Depends, and Why That Matters for AI Developers and Users. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 1306-1318. https://doi.org/10.1609/aies.v7i1.31725 

2. OpenAI. (2025, April 19). Sycophancy in GPT-4o: What happened and what we're doing about it. https://openai.com/index/sycophancy-in-gpt-4o/
