AI Lab


Chatbot Empathy is a Feature, Not a Fix

Published on May 19, 2025

When you’re in a long-distance relationship, you quickly learn that not all communication feels the same. A message is nice. A phone call is better. But a video call?

That can make it feel like the other person is right there with you.

The psychological distance disappears.

You feel a little closer.

Can’t relate to long-distance love? Then maybe homesickness rings a bell. Either way, the point is the same: how we connect affects how close we feel.

This is at the heart of Social Presence Theory. First introduced by Short, Williams, and Christie in 1976, the theory focused on how different types of media influence our sense of another person being present [1]. Later scholars expanded this perspective, suggesting that presence is shaped not only by the medium itself, but also by the content and the way interactions unfold [2]. The more cues we get—facial expressions, tone, pace, responsiveness—the more we feel that someone is “there.” That feeling of presence shapes whether an interaction feels cold or connected.

So what does this have to do with chatbots?

Quite a lot, it turns out: that same perception of presence shapes whether a conversation with a bot feels robotic or genuinely human.

The Chatbot Shift: From Rule-Based to Relational

Chatbots are everywhere. Even before the recent boom in generative AI, many companies were already using rule-based chatbots, especially in customer service. These earlier bots automated responses to routine queries like simple FAQs but were often rigid and limited in scope. They were functional but cold, incapable of expressing empathy or adapting to the nuances of a conversation.

But things have changed. With large language models (LLMs) at their core, chatbots now feel far more human. These bots can take another person’s perspective, detect emotions, and mirror them in ways appropriate to the context. Colloquially, they’re getting better at “putting themselves in someone else’s shoes” (or at least they make it appear so). They’ve shifted from passively providing information to actively shaping the conversation. As a result, their role has expanded well beyond help desks into areas like healthcare, coaching, and even companionship, prompting a shift in both use cases and expectations.

This shift raises new questions: What do more “socially present” chatbots mean for companies? And more importantly, what does it mean for users? Does this new wave of seemingly empathic bots actually improve customer experience and satisfaction?

What the Research Tells Us

Recent research by Juquelier, Poncin, and Hazée (2025) explored exactly this [3]. They ran three experiments to test whether and when empathy in chatbots improves customer satisfaction.

Study 1 asked a simple question: Do empathic chatbots make people more satisfied? The answer was yes. When a bot expressed care and concern through emotionally intelligent language—such as "I understand your concern about the refund" or "Don't worry, I'm sure I can help you 😊"—users reported higher satisfaction compared to when the bot used purely transactional responses like "For more information about insurance, write your questions here and I’ll look for answers."

Study 2 went further, asking why empathy increases satisfaction. It found that two things mediate this effect:

  • Perceived social presence: Users felt more warmth, contact, and “human-ness” in the empathic bot.

  • Perceived information quality: Even when the actual content was almost identical, users perceived the empathic bot’s responses as clearer and more relevant.

Fig. 1: Empathic and mechanical customer service chatbot scripts, adapted from Study 2 (Juquelier et al., 2025).

Study 3 introduced time pressure to explore how it moderated the effect of empathy on customer satisfaction. Participants in the high time pressure condition were told they had just five minutes to complete a refund request with the chatbot. When users were in a hurry, empathy didn’t help—it actually hurt. Under time pressure, users didn’t want warmth. They wanted speed and clarity. Social cues became distractions, and satisfaction dropped.

Together, the studies offer a clear insight: empathy improves chatbot experiences, but only when it fits the context.

Practical Design Lessons for Product and Strategy Teams

How should today’s product teams respond to these findings? As chatbots become more human-like, thoughtful design choices—and not just better models—will separate meaningful interactions from irrelevant ones.

1. Subtle Cues Can Go a Long Way

Small design features—like using first-person pronouns, acknowledging emotions, or even timing responses naturally—can increase perceived social presence and customer satisfaction without overhauling the existing bot. These cues make the interaction feel warmer and more responsive. But they also add to the length and complexity of the exchange, so they should be trimmed when speed matters more than sociability.
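To make this concrete, here is a minimal sketch of what such a toggle might look like in practice. The prompt wording, function name, and style layers are our own illustration (not taken from the study or any particular product): the idea is simply to keep the task logic untouched and flip only the stylistic layer.

```python
# Minimal sketch (all names and prompt wording are hypothetical): layering
# empathic cues onto an existing service bot via the system prompt, with a
# switch to trim them when speed matters more than sociability.

EMPATHY_CUES = (
    "Speak in the first person ('I'). "
    "Briefly acknowledge the customer's emotion before answering. "
    "Use a warm, conversational tone."
)

CONCISE_CUES = (
    "Answer as briefly and directly as possible. "
    "Skip greetings and small talk."
)

def build_system_prompt(base_instructions: str, empathic: bool) -> str:
    """Return the system prompt, optionally enriched with empathic cues."""
    style = EMPATHY_CUES if empathic else CONCISE_CUES
    return f"{base_instructions}\n\nStyle guidelines: {style}"

# Usage: the task instructions stay the same; only the style layer changes.
prompt = build_system_prompt(
    base_instructions="You help customers with refund requests.",
    empathic=True,
)
```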

2. Empathy Should Be Context-Aware

Empathy isn’t a universal setting—it’s a dynamic state. Bots should learn to detect when empathy will help and when it might hurt. In service-oriented bots, this often means adjusting for time pressure. In relational bots (like mental health or companionship tools), the boundary might be emotional overload, where users are already overwhelmed and need straightforward simplicity, or perceived overfamiliarity, where the bot comes across as too informal or personal for the situation. Designers need to identify what context acts as a “cutoff switch” for empathy in their domain and build in graceful handoffs between styles.
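One way to picture that cutoff switch is as a per-turn style decision. The sketch below is an assumption-laden illustration (the signal names and how you would detect them are up to you), showing empathy as the default that gets switched off whenever a cutoff condition applies.

```python
# Minimal sketch (signal names and detection methods are assumptions):
# choosing a response style per turn, treating certain contexts as a
# "cutoff switch" that disables empathic phrasing.

from dataclasses import dataclass

@dataclass
class TurnContext:
    time_pressure: bool         # e.g. user mentions a deadline or uses urgent wording
    emotional_overload: bool    # e.g. detected distress above a chosen threshold
    overfamiliarity_risk: bool  # e.g. a formal domain where warmth may feel out of place

def select_style(ctx: TurnContext) -> str:
    """Return 'empathic' unless any cutoff condition applies."""
    if ctx.time_pressure or ctx.emotional_overload or ctx.overfamiliarity_risk:
        return "concise"
    return "empathic"

# Graceful handoff: if the style changes mid-conversation, acknowledge it once
# ("Let's get this sorted quickly.") rather than switching tone silently.
```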

3. Measure Both Feelings and Function

It’s not enough to ask whether the bot resolved the query. Product teams should also ask how the conversation felt. Tracking both task satisfaction and perceived social presence can help uncover when changes to tone or interaction design subtly shift the user experience. Pulse questions like “Did this feel personal?” or “Did you feel understood?” give early signals when empathy hits or misses the mark.
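A lightweight way to start is to log the task outcome and a couple of pulse ratings side by side. The record structure below is illustrative (field names and scales are assumptions), but it captures the pairing of function and feeling that the research points to.

```python
# Minimal sketch (field names and 1-5 scales are hypothetical): logging task
# outcome alongside short "pulse" ratings of perceived social presence.

from dataclasses import dataclass, asdict
import json

@dataclass
class ConversationFeedback:
    conversation_id: str
    task_resolved: bool   # did the bot resolve the query?
    satisfaction: int     # 1-5 overall satisfaction
    felt_personal: int    # 1-5: "Did this feel personal?"
    felt_understood: int  # 1-5: "Did you feel understood?"

def log_feedback(feedback: ConversationFeedback) -> None:
    """Append one feedback record as a JSON line for later analysis."""
    with open("chatbot_feedback.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(feedback)) + "\n")
```

Segmenting satisfaction by the presence ratings then shows when a change in tone or interaction design helped or hurt, echoing the mediation pattern in the research above.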

Why It Matters—and What’s Next

We now have the tools to make chatbots feel more human. But presence isn’t just about the right words—it’s about whether those words feel genuine.

When you’re talking to someone over a text interface, it’s easy to misinterpret tone. Sometimes messages come across as cold or harsh even when they weren’t meant that way. Your long-distance lover can relate 😉 But other times, we’re also remarkably good at seeing through insincerity. We can tell when someone is being “fake” or pretending to care.

The same goes for chatbots. If their empathy feels off, mismatched, or performative, users will notice—and it can backfire. One growing concern is sycophancy, where models agree too readily or flatter the user to maintain rapport [4]. This might feel pleasant in the moment, but over time it erodes trust. When users sense that a chatbot prioritizes being agreeable over being helpful, they tune out.

So the next big question is: how do we design empathy that feels genuine? What are its core components? Should bots aim to express all of them or focus on just one?

Because presence matters. And in a world where machines talk like people, getting that presence right is everything.

Are you working on a chatbot or app and want nuanced input? Feel free to reach out; we’d love to help.

References

1. Short, J. A., Williams, E., & Christie, B. (1976). The social psychology of telecommunications. London: Wiley.

2. Shen, K. N., & Khalifa, M. (2008). Exploring multidimensional conceptualization of social presence in the context of online communities. International Journal of Human-Computer Interaction, 24(7), 722–748. https://doi.org/10.1080/10447310802335789

3. Juquelier, A., Poncin, I., & Hazée, S. (2025). Empathic Chatbots: A double-edged sword in customer experiences. Journal of Business Research, 188, 115074. https://doi.org/10.1016/j.jbusres.2024.115074

4. OpenAI. (2025, April 19). Sycophancy in GPT-4o: What happened and what we’re doing about it. https://openai.com/index/sycophancy-in-gpt-4o/
