
AI chatbots don’t interrupt and aren’t judgemental – so what can they teach us about deep listening?
“I am aware it’s a machine but it’s super convenient and knows how to listen well whenever I need it,” says Anna, a Ukrainian living in London. She is talking about her regular use of the premium version of ChatGPT, a chatbot powered by artificial intelligence.
What Anna – the BBC is not using her real name to protect her identity – finds particularly valuable isn’t necessarily the AI’s advice, but its ability to give her space for self-reflection.
“I have a history with it, so I can rely on it to always understand my issues and communicate with me in a way that suits me,” she says. She is aware that this might seem odd to many people, including her friends and family, which is why she has asked to remain anonymous.
But when she recently broke up with her boyfriend, she found the AI’s patient listening offered something her protective friends and family could not provide with their immediate judgements about her ex-partner – “he’s an idiot”.
Instead, the absence of judgement created an opportunity for self-understanding as she unpacked her mixed emotions.
And Anna is not alone: recent Harvard Business Review research shows that in 2025, therapy and companionship was the single most common use of generative AI tools such as ChatGPT, which can carry on a conversation much like a person.
Strikingly, studies show AI-generated text responses are now rated as more compassionate than those written by humans – even when those humans are trained responders from crisis hotlines. This isn’t because AI is genuinely more compassionate; rather, it is a sobering indictment of how rarely we listen in a non-judgemental way.
When researchers disclosed the identity of the response authors, evaluators still judged ChatGPT’s responses to be more understanding, validating and caring – revealing how hungry people are for uninterrupted, non-defensive listening. In another study, people reported experiencing more hope, less distress and less discomfort after interacting with AI-generated responses than with human ones.
It is worth remembering that these AI chatbots are not displaying real empathy, but rather simulating it based on what they have learned from huge datasets of human interactions.
The irony that an algorithm powered by a large language model – the type of machine learning that underpins many AI chatbots – might be perceived as a better listener than an authentic human reveals important insights about our own listening shortcomings. It is when our agendas, backstories and emotional triggers run the show that true deep listening is thwarted.
None of this is to suggest we should trade real human relationships for large language models. But it does suggest there are lessons we humans can learn from these code-based listeners.
The power of uninterrupted attention
Perhaps the most fundamental lesson from AI is simply allowing others to speak without interruption. Humans interrupt for countless reasons: fear of an awkward silence, attempts to “help” find words, saving time with our “superior” responses or subconsciously asserting dominance. These interruptions, however well-intended, rob speakers of their autonomy and the opportunity to develop their thoughts. Interruptions during a phone conversation, for example, have been found to reduce the speaker’s perception of the listener’s empathy.
Large language models don’t have motivations or desires. They are programmed to be compliant so that people will keep using them, and they therefore exhibit perpetual patience – never suffering from empathy fatigue. While such a feat is not something we humans can or should aspire to, holding back our interruptions can be powerful.
Pick up on emotions
Pioneering psychologist Carl Rogers understood that acknowledging emotions is essential to effective listening. Large language models are programmed to categorise emotions and reflect these back in what appears to be an empathetic way, according to Anat Perry, an empathy researcher at the Hebrew University of Jerusalem in Israel.
One experiment found that Bing Chat – the forerunner to Microsoft’s Copilot – was more accurate than human responders in detecting happiness, sadness, fear and disgust. It was comparable to humans in detecting anger and surprise. While large language models can’t actually feel these emotions, they can recognise and reflect back these sentiments, so the speaker feels heard. Researchers have found that AI platforms that reflect emotional complexity in their responses can help to reframe users’ thinking and build psychological resilience.
Holding space for difficult emotions
Humans instinctively avoid acknowledging difficult emotions, both our own and other people’s.
So, for example, when our cousin tells us about the tragic death of his cat, we jump in to reassure with comments such as: “Luna had a long happy life and was well loved till the end.” But this fails to acknowledge our cousin’s feelings of distress. AI systems show a particular advantage in responding to scenarios involving suffering and sadness, compared with positive emotions. People often fear burdening human listeners with their worries, explains Dariya Ovsyannikova, a cognitive health researcher at the University of Toronto, Canada, who has studied how people perceive AI as compassionate.
AI offers a burden-free alternative. Giving someone the space to share tough emotions can make them feel it is safe to have difficult thoughts, and therefore more able to move beyond them.
Non-judgemental presence
Our survival as a species has historically depended on making quick judgements – distinguishing friend from foe is an evolutionary imperative. But these judgements, often unconsciously conveyed through subtle expressions like a momentary frown, can be devastating for someone sharing vulnerable thoughts. This has been found to be especially true among young children, for example. In contrast, AI seems to offer users anonymity and freedom from social judgement, creating psychological safety that enables open sharing.
For human listeners, this highlights how critical it is to recognise when you are making judgements and to consciously set them aside, so the person speaking feels able to share more freely.
Pattern recognition
Because of everything we are juggling, we non-professional listeners aren’t focused on recalling the different types of anxiety someone has told us about, for example, or the multiple feelings they’ve expressed about their mother. AI algorithms excel at pattern recognition, drawing upon a vast array of data – including incoherent thoughts – to pick up the slimmest threads and weave them into a meaning-rich tapestry.
As human listeners, we too can choose to take a step back and reflect back to the speaker not every instance of a repeated emotion, but an overall sense of what they feel about an issue, and even how they feel about holding those emotions. These patterns can be a gift if they offer us the opportunity to draw meaning from them or see our story in a new light. Narrative is a crucial way in which humans make sense of the world.
Resisting the urge to fix
Many of us, particularly in leadership or parental roles, believe our value lies in sharing pearls of wisdom and offering helpful advice. And men are more likely than women to jump in unsolicited with solutions to fix someone else’s problems. Yet in studies, AI’s restraint in holding back practical suggestions in favour of emotional support made people feel heard more effectively – something humans can consciously choose to do.
Avoiding the “me too” trap
When someone shares a challenging experience – a miscarriage, an impossible boss, a leak in their roof – we so often respond with our own similar story. We might feel it conveys that we know how they feel, and that it helps to build a connection with the other person. But in doing so we turn the spotlight away from them and onto ourselves. When we start to tell our story, we stop listening to theirs.
A large language model cannot fall into this trap because it has no experiences of its own. Humans can, which is why we need to be more intentional about keeping the spotlight on the speaker rather than reverting to our own story.
The limitations of algorithmic empathy
Despite these advantages, over-reliance on AI as a listening tool carries a multitude of dangers. As technology advances towards human-like avatars that look, sound and feel like our fantasy listener – even conveying tactile responses – both the potential benefits and the dangers increase.
