Is Simulated Empathy Better Than The Absence of Empathy?

Empathy has become part of our lexicon in business, service, and personal interaction. Brené Brown, the bestselling author who popularized the concept, defines empathy as connecting with people so we know we're not alone when we're struggling. Now that the box has been opened on generative AI, the question of whether empathy can genuinely be simulated is an important one.

The history of chatbots stretches back to ELIZA in the 1960s, a text-based program that merely matched patterns in language to simulate conversation. Since ELIZA, every couple of decades has brought new efforts to make interaction with chatbots more human-like.
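
To make that mechanism concrete, here is a toy, ELIZA-style exchange in Python. The rules below are a simplified approximation for illustration, not Weizenbaum's original script: regular expressions map a user's phrasing onto canned reflections, with no understanding involved.

```python
# Toy ELIZA-style pattern matching: each rule pairs a regular expression
# with a canned reflection template. Nothing here "understands" anything.
import re

RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]


def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            # Echo the user's own words back inside the template
            return template.format(match.group(1).rstrip(".!?"))
    return "Please, go on."  # default when no pattern matches


print(eliza_reply("I feel ignored by everyone."))
# -> "Why do you feel ignored by everyone?"
```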

Chatbots now use sentiment analysis to read the tone of queries and feedback loops to recall previously expressed sentiments. The use cases for chatbots are expansive: they are common across travel, college admissions, digital therapy, and e-commerce. Chatbots have become a humanizing design element in an era of service-worker automation. Industry is simulating service everywhere, whether through self-checkout in supermarket aisles, self-driving taxis, or customer support. Does it matter whether the design is boops and beeps rather than a simulation of warmth and understanding?
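
As a rough sketch of that pattern, the Python below scores the tone of each query with NLTK's off-the-shelf VADER analyzer and keeps a running memory of past sentiment. The thresholds and response templates are illustrative assumptions, not any particular product's logic.

```python
# Minimal sketch: sentiment analysis on each query plus a simple memory
# of previously expressed sentiment (the "feedback loop" described above).
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download


class EmpathicResponder:
    def __init__(self):
        self.analyzer = SentimentIntensityAnalyzer()
        self.history = []  # remember sentiment scores across turns

    def respond(self, query: str) -> str:
        # 'compound' is VADER's normalized score in [-1, 1]
        score = self.analyzer.polarity_scores(query)["compound"]
        self.history.append(score)
        running_mood = sum(self.history) / len(self.history)

        if score <= -0.5:
            opener = "I'm sorry, that sounds frustrating."
        elif score >= 0.5:
            opener = "Glad to hear it!"
        else:
            opener = "Thanks for the details."

        # Recall previously expressed sentiment, not just the current turn
        if running_mood <= -0.3 and len(self.history) > 1:
            opener += " I know this has been a rough experience so far."
        return f"{opener} Let me help with that."


bot = EmpathicResponder()
print(bot.respond("My flight got cancelled again and nobody will help me."))
print(bot.respond("This is the third time this has happened!"))
```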

It turns out that as people, we very easily anthropomorphize machines, readily attaching human traits to objects. This is obvious when my wife accuses the corner of our bed of insidiously making her stub her toe, or when she caresses and apologizes to her phone after dropping it. The research suggests that we receive messages from chatbots in a very human way.

With people, warmth and presence matter. We keep healthier, closer relationships with people we feel understand us. Strangers with whom we perceive a connection build trust with us faster. According to one study, highly empathetic interaction with a chatbot, versus low empathetic interaction, had a significant effect on perceived likability, intelligence, support, comfort, perspective-taking, and understanding. Higher empathetic interaction from a chatbot did not significantly affect perceptions of openness, acceptance, or emotion detection, and only partially affected perceptions of trust. While empathy seems to provide a better experience for consumers, there is still a separation on some level between the person and the machine.

The biggest concern when it comes to dark patterns in chatbots centers on critical distance. Fuchs cites the example of the film Her, in which Theodore loses critical distance the more he feels understood by Samantha. Science fiction has given shape to many of our innate fears about artificial intelligence: the closer machines come to a human experience, the more believable they become. The reality is a bit more complex. The dark patterns that arise in chatbots trace back to the core of the models themselves. The tech industry has issues of representation that distance it from the diverse perspectives of the world, and today's large language models, whatever learning sits at their core, are trained on data dominated by a Western perspective. Constraints on chatbots in specific contexts therefore become important. In a sports or travel app, queries outside those domains should be restricted to the domain to limit, as much as possible, the generalizations of the bias inherent in the model; a simple sketch of this kind of gating follows.
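
Here is a minimal sketch of that gating, assuming a hypothetical travel chatbot: classify whether a query is on-topic before it ever reaches the underlying model. The keyword list, threshold, and refusal message are invented for illustration; a production system would likely use an embedding-based classifier rather than keywords.

```python
# Domain gating sketch for a hypothetical travel chatbot: off-topic queries
# are refused before they reach the model, constraining where its biases
# can surface. All names and terms here are illustrative assumptions.
TRAVEL_TERMS = {
    "flight", "hotel", "booking", "itinerary", "baggage",
    "visa", "airport", "reservation", "destination", "refund",
}

REFUSAL = ("I can only help with travel questions. "
           "Could you rephrase your request in terms of your trip?")


def is_in_domain(query: str, min_hits: int = 1) -> bool:
    """Crude lexical check: does the query mention any travel terms?"""
    words = {w.strip(".,!?").lower() for w in query.split()}
    return len(words & TRAVEL_TERMS) >= min_hits


def handle(query: str) -> str:
    if not is_in_domain(query):
        return REFUSAL  # never let off-topic queries reach the model
    return generate_reply(query)  # hypothetical call into the actual model


def generate_reply(query: str) -> str:
    # Placeholder for the real model call, out of scope for this sketch.
    return f"(model answers the travel question: {query!r})"


print(handle("Can I change my flight reservation to Friday?"))
print(handle("What do you think about the election?"))
```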

To enhance the experience, we must address core issues in how we categorize and divide ourselves as people, and the tech industry needs to address the lack of diversity within its own ranks. Whether simulated empathy is better than the absence of real empathy seems clear: we want to feel understood, even if it is code and pixels doing the understanding.
