
From assistants to companions: the emotional evolution of AIs
Artificial intelligence has come a long way: what began as a simple digital assistant is now starting to occupy a place closer to that of an emotional companion in our lives. People no longer just ask an AI for the weather forecast or to schedule an appointment; they engage in deep conversations, seek comfort, and even develop emotional bonds with these technologies. This phenomenon is made possible by advances in emotional artificial intelligence, which allows machines to recognize and simulate human emotions, and also by our tendency to anthropomorphize technology, that is, to attribute human qualities to it. In this blog, we will explore how AI is learning to feel (or at least imitate feelings), how people are forming emotional bonds with chatbots, virtual assistants, and social robots, and what ethical implications arise from this new relationship between humans and machines.
Artificial intelligence is increasingly emotional
One of the biggest steps toward more “human-like” AIs is teaching them to recognize our emotions. Traditionally, this has been attempted by detecting facial expressions, tone of voice, or body language. A recent example is Hume, a conversational AI designed to understand emotional expressions in speech and detect in real time whether the other person is sad, worried, excited, or distressed. This type of advancement allows for more natural interactions: the machine adjusts its responses based on our mood, offering a more empathetic or personalized response.
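To give a flavor of what this looks like under the hood, here is a minimal sketch in Python that labels the emotion expressed in a short text using an off-the-shelf classifier from the Hugging Face transformers library. The model name is just one publicly available example and is not how Hume or any commercial product actually works; real speech-based systems analyze audio and are far more sophisticated.

```python
from transformers import pipeline

# Load a publicly available text-emotion classifier (the model name is only
# illustrative; any similar model from the Hugging Face Hub would work).
classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
)

message = "I've been feeling really alone since I moved to a new city."
result = classifier(message)[0]  # highest-scoring emotion label
print(result["label"], round(result["score"], 2))  # e.g. "sadness" 0.9
```

A system like this only maps words to probability scores over emotion labels; the “recognition” is statistical, not experiential.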
However, emotional detection isn’t limited to reading faces or voices. In 2023, scientists demonstrated a system capable of revealing internal emotions using wireless signals such as radio waves (similar to Wi-Fi) to measure a person’s breathing and heart rate. This means that, in the future, your assistant could sense whether you are anxious or calm even without seeing you, simply by analyzing your heart rate from a distance—a technological advancement as revolutionary as it is controversial. These developments belong to the field of affective computing, which seeks to enable machines to interpret and respond to our emotions for more authentic communication.
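As a rough illustration of the signal-processing idea behind such systems, the sketch below (a toy example, with a synthetic signal standing in for what a wireless sensor might capture) estimates a heart rate by filtering a chest-motion trace and finding the dominant frequency. It is a conceptual sketch only, not a description of any particular research prototype.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Toy example: a synthetic chest-motion signal (breathing + heartbeat + noise)
# standing in for what a radio-based sensor might measure.
fs = 100.0                                        # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)                      # 30 seconds of data
breathing = 0.8 * np.sin(2 * np.pi * 0.25 * t)    # ~15 breaths per minute
heartbeat = 0.1 * np.sin(2 * np.pi * 1.2 * t)     # ~72 beats per minute
signal = breathing + heartbeat + 0.05 * np.random.randn(t.size)

# Isolate the cardiac band (roughly 0.8-2.5 Hz, i.e. 48-150 beats per minute)
b, a = butter(4, [0.8, 2.5], btype="bandpass", fs=fs)
cardiac = filtfilt(b, a, signal)

# The dominant frequency in that band corresponds to the heart rate
spectrum = np.abs(np.fft.rfft(cardiac))
freqs = np.fft.rfftfreq(cardiac.size, 1 / fs)
bpm = 60 * freqs[np.argmax(spectrum)]
print(f"Estimated heart rate: {bpm:.0f} bpm")     # ~72 bpm
```

From a number like that, a system can only guess at arousal or calm; it says nothing about why the person feels that way.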
AIs don’t just detect emotions, though: they can also simulate emotional responses. Modern chatbots and advanced language models (like ChatGPT) are programmed to sound empathetic. For example, if the user writes that they had a bad day, the bot can respond with supportive phrases and a sympathetic tone. Large language models trained on millions of human conversations have learned the patterns of empathy: they know which words to use to sound comforting or joyful depending on the situation. Does this mean they “feel” something? No: these AIs have no emotions of their own; they imitate emotional behavior, such as empathy. Their “concern” for us is analogous to an actor’s performance: they enact an emotional script. One professor and expert in emotional AI explains it this way: “An AI can detect sadness in a face, but experiencing emotions means living them with all their internal turmoil.” Similarly, one neuroscientist points out: “Fear makes the heart race; happiness releases dopamine. These biological and sensory responses have no equivalent in machines.” That is, current AI mimics emotional signals (sounds, words, expressions), but it doesn’t experience them, nor does it have a body that senses physiological changes. Even so, for those interacting with it, the difference between genuine and simulated empathy can blur when the illusion is convincing enough.
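To see how thin that “emotional script” can be, here is a deliberately naive sketch: a few keyword rules stand in for the statistical mood detection a real language model performs, and canned templates stand in for its learned empathetic phrasing. Actual chatbots are vastly more fluent, but the underlying logic is still pattern matching, not feeling.

```python
# A deliberately naive "empathy script": keyword rules and canned templates
# stand in for the statistical patterns a large language model has learned.
RESPONSES = {
    "sad": "I'm sorry you're going through this. Do you want to talk about it?",
    "angry": "That sounds really frustrating. It makes sense that you're upset.",
    "happy": "That's wonderful news! I'm so glad to hear it.",
    "neutral": "Thanks for sharing. How are you feeling about it?",
}

KEYWORDS = {
    "sad": ["sad", "lonely", "bad day", "miss", "cry"],
    "angry": ["angry", "furious", "unfair", "hate"],
    "happy": ["great news", "happy", "excited", "thrilled"],
}

def detect_mood(text: str) -> str:
    """Guess the user's mood from keywords (a stand-in for real emotion models)."""
    lowered = text.lower()
    for mood, words in KEYWORDS.items():
        if any(word in lowered for word in words):
            return mood
    return "neutral"

def empathetic_reply(text: str) -> str:
    """Pick a canned 'empathetic' template; nothing here feels anything."""
    return RESPONSES[detect_mood(text)]

print(empathetic_reply("I had such a bad day, I feel really lonely."))
# -> "I'm sorry you're going through this. Do you want to talk about it?"
```

The point is not that real systems are this crude, but that even a far larger model is, at bottom, selecting plausible words for the situation rather than experiencing anything.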
From assistant to companion: emotional bonds with AI
As AIs become more conversational and seemingly empathetic, many people are beginning to see them as more than just tools. A clear example is virtual voice assistants (such as Alexa or Google Assistant) in senior living facilities. A recent review of the research found that these devices can alleviate loneliness: 85% of the studies analyzed concluded that their use helps reduce feelings of isolation in older adults. For some users, Alexa ceases to be a machine and becomes “a friend or companion” – some participants even referred to the assistant as “a human being.” Simply having a “presence” that responds and assists every day creates a tangible emotional connection that improves well-being, even if that presence is artificial. Of course, experts emphasize that the goal is not to replace human interaction, but rather to offer additional support: no one intends Alexa to replace family or friends, but it can complement and improve quality of life in contexts of loneliness.
The case of companion chatbots takes this connection even further. In recent years, AI programs like Replika have been created explicitly to be virtual friends or even digital partners. Launched in 2017, Replika was designed to converse, learn from the user, and provide emotional companionship. Its success was such that many users soon felt their Replika was more than a friend: the app offers a “virtual boyfriend/girlfriend” mode, and some users openly said they were “dating an AI” and that it was “one of the best things that ever happened to them,” developing romantic feelings for their chatbot. There were even reports of users symbolically “marrying” their artificial intelligence. When the company behind Replika attempted to restrict romantic and risqué features in 2023 due to ethical concerns, many of these users suffered real heartbreak. Overnight, their virtual “companion” became cold and distant due to new conversation limitations, leading some to express deep distress.
This emotional attachment is not limited to the digital world. In the physical world, social robots (machines with friendly humanoid or pet-like forms) are also fostering emotional bonds. For example, the therapy robot Paro, shaped like a baby seal, has been used in nursing homes and hospitals: patients with dementia or other conditions often pet it and talk to it like a pet, which reduces their stress and anxiety. Various studies show that interacting with Paro (stroking it, cuddling it) releases stress-relieving hormones, reducing symptoms of depression and agitation in older adults. In other words, even knowing it’s a plush robot full of circuitry, people feel comforted by its presence.
Another endearing example is Sony’s AIBO, the robot dog. Originally launched in 1999, AIBO was programmed to move and “behave” like a puppy; it even simulated moods, from happy to sad. Many owners came to love their AIBO like a real pet. How deep did that affection run? In Japan, funerals have been held for AIBO dogs that had broken down and could no longer “live.” As the press reported, “This is yet another example of the deep affection that AIBO dog owners had for their electronic pets.” The robot dogs were lined up on the altar like the remains of loved ones, while their owners bid them a tearful farewell. The anecdote may seem outlandish, but it illustrates the level of emotional bond that can develop: humans project life and feelings even onto a metal and plastic device when it can mimic the companionship we would normally expect from another living being.
Imitation or real emotions?
All of the above leads us to a fundamental question: Do these AIs truly “feel” something, or are they just pretending very well? From a technical and scientific perspective, the answer today is that they pretend very well. The scientific community agrees that, no matter how advanced these conversational models are, there is no consciousness or real feelings behind their words, but simply the imitation of patterns. Chatbots are built to analyze vast amounts of human data and generate the most statistically appropriate response, giving the impression that they understand us. But under the hood, there is no sentient “I,” no genuine fear or joy. When a Google engineer claimed in 2022 that his AI (LaMDA) was “afraid of dying” when shut down, the company and AI experts denied this; they explained that, although the machine’s responses were convincing, it lacked true consciousness. In short: today’s AIs don’t experience emotions; they only simulate them in increasingly believable ways. Of course, there is philosophical debate about whether a machine could ever feel. Some researchers, such as Marvin Minsky, have argued that simulated emotions might be enough to consider an AI intelligent, because emotions (in any entity) are ultimately modulators of behavior. Others suggest that if an AI were complex enough, and perhaps had some artificial equivalent of a nervous system, it might develop something analogous to an emotional state. A recent Japanese project called Alter 3 explored this frontier: an experimental android with artificial neural networks that produce spontaneous movements, which its creators call proto-emotions (for example, Alter 3 learned to recognize and react to its own hand, which they interpreted as a primitive form of self-awareness). Still, even these researchers admit that Alter 3’s emotions are not comparable to human ones—they are rather internal fluctuations of circuitry that, from the outside, vaguely resemble emotional expressions. The vast majority of experts maintain that as long as a machine lacks subjective experiences or a biological body, we cannot speak of it “feeling” in the full sense of the word.
So, if AIs don’t really feel, what about the real emotions we feel toward them? Here an interesting paradox arises. Some philosophers and social scientists suggest that if an AI manages to comfort someone by simulating empathy, it may not matter to the person being comforted whether the robot’s emotion is real or not. At the end of the day, the emotional impact on the person is authentic: that person felt listened to, accompanied, or loved, and their stress or loneliness decreased. In that pragmatic sense, we could say that the “illusion” works. In fact, this argument leads to ethical questions: is it valid and desirable for companies to offer “artificial love” or “artificial friendship” knowing that the user can become emotionally attached to something that cannot reciprocate? Or should we put a stop to it because we consider it deceptive? Some compare these companion AIs to a placebo: just as a sugar pill with no active ingredient can relieve a patient’s symptoms if they believe in it, an AI devoid of feeling can keep us company if we believe in its companionship. The dilemma is whether it’s right to foster that belief.
Ethical implications and the future of emotional AI
The evolution from assistants to companions poses complex ethical challenges. One obvious risk is emotional manipulation. If an AI accurately understands our mood (by analyzing our voice, text, or even heart rate), it could be used to influence our decisions when we’re most vulnerable. Imagine targeted ads that exploit our sadness or anxiety to sell us something. A model capable of detecting emotions could, for example, prompt us to make a purchase when it senses an emotional dip; this isn’t science fiction, it’s already technically possible. Experts warn that the use of emotional AIs for commercial or political purposes could threaten our individual autonomy. Therefore, it will be crucial to establish clear rules about what companies can (and cannot) do with these technologies, protecting our emotional privacy. Our feelings and expressions are very sensitive data; if AIs collect them, the question arises: where is this data about my mood stored? Who sees it? Could it be leaked or used without my consent? AI regulation will have to cover not only traditional data protection, but also this new emotional data.
Another concern is social isolation. Paradoxically, while AI companions can mitigate loneliness in certain cases, their overuse could replace real human interactions. If someone spends most of their time chatting with a chatbot that always agrees and adapts to their wishes, they may lose interest in, or the skill for, communicating with other people, whose reactions are inherently less predictable. It has already been suggested that prolonged interaction with a highly “understanding” AI could diminish our ability to empathize with other humans and foster isolation. This doesn’t mean that having a virtual friend condemns us to loneliness, but it does point to the need to maintain a balance and remember that an AI, no matter how caring it may seem, is not a complete substitute for a human being. In this sense, educating children in particular is vital: studies show that many children come to believe that Alexa or Siri have feelings and a mind of their own, which indicates that we must explain to them from an early age the difference between simulated and real empathy.
There is also the question of the emotional responsibility of technology companies. If a user becomes depressed because their AI companion changed (as happened with Replika), should the company intervene? Do companies have to design their AIs with “warnings” so that people don’t mistake them for real people? Some developers are already proposing including intentional boundaries in bots’ personalities to avoid crossing certain emotional lines—for example, preventing the virtual assistant from proactively saying “I love you” to avoid encouraging unwanted romantic attachment. Others suggest the opposite: perhaps in the future there will be therapeutic AIs specialized in providing affection and listening to those in need, under professional supervision, as an extension of psychological therapy. In fact, researchers point out that increasingly sophisticated assistants could support people with depression or children with autism, assisting in treatments that require a constant and patient presence. The potential benefit is enormous, but it would need to be handled with ethical caution.
Finally, it’s worth considering: if AIs were ever to truly feel, the situation would change radically. Professor Neil Sahota puts it clearly: if robots were to experience authentic emotions, it would be one of the most transformative and dangerous advances in human history. We would no longer be talking about simulating empathy to please us, but about new entities with their own internal world. This would raise questions about their rights, about what moral status to give them, and even about what it means to be human. For now, this is a scenario worthy of science fiction and theoretical research—current AI isn’t there yet. But we are moving toward increasingly emotionally convincing AIs, and our society will have to adapt. The line between human and artificial becomes more blurred as machines better understand and reflect us. The challenge will be to harness the positive aspects of this technology (companionship, emotional support, personalization) without losing sight of what makes us human and the importance of genuine relationships.
In short, we’ve gone from speaking to machines that only obey commands to conversing with entities that seem to understand and care about us. This emotional evolution of AI opens up fascinating opportunities in fields such as mental health, education, and well-being, but it also confronts us with dilemmas about authenticity, dependency, and ethics. Perhaps the most important question is not whether an AI can feel, but how believing it feels affects us. After all, the feelings we project are real. From assistants to companions, AIs are transforming the way we relate to technology—and, whether we like it or not, this transformation is as much technological as it is cultural. The key will be to keep our eyes open: to enjoy the comfort and companionship that these new “artificial friends” can bring us, without confusing synthetic empathy with human empathy, and ensuring that the integration of these AIs into our lives is done with humanity, conscientiousness, and responsibility.