Can an algorithm be empathetic? – What can you—and can’t you—expect from artificial intelligence?

Reading time: 7 minutes

“I understand that this might be difficult for you.” These days, a statement like this might not only come from a friend or a psychologist, but also appear in a chatbot’s response. In recent years, artificial intelligence has become increasingly prevalent in our daily lives: customer service chatbots answer our questions, digital assistants help us with our work, and more and more apps are trying to respond sensitively to users’ emotional states. But does artificial intelligence really understand us, or does it just mimic empathy extremely effectively?

What does empathy mean—and what does AI know about it?

To understand this, it’s worth first clarifying what empathy actually means. Psychology distinguishes between two main components. Cognitive empathy is the ability to recognize and interpret others’ emotional states: we understand what the other person is feeling and why. Affective empathy goes a step further: in this case, we not only recognize emotions but also experience them to some extent ourselves. A friend’s sadness evokes sadness in us, while joyful news lifts our spirits as well. Today’s artificial intelligences are capable of modeling at most the first level of empathy, while the ability to truly experience emotions remains uniquely human.

How does AI understand emotions?

The key to artificial intelligence’s ability to act with empathy lies in affective computing, which enables systems to recognize and interpret emotional cues and then provide appropriate responses. When a chatbot responds to a stressful situation in a reassuring tone, it is actually selecting the appropriate style and content based on learned patterns. Developers are increasingly building systems that not only convey information but also respond to the user’s emotional state.

Research also indicates that people tend to relate to machines as social beings. Interacting with a well-designed algorithm activates the same psychological mechanisms as a human relationship. It follows that artificial intelligence that appears empathetic is not merely a technological phenomenon, but also a psychological one. Behind these empathetic responses, however, lie not genuine compassion but algorithm-generated reactions; from the perspective of human perception, this distinction is often blurred.
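The pattern-based selection described above can be illustrated with a toy sketch: detect an emotional cue in the user’s text, then pick a matching response style. The lexicon entries and response templates here are illustrative assumptions, not taken from any real system.

```python
# Toy affective-computing pipeline: map cue words to an emotion label,
# then select a response template for that label. Lexicon and templates
# are illustrative only.

EMOTION_LEXICON = {
    "stress": {"stressed", "anxious", "overwhelmed", "worried"},
    "sadness": {"sad", "lonely", "hopeless", "down"},
    "joy": {"happy", "excited", "glad", "great"},
}

RESPONSE_TEMPLATES = {
    "stress": "That sounds stressful. Would it help to break the problem into smaller steps?",
    "sadness": "I'm sorry you're feeling this way. Do you want to talk about what happened?",
    "joy": "That's wonderful to hear! What made today go so well?",
    "neutral": "Thanks for sharing. Can you tell me more?",
}

def detect_emotion(text: str) -> str:
    """Return the emotion whose cue words appear most often in the text."""
    words = set(text.lower().split())
    scores = {emotion: len(words & cues) for emotion, cues in EMOTION_LEXICON.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

def respond(text: str) -> str:
    """Select a response template based on the detected emotional state."""
    return RESPONSE_TEMPLATES[detect_emotion(text)]

print(detect_emotion("I feel so stressed and overwhelmed at work"))  # stress
```

Real systems replace the word lexicon with learned classifiers, but the two-stage structure (recognize an emotional state, then condition the response on it) is the same.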


AI as a tool for self-awareness

However, this observation does not mean that AI is not useful—in fact, one of its most interesting areas of application is self-improvement. Modern language models—including conversation-based systems—are increasingly being used as tools that support self-reflection and the development of emotional awareness. Because these systems can identify and articulate emotions, they can help users more accurately name their own internal states. This process in itself fosters emotional awareness, as it encourages us to make our inner experiences explicit.

Self-reflection is further enhanced by the fact that artificial intelligence can guide the thought process through structured questions and feedback, which can help identify recurring negative thoughts, overgeneralizations, beliefs, or distortions, and offer alternative phrasing. This does not replace therapy, but it is similar to certain cognitive behavioral therapy techniques aimed at making thinking more conscious and reframing it. One form of this, for example, is AI-assisted digital journaling, which not only records thoughts but also provides feedback on them, allowing the user to discover patterns in their own behavior over time.
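The pattern-spotting side of AI-assisted journaling can be sketched very simply: scan entries for overgeneralization markers (words like “always” and “never”, which some cognitive behavioral techniques flag) and report which ones recur. The marker list and feedback wording below are illustrative assumptions.

```python
# Toy sketch of AI-assisted journaling feedback: count overgeneralization
# markers across entries and, if any recur, return a reflective prompt.
# The marker set is illustrative, not a clinical instrument.

import re
from collections import Counter

OVERGENERALIZATION_MARKERS = {"always", "never", "everyone", "nobody", "everything", "nothing"}

def find_patterns(entries: list[str]) -> Counter:
    """Count overgeneralization markers across all journal entries."""
    counts = Counter()
    for entry in entries:
        for word in re.findall(r"[a-z']+", entry.lower()):
            if word in OVERGENERALIZATION_MARKERS:
                counts[word] += 1
    return counts

def feedback(entries: list[str]) -> str:
    """Produce a short reflective prompt if any marker appears repeatedly."""
    recurring = sorted(w for w, n in find_patterns(entries).items() if n >= 2)
    if not recurring:
        return "No recurring overgeneralizations found."
    return ("You often use " + ", ".join(recurring) +
            ". Was there a specific situation where this wasn't the case?")
```

For example, the entries “I always mess up presentations.” and “Nobody listens; I always fail.” would trigger feedback on the recurring word “always”—the kind of small, explicit observation the paragraph above describes.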

In the field of emotional processing, these technologies can also play a complementary role, even within a therapeutic process. Although helping professionals remain indispensable in complex emotional situations, artificial intelligence may also be able to provide support in certain contexts. Several studies have demonstrated that interacting with such systems can improve users’ emotional state, partly because continuous feedback and non-judgmental communication create a safe space for expressing thoughts. Furthermore, the machine, as a conversation partner, contributes to emotional processing in a unique way: its presence is constant and immediately available, its reactions are quick, and it does not burden the user with social expectations.


Communication and conflict management

However, the role of artificial intelligence is not limited to supporting self-awareness processes; it can also play a role in improving communication. For example, it can help formulate a difficult message, reflect on a conflict situation, or examine a problem from multiple perspectives. This can be particularly useful when someone is unsure how to express themselves or wants to avoid misunderstandings. AI can rephrase sentences in a more empathetic and constructive way, which can contribute to more mindful and effective communication. In certain situations, it can also act as a mediator: it compares differing viewpoints, highlights common ground, and makes suggestions aimed at reaching a consensus.
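A minimal sketch of this kind of rephrasing, assuming a rule-based reframe of accusatory “you always/never …” openers into “I”-statements (a common communication-skills technique; the rules and wording are illustrative, and real assistants do this with language models rather than regexes):

```python
# Toy empathy-oriented rephraser: rewrite accusatory openers as
# "I"-statements; leave everything else unchanged. Rules are illustrative.

import re

REFRAMES = [
    (re.compile(r"^you always (.+)$", re.IGNORECASE),
     r"I feel frustrated when you \1"),
    (re.compile(r"^you never (.+)$", re.IGNORECASE),
     r"I would appreciate it if you would \1"),
]

def reframe(message: str) -> str:
    """Rewrite an accusatory opener as an 'I'-statement; otherwise pass through."""
    text = message.strip()
    for pattern, template in REFRAMES:
        if pattern.match(text):
            return pattern.sub(template, text)
    return message
```

So “You never listen to me” becomes “I would appreciate it if you would listen to me”—the same complaint, phrased as a need rather than an accusation.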

AI in the field of mental health

However, all these potential applications become particularly sensitive issues when artificial intelligence enters the realm of mental health and therapeutic practice. The integration of AI in this field raises not only technological challenges but also serious ethical and professional dilemmas. One of the most important issues is data protection and the handling of confidential information. Healthcare applications often process highly sensitive personal data, which raises the issue of how this information is stored and used. At the same time, the workings of complex algorithms are often opaque, so it is not certain that users fully understand the system they are interacting with or the potential consequences of sharing their data.

A significant difference can also be observed in terms of the therapeutic relationship: one of the cornerstones of psychotherapy is the trusting relationship between therapist and client, which is built on the subtle dynamics of mutual understanding and emotional presence. Artificial intelligence, by contrast, is unable to form genuine connections, which can undermine the effectiveness of the therapeutic process in the long run. Although some users may develop an attachment to such systems, this attachment is one-sided, and there is a risk that it will begin to take the place of real human relationships.

Risks and dilemmas

Another problem is the misinterpretation of emotions. Even the most advanced systems can make mistakes in identifying emotional cues, which can lead to inappropriate or even harmful responses. This can be particularly critical in vulnerable situations, where an inaccurate reaction may not only fail to help but could actually exacerbate the user’s negative state. In this context, there is also a risk that therapeutic care will increasingly rely on algorithmic logic, which in the long term could lead to the depersonalization or alienation of mental health care, pushing the importance of human relationships in the helping process into the background.

In light of all this, it is worth viewing the empathetic capabilities of artificial intelligence as a tool rather than a substitute. It can support self-reflection, help articulate and structure emotions, and contribute to more mindful communication and conflict resolution. It is important, however, that we use such tools as supplementary support and not as a substitute for human assistance, since artificial intelligence lacks the deeper human qualities that make empathy a genuine connection. The algorithm is therefore not empathetic in the strict sense of the word, but it is capable of creating interactions that evoke the experience of empathy—and this duality determines how we should approach it in our daily lives.

Bibliography

Alfraih, S. S. (2025). The Ethics of AI in Mental Health: A Psychological Examination of Digital Therapeutic Interventions. Studies in Systems, Decision and Control. 587. 2899–2910. https://doi.org/10.1007/978-3-031-87584-7_212

Brailas, A., & Tsolakis, L. (2025). Questions People Ask ChatGPT Regarding Their Romantic Relationships and What They Think About the Provided Answers: An Exploratory Study. LNCS. 15545. 150–158. https://doi.org/10.1007/978-3-031-88045-2_10

Carneiro, L., & Gomes, A. (2025). Applications of artificial intelligence use in therapeutic interventions: A multidisciplinary approach. AI in Mental Health: Innovations, Challenges, and Collaborative Pathways. 167–211. https://doi.org/10.4018/979-8-3373-5072-1.ch008

Gadiraju, R., Kavadikijanekunte, A., & Karnala, T. K. (2025). The Moral Algorithm: Ethics in AI-Supported Mental Health. Wearable AI in Psychotherapy. 183–212. https://doi.org/10.4018/979-8-3373-0467-0.ch007

Giotakos, O. (2025). Artificial intelligence-based psychotherapy: Focusing on common psychotherapeutic factors. Frontiers in Psychiatry. 16. https://doi.org/10.3389/fpsyt.2025.1710715

Gold, N. S. S., & Kanna, S. Y. (2026). Artificial Intelligence: A Replacement or a Complement to Psychotherapy in Neurocognitive Rehabilitation? The Routledge International Handbook of Neurocognitive Rehabilitation: Practices, Innovations, and Future Directions. 103–118. https://doi.org/10.4324/9781003646662-8

Howcroft, A., & Blake, H. (2025). Empathy by Design: Reframing the Empathy Gap Between AI and Humans in Mental Health Chatbots. Information (Switzerland). 16(12). https://doi.org/10.3390/info16121074

Mandal, S., & Hawamdeh, M. M. K. (2025). Digital well-being and AI: Navigating the intersection between technology and mental health. Digital Citizenship and the Future of AI Engagement, Ethics, and Privacy. 111–132. https://doi.org/10.4018/979-8-3693-9015-3.ch004

Perlis, R. H. (2026). Artificial Intelligence and the Potential Transformation of Mental Health. JAMA Psychiatry. 83(4). https://doi.org/10.1001/jamapsychiatry.2025.4116

Rao, T. V. N., Deepika, J. V. P. U., Uppala, V., & Swetha, C. (2025). The Rise of Artificial Empathy: How Machines Are Reshaping Human Conversation. Advancements in Speech Processing for Human-Computer Interaction. 25–53. https://doi.org/10.4018/979-8-3373-3048-8.ch002

Raygoza-L., M. E., Orduño-Osuna, J. H., Jimenez-Sanchez, R., & Murrieta-Rico, F. N. (2025). Innovative Artificial Intelligence approaches for identifying and managing DSM Cluster B personality disorders in mental health: A case study on the dark triad. Exploring Psychology, Social Innovation and Advanced Applications of Machine Learning. 1–20. https://doi.org/10.4018/979-8-3693-6910-4.ch001

Ruan, Q.-N., Hu, S.-Q., ShangGuan, Z.-H., & Zhou, S.-M. (2026). The augmented clinician as a framework for human-AI collaboration in mental healthcare. Frontiers in Psychiatry. 17. https://doi.org/10.3389/fpsyt.2026.1729175

Segal, M. (2025). Confronting and managing ethical dilemmas in social work using ChatGPT. European Journal of Social Work. 28(1). 155–167. https://doi.org/10.1080/13691457.2024.2377786

Segal, M. (2026). Social workers’ evaluation of ChatGPT for solving ethical dilemmas within the limits of confidentiality. Journal of Social Work Practice. 40(1). 5–18. https://doi.org/10.1080/02650533.2025.2480092

Suresh, R. V., Balamurugan, S., Karthick, R., & Senthilkumar, S. (2025). Artificial Intelligence with Emotional Intelligence via Hume AI Techniques. 1363–1368. https://doi.org/10.1109/ICSCDS65426.2025.11166916

Teixeira, R. (2026). Emotional AI Applied to Mental Health: An Ethical and Philosophical Analysis. LNCS. 16108. 421–430. https://doi.org/10.1007/978-3-032-04999-5_25

Varma, A., Saraiya, A. S., Maheshwari, E., Tuli, K., Srivastava, S., & Suresh, S. (2026). Advancements in Psychotherapy and Treatment: The Use of AI Interventions for Psychopathologies. AI-Driven Insights Into the Depths of Psychopathologies. 211–260. https://doi.org/10.4018/979-8-3373-1325-2.ch007

Villacís-Guerrero, J. D. P., Chancusig-Espín, W., Hurtado-Caina, J. S., & Chiza, J. C. (2026). Risks and Ethical Challenges of Emotional Intelligence in Conversational Agents. 149–161. https://doi.org/10.1007/978-3-032-10310-9_10

Weng, Z., Huang, Y., & Weng, S. (2026). From Code to Care: How Artificial Empathy Enhances Customer Experience in Human-Robot Interaction. Journal of Business Research. 206. https://doi.org/10.1016/j.jbusres.2026.115969

Yankouskaya, A., Liebherr, M., & Ali, R. (2025). Can ChatGPT Be Addictive? A Call to Examine the Shift from Support to Dependence in AI Conversational Large Language Models. Human-Centric Intelligent Systems. 5(1). 77–89. https://doi.org/10.1007/s44230-025-00090-w

Zhang, Y., Fang, J., Luo, X., Lindsay, D., Madre, N., Paredes, J., Penna, A., Melley, E., & Garcia, T. (2025). Exploring the efficacy of ChatGPT in understanding and identifying intimate partner violence. Family Relations. 74(3). 1233–1249. https://doi.org/10.1111/fare.13176 
