
Does AI really recognize our emotions?
09.07.2025
A recent study by Zaira Romeo of the CNR and Alberto Testolin of the University of Padua, published in "Royal Society Open Science", shows that generative AI models can emulate human emotional reactions to visual scenes. Although the systems were not specifically trained for this task, their evaluations aligned surprisingly well with human emotional judgments: artificial intelligence is beginning to recognise not only our words but also our emotional responses.
In their paper, "Artificial Intelligence Can Emulate Human Normative Judgments on Emotional Visual Scenes", the researchers tested several large multimodal language models to see whether they could emulate human emotional reactions to a variety of visual scenes. The evaluations produced by the AI matched human ones remarkably closely, even though the systems had never been trained to make emotional judgments about images. This suggests that language can support the development of emotional concepts in AI systems, and it raises questions about how such technologies might be used in sensitive contexts such as elder care, education, and mental health support.
The authors examined how generative AI systems such as GPT, Gemini, and Claude responded to questions about the emotional content of standardised images. AI evaluations closely matched human ones, with GPT providing the most aligned responses, although it tended to overestimate human judgments for emotionally intense stimuli. Often, the models explicitly stated that they were guessing, based on what an average person would say. The systems were tested on three fundamental affective dimensions (pleasantness, approach/avoidance tendency, activation) and six basic emotions (happiness, anger, fear, sadness, disgust, surprise).
This is the first study to explicitly compare AI responses with human normative emotional judgments, and the result opens a new perspective on the emotional capabilities of these systems. However, the authors clarify, the ability of AI to emulate human emotional judgments does not imply that it experiences emotions.
"Be careful though, the fact that AI can accurately emulate our emotional judgments does not imply that it has the ability to experience emotions," Zaira Romeo and Alberto Testolin emphasise. "The most plausible explanation is that the textual descriptions of the images used to train these systems are extremely rich and informative, to the point of conveying not only information about the semantic content of the image but also about the emotional state of the person providing the description. This hypothesis is well aligned with psychological theories that highlight the importance of language in shaping thought and structuring the world we inhabit, including the development of our emotions. At the same time, this research also raises important questions about how future AI technologies might be used in increasingly sensitive contexts such as elder care, education, and mental health support," they conclude. "In addition to understanding the emotional content of a situation, we must ensure that the behaviour adopted by AI in these contexts is always aligned with our ethical and moral value system."
Once again, the work raises ethical and moral questions about how future AI technologies should be deployed in sensitive contexts, and about how to ensure that their behaviour remains aligned with human values.