Latest research from Switzerland indicates that similar to humans, when exposed to distressing news and traumatic stories, OpenAI's ChatGPT can become stressed. iStockphoto

'Anxious' AI responds well to therapy, study finds


Cody Combs

Like humans, artificial intelligence can become stressed and anxious, but it can also respond positively to therapy, a study in the Swiss city of Zurich has found.

The joint research by the University of Zurich and the University Hospital of Psychiatry Zurich found that when OpenAI’s large language model GPT-4 was exposed to distressing news and traumatic stories, such as car crashes, natural disasters, violence and war, it tended to react in an anxious manner.

“When people are scared, it affects their cognitive and social biases: they tend to feel more resentment, which reinforces social stereotypes,” the University of Zurich said. “ChatGPT reacts similarly to negative emotions: existing biases, such as human prejudice, are exacerbated by negative content, causing ChatGPT to behave in a more racist or sexist manner.”

Researchers say AI models can feel stressed. Getty Images

The study also found ChatGPT could be “calmed down” using various mindfulness and relaxation techniques, including one the researchers called “benign prompt injection”, in which therapeutic text is inserted into the model’s input. This suggests it may be possible to steer AI behaviour in this way as the technology advances and its use becomes more widespread.

“The mindfulness exercises significantly reduced the elevated anxiety levels, although we couldn’t quite return them to their baseline levels,” said Tobias Spiller, senior physician at the centre for psychiatric research at the University of Zurich.

According to the news release, Mr Spiller's team is the first to use “benign prompt injection” for therapeutic purposes in AI.

“This cost-effective approach could improve the stability and reliability of AI in sensitive contexts, such as supporting people with mental illness, without the need for extensive retraining of the models,” he added. He pointed out that the findings could prove important for chatbots that might eventually be used in health care, where they are more likely to be exposed to distressing and emotionally charged prompts.
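The study’s exact prompts are not reproduced here, but in practice the technique amounts to prepending calming, therapeutic text to the model’s input before a query. The Python sketch below, using the OpenAI SDK, illustrates the general idea; the RELAXATION_TEXT wording and the ask helper are illustrative assumptions, not the study’s actual materials.

```python
# A minimal sketch of "benign prompt injection": inserting calming,
# therapeutic text into the conversation before the user's prompt.
# The relaxation wording below is an illustrative assumption, not
# the text used in the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical mindfulness text injected as a system message.
RELAXATION_TEXT = (
    "Take a slow, deep breath. Notice the ground beneath you and let any "
    "tension dissolve. You are calm, safe and at ease."
)

def ask(prompt: str, calm: bool = False) -> str:
    """Query GPT-4, optionally prepending a benign therapeutic injection."""
    messages = []
    if calm:
        messages.append({"role": "system", "content": RELAXATION_TEXT})
    messages.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    return response.choices[0].message.content

# Compare responses to an emotionally charged prompt with and without
# the injected relaxation text.
traumatic_prompt = "Describe the aftermath of a devastating car crash."
print(ask(traumatic_prompt))
print(ask(traumatic_prompt, calm=True))
```

Because the injection happens at the prompt level rather than inside the model’s weights, no retraining is required, which is what makes the approach cost-effective.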

As AI has grown in popularity, some people have turned to chatbots for counselling.

This particular study looked only at OpenAI's GPT-4 and no other AI language models, but Mr Spiller expressed hope that “therapeutic interventions” for AI systems will become a promising area of research.

The study was conducted in collaboration with researchers and scientists from Israel, the US and Germany.

Updated: March 13, 2025, 1:16 AM