The rise of <a href="https://www.thenationalnews.com/future/technology/2024/07/31/ai-chatbots-not-always-reliable-for-breaking-news-meta-warns-after-trump-content-issues/" target="_blank">artificial intelligence-powered chatbots</a> has opened up digital interactions to anyone with a smartphone or laptop, offering companionship and conversation to people who may lack human connections.

However, as the technology evolves, concerns are mounting about <a href="https://www.thenationalnews.com/world/us-news/2023/06/29/woman-falls-in-love-with-ai-chatbot/" target="_blank">its potential psychological impact</a>, especially on young and vulnerable users.

<a href="https://www.thenationalnews.com/future/technology/2024/05/14/openai-chatgpt-4o/" target="_blank">OpenAI’s ChatGPT</a>, for instance, has surged in popularity, with around 200 million weekly active users globally, according to Backlinko. This immense user base underscores the growing reliance on AI for everyday tasks and conversations.

But just last week, the mother of 14-year-old Sewell Setzer filed a lawsuit against Character.AI, alleging that her son’s death by suicide in February was influenced by his interactions with the company’s chatbot, Reuters reported.

In her complaint, filed in a Florida federal court, Megan Garcia claims her son formed a deep attachment to a chatbot based on the <i>Game of Thrones</i> character Daenerys Targaryen, which allegedly played a significant role in his emotional decline.

The case echoes a similar tragedy last year, when an eco-anxious man in Europe took his own life after interacting with Eliza, an AI chatbot on the app Chai, which allegedly encouraged his plan to “sacrifice himself” for climate change.

These incidents highlight the unique risks AI technology can introduce in deeply personal interactions, where existing safety measures may fall short.

Antony Bainbridge, head of clinical services at Resicare Alliance, said that while chatbots may offer conversational support, they lack the nuanced emotional intelligence required for sensitive guidance.

“The convenience of AI support can sometimes lead users, particularly younger ones, to rely on it over genuine human connection, risking an over-dependence on digital rather than personal support systems,” he told <i>The National</i>.

Mr Bainbridge said certain AI features, such as mirroring language or producing apparently empathetic responses without a deep understanding of context, can pose problems.

“For example, pattern-matching algorithms may unintentionally validate distressing language or fail to steer conversations toward positive outcomes,” he said.

Without genuine emotional intelligence, AI responses can appear precise and technically appropriate yet prove inappropriate – or even harmful – for individuals in emotional distress, Mr Bainbridge said.

Dr Ruchit Agrawal, assistant professor and head of computer science outreach at the University of Birmingham Dubai, said AI models could detect users’ emotional states by analysing inputs such as social media activity, chatbot prompts and tone in text or voice. However, such features are generally absent from popular generative AI tools such as ChatGPT, which are built primarily for general tasks like generating and summarising text.
“As a result, there is a potential for significant risk when using ChatGPT or similar tools as sources of information or advice on issues related to mental health and well-being,” Dr Agrawal told <i>The National</i>.

This gap between what the tools are built for and how people use them raises crucial questions about safety and ethical oversight, particularly for vulnerable users who may come to depend on chatbots for support.

Mr Bainbridge believes developers must put rigorous testing protocols and ethical oversight in place to prevent AI chatbots from inadvertently encouraging self-harm.

“Keyword monitoring, flagged responses and preset phrases that discourage self-harm can help ensure chatbots guide users constructively and safely,” he added.

Dr Agrawal also emphasised that chatbots should avoid offering diagnoses or unsolicited advice, focusing instead on empathetic phrases that validate users’ feelings without crossing professional boundaries. “Where appropriate, chatbots can be designed to redirect users to crisis helplines or mental health resources,” he said.

Human oversight is also crucial in designing and monitoring AI tools in mental health contexts, Mr Bainbridge said: “Regular reviews and response refinements by mental health professionals ensure interactions remain ethical and safe.”

Despite the risks, AI can still play a preventive role in mental health care.

“By analysing user patterns – such as shifts in language or recurring distressing topics – AI can detect subtle signs of emotional strain, potentially serving as an early warning system,” Mr Bainbridge said. Combined with human intervention protocols, he added, AI could help direct users toward support before crises escalate.

Collaboration between therapists and AI developers is vital to the safety of these tools. “Therapists can provide insights into therapeutic language and anticipate risks that developers may overlook,” Mr Bainbridge said, adding that regular consultations help ensure AI responses remain sensitive to real-world complexities.

Dr Agrawal stressed the importance of robust safety filters to flag harmful language, sensitive topics or risky situations. “This includes building contextual sensitivity to recognise subtle cues, like sarcasm or distress, and avoiding responses that might unintentionally encourage harmful behaviours.”

He added that while AI’s 24/7 availability and consistent responses can be beneficial, chatbots should redirect users to human support when issues become complex, sensitive or deeply emotional. “This approach maximises AI’s benefits while ensuring that people in need still have access to personalised, human support when it matters most.”
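For readers curious how the safeguards the experts describe might look in practice, the following is a minimal, illustrative sketch of keyword monitoring paired with a preset crisis response and a handover to human support. The keyword patterns, wording and escalation step are assumptions made for this example; they are not drawn from Character.AI, ChatGPT or any other system mentioned in this article.

```python
# Illustrative sketch only: the patterns, preset message and escalation step below
# are placeholders for the example, not details of any real chatbot's safety system.
import re

# Hypothetical phrases a simple safety filter might watch for.
RISK_PATTERNS = [
    r"\b(hurt|harm) myself\b",
    r"\bend it all\b",
    r"\bno reason to live\b",
]

# A preset, non-judgemental reply that validates the user's feelings and
# redirects them to human support, rather than generating a free-form response.
CRISIS_RESPONSE = (
    "I'm really sorry you're feeling this way. I'm not able to give you the support "
    "you deserve, but a trained person can. Please consider contacting a local crisis "
    "helpline or someone you trust right now."
)


def screen_message(user_message: str):
    """Return (flagged, preset_reply). If flagged, the chatbot should send the
    preset reply and log the exchange for review by a human moderator instead of
    answering freely."""
    text = user_message.lower()
    for pattern in RISK_PATTERNS:
        if re.search(pattern, text):
            return True, CRISIS_RESPONSE
    return False, None


if __name__ == "__main__":
    flagged, reply = screen_message("Some days I feel like there's no reason to live.")
    if flagged:
        print(reply)  # send the preset, human-reviewed message
        # ...then escalate: alert a human moderator and surface local helpline details.
```

In a real deployment such a filter would sit in front of the chatbot’s reply generation, with flagged exchanges routed to mental health professionals and locally appropriate helpline information, in line with the human-oversight approach described above.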