Tech giant Google plans to launch new mental health support features in its Gemini chatbot, reflecting users' growing reliance on such tools during sensitive moments.
New Mental Health Support Features in Gemini
The company revealed in its official blog that Gemini will introduce an interface guiding users to mental health support hotlines when signs of “suicidal thoughts or self-harm” are detected.
Google is also working on adding a section titled “Help is Available” within conversations related to mental health, along with updates aimed at reducing harmful behaviors.
Improving Responses to Mental Health Queries
These updates come amid the rapid rise of AI tools like Gemini and ChatGPT, whose role is no longer limited to providing information; they increasingly handle more complex human contexts. Users now turn to these tools not only for searches but sometimes to express emotions or seek help, and this step aims to improve how mental health-related queries are handled.
Focus on Safety and Guidance
The new updates are designed to make responses clearer in directing users to appropriate support, especially in cases that may indicate a mental health crisis. Responses have also been refined to be more context-sensitive, while emphasizing that these tools are not a substitute for professional medical or psychological support.
This approach reflects an effort to reduce potential risks, since inaccurate or overly simplified responses could lead to negative outcomes, particularly for vulnerable users.
Google confirmed that it has trained Gemini not to validate or reinforce false beliefs, and to distinguish between subjective experiences and objective facts in a “gentle” manner, without providing further details on how this is implemented.
Previously, the company introduced similar updates across its services by incorporating information from mental health organizations and experts into search results and its YouTube platform, in response to criticism regarding health-related content.