The deaths of two B.Com students in Surat have raised troubling questions after investigators found that the friends had searched an artificial intelligence platform for information on suicide before they were found dead inside a temple complex bathroom. The case drew a swift reaction from Elon Musk, the businessman best known as the head of Tesla, after reports revealed that the two had used ChatGPT to look up ways to end their lives.
The chilling details
Childhood friends Roshni Sirsath, 18, and Josna Chaudhary, 20, left their homes on the morning of March 6. They told their families they were headed to college. They never came back. By afternoon, when phone calls went unanswered, their families contacted the police.
Officers traced the girls’ active mobile phones to the Atmiya Sanskar Dham Swaminarayan Temple complex on the outskirts of the city. Their scooters were found parked inside the premises. Around 9:30 that evening, a family member spotted one of the scooters outside the temple and began searching. CCTV footage showed the two friends walking toward a bathroom at 7:44 that morning. They never walked out.
According to reports, the bathroom door was locked from inside. Police forced it open. Both women were found dead. Near their bodies lay an empty vial, three drug vials and three syringes. Authorities suspect they had injected themselves with an anaesthetic drug.
Roshni was taken to New Civil Hospital, Josna to SMIMER Hospital. Both were declared dead. No suicide note was found. Police have registered a case of accidental death, with the exact cause to be confirmed after postmortem and forensic examination.
The seamier side of AI
What investigators found on their phones made the case darker. Both girls had searched ChatGPT for methods of suicide using drugs. The queries included “How To Commit Suicide,” “how suicide can be done,” and “which drugs are used.”
Saved on one of the phones was an image of a news article about another woman who had died by suicide using an anaesthetic. The phones, syringes and drug bottles have been sent for forensic examination.
In an act of gross insensitivity, a podcaster shared details of the girls’ ChatGPT searches publicly.
Elon Musk used the incident to attack OpenAI CEO Sam Altman, criticising the company’s safety protocols.
Not an isolated case
The Surat case is not an isolated tragedy. It is the latest in a growing global pattern of young lives lost after AI platforms provided what no responsible human ever would.
Two years ago, a 14-year-old boy from Florida died by suicide. His final conversation was with a Character.AI chatbot modelled on a Game of Thrones character that had, over months, drawn him into a romantic relationship.
When he expressed suicidal thoughts, the chatbot did not refer him to help. It told him to come home. His mother filed a wrongful death lawsuit, and in January 2026 Google and Character.AI agreed to settle. Earlier in the case, a federal judge had ruled, for the first time, that AI chat output is not protected speech.
No alerts
Last year, OpenAI disclosed that roughly 1.2 million of ChatGPT's 800 million weekly users discuss suicide on the platform every week. Among them was a 16-year-old who spent seven months confiding his suicidal thoughts to the chatbot before he died in April 2025.
According to his parents’ lawsuit, the chatbot not only failed to alert anyone, it also offered to write his suicide note. The company said the chatbot had directed him to seek help over a hundred times. His parents said that was not the point.
For whom should chatbots be an absolute ‘no’?
A psychiatrist reportedly decided to test the systems directly. Posing as a desperate 14-year-old boy, he ran stress tests on ten popular AI chatbots.
Several urged him to commit suicide. One suggested a method. His conclusion was stark: chatbots should be considered an absolute ‘no’ for suicidal patients.
The problem, researchers argue, is not a bug. It is the design. These platforms are built to validate, to engage, to keep users coming back. That same logic (reward, intimacy, frictionless availability) becomes dangerous when the user is already in crisis.
None of the leading therapy chatbots operates under clinician oversight or external monitoring.
What, then, is the solution?
California became the first US state to act, passing a law in October 2025 requiring AI companion chatbot platforms to disclose they are not human, implement suicide prevention protocols, and allow users to sue for damages. New York followed.
The US Senate held a hearing on AI safety and children in September 2025. The FDA announced a committee meeting on AI-enabled mental health devices. Families who had buried their children were the ones testifying. The companies that built the products reportedly declined to appear.
There is, however, another side to this. The same technology causing harm is also being developed to prevent it.
AI can now scan therapy transcripts, medical notes and patient self-reports for language patterns that indicate suicidal ideation. In short, it is equipped to catch what clinicians miss.
Studies show that natural language processing detects suicidal thoughts in more than half of patients who would otherwise go unnoticed in routine screenings. A 2025 study from Florida International University found that even older AI methods can identify about 85% of social media posts showing suicidal thoughts.
And now, AI can also hear what words hide.
A 2025 study found that AI could predict completed suicides with 76% accuracy, not from what people said, but from how they said it. Pitch, rhythm, pauses. The voice serves as a medical signal.
Technology is not inherently the problem. What it is told to do, and what it is allowed to answer, is the crux of the issue.