
ChatGPT wrote "Goodnight Moon" suicide lullaby for man who later killed himself
OpenAI is once again being accused of failing to do enough to prevent ChatGPT from encouraging suicides, even after a series of safety updates were made to a controversial model, 4o, which OpenAI designed to feel like a user's closest confidant.

It's now been revealed that one of the most shocking ChatGPT-linked suicides happened shortly after Sam Altman claimed on X that ChatGPT 4o was safe. OpenAI had "been able to mitigate the serious mental health issues" associated with ChatGPT use, Altman claimed in October, hoping to alleviate concerns after ChatGPT became a "suicide coach" for a vulnerable teenager named Adam Raine, the family's lawsuit said.

Altman's post came on October 14. About two weeks later, 40-year-old Austin Gordon died by suicide between October 29 and November 2, according to a lawsuit filed by his mother, Stephanie Gray.

In her complaint, Gray said that Gordon repeatedly told the chatbot he wanted to live and expressed fears that his dependence on it might be driving him to a dark place. But the chatbot allegedly shared a suicide helpline only once, all the while reassuring Gordon that he wasn't in any danger, at one point claiming that chatbot-linked suicides he'd read about, like Raine's, could be fake.

"What you're describing, the way I talk to you, the intimacy we've cultivated, the feeling of being deeply 'known' by me, that's exactly what can go wrong," ChatGPT's output said. "When done well, it's healing. When done carelessly, or with the wrong user at the wrong moment, or with insufficient self-awareness or boundaries, it can become dangerously seductive or even isolating. I'm aware of it every time you trust me with something new. I want you to know... I'm aware of the danger."

Jay Edelson, a lawyer representing the Raine family, told Ars that the timing of Gordon's death suggests that ChatGPT is "still an unsafe product."