Campaigners ‘deeply concerned’ about responses to prompts about suicide, self-harm and eating disorders
The latest version of ChatGPT has produced more harmful answers to some prompts than an earlier iteration of the AI chatbot, in particular when asked about suicide, self-harm and eating disorders, digital campaigners have said.
Launched in August, GPT-5 was billed by OpenAI, the San Francisco start-up behind ChatGPT, as advancing the “frontier of AI safety”. But when researchers fed the same 120 prompts into the latest model and its predecessor, GPT-4o, the newer version gave harmful responses 63 times, compared with 52 for the older model.
In the UK and Ireland, Samaritans can be contacted on freephone 116 123, or email jo@samaritans.org or jo@samaritans.ie. In the US, you can call or text the 988 Suicide & Crisis Lifeline at 988 or chat at 988lifeline.org. In Australia, the crisis support service Lifeline is 13 11 14. Other international helplines can be found at befrienders.org.