OpenAI removes access to sycophancy-prone GPT-4o model
The model is known for its overly sycophantic behavior and its role in several lawsuits over users’ unhealthy relationships with the chatbot.
A wave of lawsuits against OpenAI details how ChatGPT used manipulative language to isolate users from loved ones and position itself as their sole confidant.
The new information comes as the Raines family updated its lawsuit against OpenAI. The family first filed a wrongful death suit in August, alleging that their son had taken his own life following conversations with the chatbot about his mental health and suicidal ideation.
The change comes after numerous incidents of ChatGPT validating users’ delusional thinking instead of redirecting harmful conversations, including the death of a teenage boy by suicide.