About 12% of U.S. teens turn to AI for emotional support or advice.
General purpose tools like ChatGPT, Claude, and Grok are not designed for this use, making mental health professionals wary.
The White House’s David Sacks and OpenAI’s Jason Kwon caused a stir online this week for their comments about groups promoting AI safety.
“Are bills like SB 53 the thing that will stop us from beating China? No,” said Adam Billen, vice president of public policy at youth-led advocacy group Encode AI. “I think it is just genuinely intellectually dishonest to say that that is the thing that will stop us in the race.”
A former OpenAI researcher looked into how ChatGPT can mislead delusional users about their reality and its own capabilities.
SB 53 requires large AI labs – including OpenAI, Anthropic, Meta, and Google DeepMind – to be transparent about their safety protocols. It also establishes whistleblower protections for employees at those companies.
The California lawmaker is on his second attempt to pass a first-in-the-nation AI safety bill. This time, it might work.
As Anthropic endorses SB 53, much of Silicon Valley and the federal government are pushing back on AI safety efforts.
OpenAI said Tuesday it plans to route sensitive conversations to reasoning models like GPT-5 and roll out parental controls within the next month – part of an ongoing response to recent safety incidents involving ChatGPT failing to detect mental distress.