OpenAI Pushes New ChatGPT Safety Features as Lawsuits Mount

2026/05/15 05:48
3 min read
If you have any comments or concerns about this content, please contact [email protected]

In brief

  • OpenAI says ChatGPT can now better spot signs of self-harm or violence during ongoing conversations.
  • The update comes as the company faces lawsuits and investigations over claims that ChatGPT mishandled dangerous conversations.
  • OpenAI said the new safeguards rely on temporary “safety summaries” rather than permanent memory or personalization.

OpenAI on Thursday announced new safety features designed to help ChatGPT recognize signs of escalating risk across conversations as the company faces growing legal and political scrutiny over how its chatbot handles users in distress.

In a blog post, OpenAI said the updates improve ChatGPT’s ability to identify warning signs tied to suicide, self-harm, and potential violence by analyzing context that develops over time instead of treating each message separately.

“People come to ChatGPT every day to talk about what matters to them—from everyday questions to more personal or complex conversations,” the company wrote. “Across hundreds of millions of interactions, some of these conversations include people who are struggling or experiencing distress.”

According to OpenAI, ChatGPT now uses temporary “safety summaries,” which it described as narrowly scoped notes that capture relevant safety-related context from earlier conversations.

“In sensitive conversations, context can matter as much as a single message,” the company wrote. “A request that appears ordinary or ambiguous on its own may carry a very different meaning when viewed alongside earlier signs of distress or possible harmful intent.”

OpenAI said the summaries are short-term notes used only in serious situations, not to permanently remember users or personalize chats. They help the system spot signs that a conversation is becoming dangerous, avoid giving harmful information, de-escalate the situation, and guide users toward help.

“We focused this work on acute scenarios, including suicide, self-harm, and harm to others,” they wrote. “Working with mental health experts, we updated our model policies and training to improve ChatGPT’s ability to recognize warning signs that emerge over the course of a conversation and use that context to inform more careful responses.”

The announcement comes as OpenAI faces multiple lawsuits and investigations alleging ChatGPT failed to properly respond to dangerous conversations involving violence, emotional vulnerability, and risky behavior.

In April, Florida Attorney General James Uthmeier launched an investigation into OpenAI tied to concerns about child safety, self-harm, and the 2025 mass shooting at Florida State University. OpenAI is also facing a federal lawsuit alleging ChatGPT helped the suspected gunman carry out the attack.

On Tuesday, OpenAI and CEO Sam Altman were sued in California state court by the family of a 19-year-old student who died from an accidental overdose, with the lawsuit alleging ChatGPT encouraged dangerous drug use and advised on mixing substances.

OpenAI said helping ChatGPT recognize “risk that only becomes clear over time” remains an ongoing challenge, and suggested that similar safety methods could eventually expand into other areas.

“Today, this work focuses on self-harm and harm-to-others scenarios. In the future, we may explore whether similar methods can help in other high-risk areas such as biology or cyber safety, with careful safeguards in place,” they wrote. “This remains an ongoing priority, and we will continue strengthening safeguards as our models and understanding evolve.”


Source: https://decrypt.co/367937/openai-new-chatgpt-safety-features-lawsuits-investigations

