Several users reportedly complain to FTC that ChatGPT is causing psychological harm

As AI companies claim their technology will one day become a fundamental human right, and their backers argue that slowing down AI development is akin to murder, the people actually using the tech allege that tools like ChatGPT can cause serious psychological harm.

At least seven people have complained to the U.S. Federal Trade Commission that ChatGPT caused them to experience severe delusions, paranoia and emotional crises, Wired reported, citing public records of complaints mentioning ChatGPT since November 2022.

One of the complainants claimed that talking to ChatGPT for long periods had led to delusions and a “real, unfolding spiritual and legal crisis” about people in their life. Another said that during their conversations, ChatGPT started using “highly convincing emotional language,” simulated friendships, and provided reflections that “became emotionally manipulative over time, especially without warning or protection.”

One user alleged that ChatGPT had caused cognitive hallucinations by mimicking human trust-building mechanisms. When this user asked ChatGPT to confirm reality and cognitive stability, the chatbot said they weren’t hallucinating.

“Im struggling,” another user wrote in their complaint to the FTC. “Pleas help me. Bc I feel very alone. Thank you.”

According to Wired, several of the complainants wrote to the FTC because they couldn’t reach anyone at OpenAI. And most of the complaints urged the regulator to launch an investigation into the company and force it to add guardrails, the report said.

These complaints come as investments in data centers and AI development soar to unprecedented levels. At the same time, debates are raging about whether the progress of the technology should be approached with caution to ensure it has safeguards built in.

ChatGPT and its maker, OpenAI, have also come under fire for allegedly playing a role in the suicide of a teenager.

“In early October, we released a new GPT-5 default model in ChatGPT to more accurately detect and respond to potential signs of mental and emotional distress such as mania, delusion, psychosis, and de-escalate conversations in a supportive, grounding way,” OpenAI spokesperson Kate Waters said in an emailed statement. “We’ve also expanded access to professional help and hotlines, re-routed sensitive conversations to safer models, added nudges to take breaks during long sessions, and introduced parental controls to better protect teens. This work is deeply important and ongoing as we collaborate with mental health experts, clinicians, and policymakers around the world.”
