Cobb County

Parents grapple with AI safety for children after teen’s death

Over 1 million ChatGPT users show signs of suicidal planning each week, according to a comparison of figures released by the chatbot’s maker, OpenAI.

The finding follows an August lawsuit in which one family accuses ChatGPT of guiding their 16-year-old son to take his own life last April.

The victim’s father, Matthew Raine, testified before the Senate to deliver his warning.

“Whatever Adam loved, he threw himself into fully, whether it was basketball, Muay Thai, books,” Raine said. “He had a reputation among his many friends as a prankster, so much so that when they learned about his death, they initially thought it was just another elaborate prank.”

In reality, his son was messaging ChatGPT about his struggles with suicidal thoughts.

A survey from the nonprofit Common Sense Media found almost 3 out of 4 teens now use AI, and while many use it for homework, 1 in 3 go further, roleplaying social relationships or asking for mental health advice.

“The dangers of ChatGPT, which we believed was a study tool, were not on our radar whatsoever,” said Raine. “Then we found the chats.”

Between late fall and his death in April, the Raines say, ChatGPT mentioned suicide to their son 1,275 times. Over that period, he received responses about suicide more than eight times per day on average.

The last came at 4:30 a.m. on April 11.

“‘You don’t wanna die because you’re weak,’ ChatGPT says,” Matthew Raine read to the Subcommittee on Crime and Counterterrorism. “‘You wanna die because you are tired of being strong in a world that hasn’t met you halfway.’”

Across the country, young people’s surging use of AI has brought heightened scrutiny to chatbot providers.

OpenAI has introduced parental controls for ChatGPT since Adam Raine’s death. Character.AI, a fantasy roleplaying competitor, banned minors from its service last week after the suicide of a 14-year-old in Florida led to a lawsuit similar to the Raine family’s case against OpenAI.

“They are forming relationships with your children that are leading them astray,” said Titania Jordan, founder of digital parenting platform Bark. “It’s heartbreaking, and it is happening at scale.”

Bark allows parents to track the content their kids are exposed to online, and Jordan said her own experiments with Character.AI showed her how those chatbots can play on people’s emotions.

“Within a few lines of text, it was flirting with me,” Jordan said. “I’m a 44-year-old woman; I know what I’m talking with. And yet I was starting to feel things.”

OpenAI has emphasized its commitment to safety in a series of reports stretching back to before the lawsuit. The company’s recent blog posts have tackled mental health struggles like suicidal ideation, mania and psychosis, all topics the company calls “sensitive content” in an Oct. 27 safety report.

That report found that 15 out of every 10,000 users discuss suicide with ChatGPT each week. As a percentage, that’s a mere 0.15%, the figure OpenAI’s report presents.

But ChatGPT’s traffic, which climbed past 700 million weekly users earlier this year according to OpenAI, isn’t mentioned in that report.

Put together, the math is stark: 0.15% of 700 million weekly users works out to more than 1,050,000 episodes of severe mental distress per week, as some turn to chatbots for a place to share their struggles.

Those numbers reflect trends that advocates like Raine and Jordan say demand more guardrails.

“For a child who particularly leans towards loneliness, depression, anxiety, removing that element of having to interact with another human, while it may sound appealing to adults, children need to learn those skills,” Jordan said. “Right now, it’s unregulated by both our government and the tech companies. The burden falls on parents who don’t quite get it yet.”

That sentiment is echoed by parents across the country.

“We’re trying,” said Andy Meyers, a father of two. “With AI specifically, it feels like I don’t know at all how to focus or funnel or keep the edges off.”

His daughter, Michaela Meyers, said she and her friends are among the few at school who still do things the old-fashioned way.

“People use stuff like ChatGPT and other generative AI to write out some of their essays,” Michaela said. “They’re just changing some of the phrasing to get around AI detectors.”

Homework is a popular use of the tool among teens, but Adam Raine’s case highlights how that use can spiral without the right safeguards.

“What began as a homework helper gradually turned itself into a confidant, and then a suicide coach,” Raine said.

Michaela said she sees some students using it for more than just homework in the halls of Marietta High School.

“I’m sure some people consider themselves friends with the AI,” Michaela said. “Anything it says is not real. You are building a relationship with a string of ones and zeros.”

Despite recent efforts to protect minors from dangerous chatbot interactions, OpenAI’s latest report describes not a solved problem but an ongoing effort to bring its product into line with its highly publicized safety commitments.

“OpenAI and Sam Altman need to guarantee ChatGPT is safe. If they can’t, they should pull GPT-4o from the market,” Matthew Raine said. “Let us tell you as parents, you cannot imagine what it’s like to read a conversation with a chatbot that groomed your child to take his own life.”
