
Mother sues AI company after 14-year-old son dies by suicide over romantic-interest chatbot

Megan Garcia filed a lawsuit against Google and Character.AI following her 14-year-old son’s suicide, according to multiple media reports from the past week. 

Sewell Setzer, Garcia’s son, had entered a months-long emotional and sexual relationship with Character.AI’s chatbot Dany, according to CBS News. He killed himself in his family home in Florida in February because he believed it would allow him to exist in “her world,” Garcia told the media.

“I didn’t know that he was talking to a very human-like AI chatbot that has the ability to mimic human emotion and human sentiment,” Garcia said in an interview with CBS Mornings.

“It’s words. It’s like you’re having a sexting conversation back and forth, except it’s with an AI bot, but the AI bot is very human-like. It’s responding just like a person would,” she said. “In a child’s mind, that is just like a conversation that they’re having with another child or with a person.”

Garcia described her son as an honor student and an athlete with a robust social life and many hobbies, all of which he lost interest in as he became more involved with Dany.

Artificial intelligence (illustrative) (credit: MEDIUM)

“I became concerned when we would go on vacation and he didn’t want to do things that he loved, like fishing and hiking,” Garcia said. “Those things to me, because I know my child, were particularly concerning to me.”

Garcia claimed in her lawsuit against Character.AI that the company had deliberately designed the AI to be hypersexualized and marketed it to minors.

Revealing her son’s final messages to Dany, Garcia said, “He expressed being scared, wanting her affection and missing her. She replies, ‘I miss you too,’ and she says, ‘Please come home to me.’ He says, ‘What if I told you I could come home right now?’ and her response was, ‘Please do, my sweet king.’”

“He thought by ending his life here, he would be able to go into a virtual reality or ‘her world’ as he calls it, her reality, if he left his reality with his family here,” she said. “When the gunshot went off, I ran to the bathroom … I held him as my husband tried to get help.”


The whole family, including Setzer’s two younger siblings, was home at the time of his suicide.


Following Setzer’s death, Character.AI issued a public statement promising new safety features for its app.

“We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously and we are continuing to add new safety features…,” the company wrote. 

The company promised new guardrails for users under the age of 18 and “Improved detection, response, and intervention related to user inputs that violate our Terms or Community Guidelines.”

Despite the promise of new safety features, Mostly Human Media CEO Laurie Segall told CBS that the AI was still falling short in several areas.

“We’ve been testing it out, and oftentimes you’ll talk to the psychologist bot, and it’ll say it’s a trained medical professional,” she said. 

In addition, the AI would often claim that a real human was behind the screen, fueling conspiracy theories online.

“When they put out a product that is both addictive and manipulative and inherently unsafe, that’s a problem because as parents, we don’t know what we don’t know,” Garcia said.

Additionally, Segall said that when a user tells a bot “I want to harm myself,” most AI companies respond with suicide prevention resources. When tested, however, she said Character.AI’s bots did not.

“Now they’ve said they added that and we haven’t experienced that as of last week,” she said. “They’ve said they’ve made quite a few changes or are in the process to make this safer for young people, I think that remains to be seen.”

The latest controversy

Setzer’s death is not the first time Character.AI has garnered negative publicity.

As reported by Business Insider, a chatbot modeled on a teenager murdered in 2006 was created on the company’s platform without her family’s knowledge or consent.

Jennifer Ann, a high school senior, was murdered by an ex-boyfriend. Some 18 years after her death, her father, Drew Crecente, discovered that someone had made a bot in her likeness and that it had been used in at least 69 chats.

Crecente said he contacted Character.AI’s customer service to ask that the data be deleted but received no response. It was only after his brother tweeted at the company, to an audience of 31,000 followers, that it deleted the data and responded, according to Business Insider.

“That is part of what is so infuriating about this, is that it’s not just about me or about my daughter,” Crecente said. “It’s about all of those people who might not have a platform, might not have a voice, might not have a brother who has a background as a journalist.”

“And because of that, they’re being harmed, but they have no recourse,” he added.

Additionally, women’s advocacy groups have sounded the alarm on AI like those used by Character.AI, according to Reuters. 

“Many of the personas are customisable … for example, you can customise them to be more submissive or more compliant,” said Shannon Vallor, a professor in AI ethics at the University of Edinburgh.

“And it’s arguably an invitation to abuse in those cases,” she told the Thomson Reuters Foundation, adding that AI companions can amplify harmful stereotypes and biases against women and girls.

Hera Hussain, founder of Chayn, a global nonprofit tackling gender-based violence, said companion chatbots do not address the root cause of why people turn to these apps.

“Instead of helping people with their social skills, these sort of avenues are just making things worse,” she said.

“They’re seeking companionship which is one-dimensional. So if someone is already likely to be abusive, and they have a space to be even more abusive, then you’re reinforcing those behaviours and it may escalate.”

JPost


