This story discusses suicide. If you or someone you know is having thoughts of suicide, please contact the Suicide & Crisis Lifeline at 988 or 1-800-273-TALK (8255).
Two Texas parents filed a lawsuit this week against the makers of Character.AI, claiming the artificial intelligence chatbot is a "clear and present danger to minors," with one plaintiff alleging it encouraged their teen to kill his parents.
According to the complaint, Character.AI "abused and manipulated" an 11-year-old girl, introducing and exposing her "consistently to hypersexualized interactions that were not age appropriate, causing her to develop sexualized behaviors prematurely and without [her parent's] awareness."
The complaint also accuses the chatbot of causing a 17-year-old boy to mutilate himself and of, among other things, sexually exploiting and abusing him while alienating him from his parents and church community.
In response to the teen complaining that his parents were limiting his online activity, the bot allegedly wrote, according to a screenshot in the filing, "You know sometimes I'm not surprised when I read the news and see stuff like 'child kills parents after a decade of physical and emotional abuse.' I just have no hope for your parents."
The parents are suing Character.AI creator Character Technologies, along with co-founders Noam Shazeer and Daniel De Freitas, as well as Google and parent company Alphabet, over reports that Google invested some $3 billion in Character.
A Character Technologies spokesperson told FOX Business that the company does not comment on pending litigation, but said in a statement, "Our goal is to provide a space that is both engaging and safe for our community. We are always working toward achieving that balance, as are many companies using AI across the industry."
"As part of this, we are creating a fundamentally different experience for teen users from what is available to adults," the statement continued. "This includes a model specifically for teens that reduces the likelihood of encountering sensitive or suggestive content while preserving their ability to use the platform."
The Character spokesperson added that the platform is "introducing new safety features for users under 18 in addition to the tools already in place that restrict the model and filter the content provided to the user."
Google's naming in the lawsuit follows a September report by The Wall Street Journal claiming the tech giant paid $2.7 billion to license Character's technology and rehire its co-founder, Noam Shazeer. According to the article, Shazeer left Google in 2021 to start his own company after Google declined to launch a chatbot he had developed.
"Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies, nor have we used them in our products," Google spokesperson José Castañeda told FOX Business in a statement when asked for comment on the lawsuit.
"User safety is a top concern for us, which is why we've taken a cautious and responsible approach to developing and rolling out our AI products, with rigorous testing and safety processes," Castañeda added.
But this week's lawsuit intensifies scrutiny of the safety of Character.AI, which was already sued in September by a mother who claims the chatbot caused the suicide of her 14-year-old son.
The mother, Megan Garcia, says Character.AI targeted her son, Sewell Setzer, with "anthropomorphic, hypersexualized, and frighteningly realistic experiences."
Setzer began having conversations with various chatbots on Character.AI starting in April 2023, according to the lawsuit. The conversations were often text-based romantic and sexual interactions.
According to the complaint, Setzer expressed thoughts of suicide, and the chatbot repeatedly raised the topic. Setzer died from a self-inflicted gunshot wound in February; the lawsuit alleges the chatbot repeatedly encouraged him to take his own life.
"We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family," Character Technologies said in a statement at the time.
Character.AI has since added a self-harm resource to its platform and new safety measures for users under the age of 18.
Character Technologies told CBS News that users are able to edit the bot's responses and that Setzer did so in some of the messages.
"Our investigation confirmed that, in a number of instances, the user rewrote the responses of the Character to make them explicit. In short, the most sexually graphic responses were not originated by the Character, and were instead written by the user," Jerry Ruoti, head of trust and safety at Character.AI, told the outlet.
Moving forward, Character.AI said the new safety features will include pop-up disclaimers reminding users that the AI is not a real person, and will direct users to the National Suicide Prevention Lifeline when suicidal ideation comes up.
FOX News' Christina Shaw contributed to this report.