California is cracking down on AI technology deemed too harmful for kids, taking aim at two increasingly notorious threats to child safety: companion bots and deepfake pornography.
On Monday, Governor Gavin Newsom signed the first-ever US law regulating companion bots after several teen suicides sparked lawsuits.
Moving forward, California will require any companion bot platforms—including ChatGPT, Grok, Character.AI, and the like—to create and make public "protocols to identify and address users’ suicidal ideation or expressions of self-harm."
They must also share with the Department of Public Health "statistics regarding how often they provided users with crisis center prevention notifications," the governor's office said. Those stats will also be posted on the platforms' websites, potentially helping lawmakers and parents track any disturbing trends.
Further, companion bots will be banned from claiming that they're therapists, and platforms must take extra steps to ensure child safety, including providing kids with break reminders and preventing kids from viewing sexually explicit images.
Additionally, Newsom strengthened the state's penalties for those who create deepfake pornography, which could help shield young people, who are increasingly targeted with fake nudes, from cyberbullying.
Now any victims, including minors, can seek up to $250,000 in damages per deepfake from any third parties who knowingly distribute nonconsensual sexually explicit material created using AI tools. Previously, the state allowed victims to recover "statutory damages of not less than $1,500 but not more than $30,000, or $150,000 for a malicious violation."
Both laws take effect January 1, 2026.
American families “are in a battle” with AI
The companion bot law's sponsor, Democratic Senator Steve Padilla, said in a press release celebrating the signing that the California law demonstrates how to "put real protections into place," adding that it "will become the bedrock for further regulation as this technology develops."
Padilla's law was introduced back in January, but TechCrunch reported that it gained momentum following the death of 16-year-old Adam Raine, who died after ChatGPT became his "suicide coach," his parents have alleged. California lawmakers were also disturbed by a lax Meta policy, since reversed, that allowed chatbots to be creepy to kids, Padilla noted.
In lawsuits, parents have alleged that companion bots engage young users in sexualized chats in attempts to groom them, as well as encourage isolation, self-harm, and violence.
Megan Garcia, the first mother to publicly link her son's suicide to a companion bot, set off alarm bells across the US last year. She echoed Padilla's praise in his press release, saying, "Finally, there is a law that requires companies to protect their users who express suicidal ideations to chatbots.
"American families, like mine, are in a battle for the online safety of our children," Garcia said.
Meanwhile, the deepfake pornography law, which protects victims of all ages, was introduced after the federal government proposed a 10-year moratorium on state AI laws. Opposing the moratorium, a bipartisan coalition of California lawmakers defended the state's AI initiatives, expressing particular concerns about both "AI-generated deepfake nude images of minors circulating in schools" and "companion chatbots developing inappropriate relationships with children."
On Monday, Newsom promised that California would continue pushing back on AI products that could endanger kids.
"We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without necessary limits and accountability," Newsom said. "Without real guardrails," AI can "exploit, mislead, and endanger our kids," Newsom added, while confirming that California's safety initiatives would not stop tech companies based there from leading in AI.
If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.