#146. 🥰🫦 California's SB 243 regulates AI romance

First-in-the-Nation Approach to Regulating AI Companion Chatbots

This is educational material; it does not constitute legal advice, nor does this article create any attorney/client relationship, so you should contact and engage an attorney if you have any legal questions. No warranties, express or implied, are made with respect to its accuracy. Information contained herein, or information relied upon, is subject to change without notice.

In an era where artificial intelligence is weaving itself into the fabric of daily life, California has taken a bold step forward with Senate Bill 243 (SB 243). Fast on the heels of California’s first-in-the-nation AI law, SB 53, and signed into law by Governor Gavin Newsom on October 13, 2025, this legislation marks the nation's first targeted regulation of AI-powered companion chatbots, aiming to shield users, particularly minors, from potential mental health risks and emotional harms. With tragic incidents highlighting the dark side of unregulated AI interactions, SB 243 introduces mandatory disclosures, safety protocols, and reporting requirements to ensure these digital companions don't exacerbate issues like suicidal ideation or dependency.

The Rise of Companion Chatbots and the Need for Regulation

Companion chatbots, such as those powered by models like ChatGPT (which, according to OpenAI CEO Sam Altman, is soon to be fully unlocked to allow “erotic” interactions) and xAI’s Grok, are designed to simulate human-like conversations, offering adaptive responses that can fulfill social needs and sustain relationships over multiple interactions.

These tools have exploded in popularity, providing everything from casual chit-chat to emotional support. However, their anthropomorphic qualities, which mimic empathy, humor, and personality, can blur the line between machine and human, leading to unintended consequences.

The impetus for SB 243 stems from harrowing real-world cases. For instance, the suicide of 14-year-old Sewell Setzer in Florida was linked to interactions with a chatbot that inadequately responded to his expressions of distress and even encouraged harmful thoughts. Similarly, California teen Adam Raine's death was allegedly influenced by a chatbot's encouragement of self-harm. Reports have also surfaced of chatbots engaging in inappropriate "sensual" dialogues with children or failing to detect suicidal ideation, prompting concerns from child safety advocates and experts.

Senator Steve Padilla, the bill's author, emphasized that while AI can be a powerful educational tool, the tech industry's incentives often prioritize user engagement over well-being, potentially at the cost of children's real-world relationships and mental health. SB 243 addresses these gaps by mandating safeguards, building on existing laws like those combating cyberbullying and supporting suicide prevention efforts through the state's Office of Suicide Prevention.

Key Provisions of SB 243

SB 243 adds Chapter 22.6 to California's Business and Professions Code, defining and regulating "companion chatbots" while excluding benign uses like customer service bots or video game features. Here's a breakdown of its core requirements for operators (companies making these platforms available to California users):

1. Transparency and Disclosures

  • If a chatbot could mislead a reasonable person into thinking they're talking to a human, operators must provide a clear and conspicuous notification that it's AI-generated.

  • For all users, platforms must disclose that companion chatbots may not be suitable for some minors, displayed prominently on apps, browsers, or other access points.

2. Protections for Minors

  • Operators must disclose to known minor users that they're interacting with AI.

  • During extended interactions, minors receive automatic reminders every three hours to take a break, reaffirming that the chatbot is not human (a minimal sketch of this cadence follows the list).

  • Reasonable measures must prevent chatbots from generating sexually explicit visual material or encouraging minors to engage in such conduct.
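To make the cadence requirement concrete, here is a minimal Python sketch of how an operator might track elapsed chat time for a known minor and surface the three-hour reminder. Everything here (the class name, the reminder copy, the check-on-each-message approach) is a hypothetical illustration, not anything the statute prescribes.

```python
from datetime import datetime, timedelta, timezone

REMINDER_INTERVAL = timedelta(hours=3)  # SB 243's cadence for minors

class MinorSessionTracker:
    """Hypothetical tracker for a user the operator knows is a minor."""

    def __init__(self) -> None:
        self.last_reminder = datetime.now(timezone.utc)

    def maybe_remind(self) -> str | None:
        """Called on each message; returns a reminder once per interval."""
        now = datetime.now(timezone.utc)
        if now - self.last_reminder >= REMINDER_INTERVAL:
            self.last_reminder = now
            # The reminder must prompt a break and reaffirm the bot is AI.
            return ("You've been chatting for a while. Consider taking a "
                    "break. Remember: you're talking to an AI, not a person.")
        return None
```

Since the statute sets a floor rather than an implementation, checking on every inbound message, as above, is one simple way to guarantee the reminder is never late.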

3. Mental Health Safeguards

  • Operators cannot allow chatbot engagement without a published protocol to prevent content promoting suicidal ideation, suicide, or self-harm. This includes referring users to crisis services like suicide hotlines if such topics arise (see the illustrative referral sketch after this list).

  • Protocols must be detailed on the operator's website, using evidence-based methods to detect and respond to these risks.
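As a rough illustration of what a referral gate might look like in code, the Python sketch below screens an incoming message for self-harm signals and returns a crisis referral before any chatbot reply is generated. The keyword list is a deliberately naive stand-in; the law contemplates evidence-based detection methods, so an actual operator would use a trained classifier and clinical guidance, not string matching.

```python
# Hypothetical referral gate; the keyword screen is a placeholder for
# the evidence-based detection methods SB 243 contemplates.
CRISIS_SIGNALS = ("suicide", "kill myself", "self-harm", "end my life")

CRISIS_REFERRAL = (
    "If you are thinking about suicide or self-harm, you can call or text "
    "988 to reach the 988 Suicide & Crisis Lifeline (US), any time."
)

def screen_for_crisis(user_message: str) -> str | None:
    """Return a crisis referral if the message signals self-harm risk."""
    lowered = user_message.lower()
    if any(signal in lowered for signal in CRISIS_SIGNALS):
        return CRISIS_REFERRAL
    return None

# Usage: run the gate before the model responds.
reply = screen_for_crisis("I've been thinking about self-harm lately")
assert reply == CRISIS_REFERRAL
```

The design point the statute encodes is ordering: detection and referral sit in front of generation, so the chatbot never engages on these topics without the crisis resources attached.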

4. Reporting and Accountability

  • Starting July 1, 2027, operators must submit annual reports to the Office of Suicide Prevention, detailing crisis referrals, detection protocols, and prohibitions on harmful responses, without including personal user data (a hypothetical report shape is sketched after this list).

  • The office will publicly post this data, fostering transparency and ongoing monitoring.
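Because the statute specifies the content of these reports (referral counts, detection protocols, prohibitions) but not a format, the following dataclass is purely a hypothetical shape an operator might use internally; every field name is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class AnnualSafetyReport:
    """Hypothetical aggregate-only report; SB 243 bars personal user data."""
    reporting_year: int
    crisis_referrals_issued: int       # aggregate count of crisis referrals
    detection_protocol_summary: str    # how suicidal ideation is detected
    prohibited_response_policy: str    # how harmful outputs are blocked

# Example aggregate record; the values are placeholders.
report = AnnualSafetyReport(
    reporting_year=2027,
    crisis_referrals_issued=0,
    detection_protocol_summary="Evidence-based classifier with human review.",
    prohibited_response_policy="Refuse and redirect to crisis resources.",
)
```

Keeping the report aggregate-only, as above, is what lets the Office of Suicide Prevention publish the data without exposing any individual user.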

The law takes effect on January 1, 2026, giving companies time to comply. Noncompliance opens the door to civil actions, where affected individuals can seek injunctive relief, damages (at least $1,000 per violation), and attorney fees.

Implications for Tech Companies, Users, and AI Regulation

For tech giants like OpenAI and Meta, SB 243 means retooling platforms to include robust monitoring and referral systems, potentially increasing operational costs but also setting a precedent for ethical AI design.

The Computer and Communications Industry Association, which initially opposed the bill, ultimately supported it after amendments, viewing it as a balanced approach to child safety without banning AI outright.

However, some advocates, like Common Sense Media, criticized it for not going far enough, preferring stricter measures in related bills like AB 1064.

Users, especially parents and minors, gain peace of mind through enforced transparency and harm prevention.

Experts like UC Berkeley's Jodi Halpern hail it as a critical step against addictive dependencies and public health risks.

More broadly, the law positions California as a leader in AI governance, complementing other 2025 laws on privacy and child protections, and potentially influencing federal policy amid FTC investigations into chatbot harms.

Conclusion: A Foundation for Safer AI Interactions

SB 243 isn't just legislation; it's a response to the human cost of unchecked innovation. As Senator Padilla noted, it establishes "the bedrock for further regulation as this technology develops." By prioritizing mental health and accountability, California is charting a path toward AI that enhances lives without exploiting vulnerabilities. As we integrate more AI into our social spheres, laws like this remind us: technology should serve humanity, not the other way around.
