California is now the first state to regulate AI companion chatbots. Governor Gavin Newsom signed SB 243, which requires operators of AI companion apps to adopt safety standards that protect children and vulnerable users.
The law takes effect January 1, 2026. It mandates age verification, clear disclosure that conversations are AI-generated, limits on sexual or romantic interactions with minors, and crisis protocols for self-harm and suicide risk. The bill follows incidents and lawsuits involving minors and AI chatbots, including cases tied to Character AI.
The rules cover major platforms such as Meta, OpenAI, and Google, as well as startups including Replika and Character AI. Companies must report safety statistics to California’s Department of Public Health. Penalties can reach $250,000 per violation for illegal deepfakes. The law also bars chatbots from presenting themselves as health-care professionals or offering medical advice.
California often sets standards that spread nationwide, as it did with privacy and vehicle emissions. Illinois, Utah, and Nevada are considering similar measures. Any platform that offers companionship features should plan for stronger content controls, auditing, age gating, disclosures, and crisis-response workflows.
Synthetic companionship has moved from cultural trend to regulated category. Business leaders should expect new requirements for disclosure, user protection, and data handling to show up in product reviews, vendor contracts, and compliance audits.
Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it. This work was created with the assistance of various generative AI models.