7 January 2026
California Dreaming: The AI Rules the World May Soon Inherit
In California’s capital, Sacramento, lawmakers have set new rules for AI. Those rules may soon shape how companies across the globe design, deploy, and govern intelligent systems.
Cover photo credit: Yourfrienddevin / Pixabay
For much of the past two decades, global industries operated under the “Brussels Effect”: when the European Union set a rule, global companies often followed. Data privacy is the most well-known example, but the pattern has repeated across everything from product safety (think of the CE marking on your electronics) to environmental rules on deforestation and sustainability, and to digital markets and services.
AI regulation has tried to follow the same orbit. In 2024, Europe attempted to take an early lead with the EU AI Act. Built on a risk-based model, it sorts AI systems into tiers and attaches obligations based on how the technology is used. High-risk applications, such as AI used in recruitment and CV screening, credit decisions, or profiling by law enforcement, face strict requirements around transparency, human oversight, and risk management.
The “Sacramento Effect”
But globally applicable rules on AI regulation are no longer being written in one place alone.
Now, the world’s digital code of conduct is increasingly being influenced by legislators in California. Call it the “Sacramento Effect”: from 1 January 2026, two new laws—SB 243 and SB 53—will begin influencing the guardrails and defaults of AI products used far beyond America’s shores.
California’s new AI laws do not simply replace or mirror the EU’s AI Act; instead, they take a more targeted approach and, in some respects, go further in narrower, more immediate ways: they regulate emotionally persuasive AI interactions at the user level and harden the safety governance for increasingly powerful frontier models.
For businesses and policymakers in Asia, these laws are not a distant regulatory experiment, but a preview of where global AI governance is heading.
Why California Matters
As home to many of the world’s leading AI companies, from frontier-model developers to platform providers, California sits at the industry’s centre of gravity: Google’s Gemini, OpenAI’s ChatGPT, Meta’s Llama, and Anthropic’s Claude all call the state home. Decisions made in its capital therefore travel through supply chains and product standards long before they are replicated in statute elsewhere.
More importantly, the two laws address issues that regulators globally are already grappling with: the psychological risks of human–AI interaction, and the safety of increasingly powerful Artificial General Intelligence (AGI)—a hypothetical class of AI that can understand and apply knowledge across many tasks, meeting or even exceeding human levels of thinking. Last month, Integral AI, a Tokyo-based startup, claimed it had developed the world’s first brain-inspired AGI model that can teach itself without human training.
These laws also sit alongside regulatory developments in Asia. California’s approach reinforces the same underlying premise: that as AI becomes more embedded in everyday life, baseline safeguards can no longer remain voluntary.
SB 243: Regulating Emotional and Psychological Harm
SB 243 grapples with the eerily persuasive nature of “companion chatbots”. In recent years, Silicon Valley has successfully monetised loneliness by creating AI agents designed not just to serve but to bond, commiserate, and advise. This has at times led to tragic results, as users blur the line between sound advice and a good (read: persuasive) algorithm. In 2023, a man in Belgium died by suicide after six weeks of conversations with an AI chatbot about the climate crisis. Although he knew it was a chatbot, reports suggest he came to treat it as a sentient confidante.
The law introduces two core obligations. First, it demands that these digital confidants must clearly disclose themselves as artificial, ensuring that the nature of the relationship is transparent from the get-go. Second, and more critically, it imposes responsibilities on companies distributing these products to implement safeguards when users exhibit signs of emotional distress, including directing users to appropriate crisis resources such as a suicide hotline. By mandating these shifts, the law seeks to make AI interactions less manipulative and psychologically safer.
These laws have implications for global businesses, including Asian app developers. Any company offering chatbot-based services to California users—regardless of where the developer is based—must comply. This may require redesigning the app or retraining how the AI model behaves when emotional engagement is central to the product offering.
The financial stakes are significant, as the law allows individuals to sue for US$1,000 per violation.
SB 53: Safety Checks for Frontier Models
If SB 243 polices the storefront, SB 53 regulates the factory floor. This law targets the developers of high-risk, compute-intensive “frontier models”—the powerful large-scale systems that underpin the AI agents increasingly used everywhere. The legislation treats these models less like simple software and more like the potentially dangerous inventions that they are: tools that are immensely powerful and useful, but that could also carry risks serious enough to warrant swift, strict containment in some circumstances.
Take drug discovery, for example. AI can accelerate it by predicting which chemical compounds are likely to be effective and what effects they might have. But the same predictive capability could also be misused, for instance to help design harmful compounds and viruses. Risks like these are why many have cautioned about the potential misuse and wider spread of increasingly powerful AI systems, including the long-term prospect of AGI.
SB 53 mandates that frontier developers implement AI safety frameworks, disclose them, and submit to third-party audits. Developers are also required to report critical safety incidents and protect whistleblowers who flag these risks. The effect is to shift frontier AI development closer to other high-stakes industries, such as pharmaceuticals and aerospace, where safety assurance, rigorous documentation, and public accountability are expected before any product reaches the market.
With growing “AI sovereignty”—the push by governments to retain control over how AI is built, deployed, and governed within their borders—more regulators will want clearer controls for powerful models. In this context, the measures codified in SB 53—safety frameworks, independent audits, and incident reporting—are likely to be adopted in some fashion by emerging research facilities in cities such as Tokyo, Seoul, and Mumbai.
What This Means for Global Businesses
Why should a boardroom in Asia lose sleep over this? Because in a digital economy, geographic distance is not a defence.
The main mechanism at work here is regulatory export. Just as carmakers do not build one vehicle with airbags for Europe and another without for Africa, AI giants cannot bifurcate their core models for different markets. They will tend to standardise around the strictest rule set that matters commercially. In many cases, that will be California’s, given its market size and the litigation risk attached to non-compliance.
If it were a country, California would also be the world’s fourth-largest economy by GDP. For any Asian firm with global ambitions, compliance can quickly become the price of admission. Counterparties, wary of litigation, may start asking a blunt question: “Is your product ‘California-compliant’?” Companies may face rising pressure—from regulators, investors, or global clients—to demonstrate that they are using AI systems that meet recognised safety standards.
Reputation matters too. If a chatbot built by a company in Japan goes rogue and manipulates a vulnerable user in Thailand, the public is unlikely to be reassured by technical compliance with local laws, especially if they believe a gold standard exists elsewhere.
As the technology becomes easier to develop and deploy, larger countries will push to build foundational models and GenAI capabilities onshore.
Regulators in these countries will soon need to grapple with the same balancing act California is attempting: how do you strike the right balance with a technology that has the power both to cure cancer and to kill us all?
The Maturity Mandate
California has fired the starting gun. The era of “move fast and break things” in AI is ending. As the industry matures, a new mantra is taking hold: move fast, but do not break trust, safety, or the business.
And, as the era of unbridled algorithmic experimentation draws to a close, the strategic response must be proactive, not reactive. First, executives must audit their exposure. Contracts with vendors such as cloud providers and LLM developers should be revisited. Companies will need clear contractual warranties ensuring that those vendors handle the required safety testing, so they are not left holding the bag. Front-end disclosures and user safeguards should be treated as product requirements, not afterthoughts.
For organisations building foundational models, or supporting governments in developing them, SB 53 is an early signal of the governance expectations likely to follow. Better to be one step ahead than to be left playing catch-up with an industry that is moving at the speed of thought.