California to enact AI safety rules on January 1: Here's what to know
December 22, 2025 • 2m 50s
Andrew Ross Sorkin (Anchor)00:00.000
I want to get over to find out what to expect in the new year, when states actually start to push their own AI regulations forward. The administration, of course, is trying to stop this from happening, but nonetheless, Emily Wilkins is in Washington, D.C. this morning to join us with more on that. Good morning.
Emily Wilkins (Washington Correspondent)00:16.800
Good morning, Andrew. Yeah, look, major AI companies are preparing for a California law on AI safety to go into effect on January 1st. This seems to be happening regardless, of course, of what's going on in D.C. Let's break it down. Companies with revenues of over $500 million would need to create a framework showing how they would manage the severe risks their most powerful models could pose. And severe risks are defined as causing over $1 billion in damage, or 50-plus deaths or serious injuries. Now, companies would also need to report events that pose major risks, like a system being hacked, and they would face fines of up to $1 million if they failed to report. Now, the law's implementation, as you mentioned, comes as President Trump signed an executive order meant to discourage state AI laws, arguing that a patchwork of laws is going to limit AI companies' ability to innovate and create new models. At least one AI company, though, Anthropic, has already released its compliance framework, which, according to the company, describes how "we assess and mitigate cyber offense, chemical, biological, radiological, and nuclear threats, as well as the risk of AI sabotage, loss of control for our frontier models." Again, those are the most powerful models they have. And California's bill might be one of the first to go into effect on these frontier models, but it is not the only one. On Friday, New York Governor Kathy Hochul signed an even stricter bill into law. Under that law, companies would have only three days to report critical safety incidents, and if they failed to do so, fines would begin at $1 million. Andrew.
Andrew Ross Sorkin (Anchor)01:52.000
How do you see this playing out if you walk through 2026? What do you see as sort of the next permutation this all takes?
Emily Wilkins (Washington Correspondent)02:02.300
I mean, I think the big question right now is that Trump's executive order basically said to the Justice Department that they could go after some of these state AI laws on the idea that they were interfering with interstate commerce. So I think it'll be really interesting to see what the Trump administration does once California's AI law is in effect. Is there a challenge? Is there a lawsuit? What is the outcome of that? And then what does that mean for other states that are considering similar laws? I mean, New York's doesn't go into effect for yet another calendar year, so they've got some time to figure out exactly how to implement it. But of course, if you look across state legislatures, there are a bunch of other states trying to figure out how they handle some of these risks, because, again, if you look at Congress, there's no sign at this point that any sort of big AI law is coming soon.