California just made history as the first state to require AI safety transparency from the biggest labs in the industry. Governor Newsom signed SB 53 into law this week, mandating that AI giants like OpenAI and Anthropic disclose, and stick to, their safety protocols. The decision is already sparking debate about whether other states will follow suit.
Adam Billen, vice president of public policy at Encode AI, joined Equity to break down what this new law actually means and why it managed to pass when its predecessor, SB 1047, incurred so much ire from tech companies that Newsom ended up vetoing it last year.
Listen to the full episode to hear about:
- What “transparency without liability” means in practice, and whether it’s enough to ensure safe AI is released to the masses.
- Whistleblower protections and critical safety incident reporting requirements.
- What’s still on Newsom’s desk, including regulation on AI companion chatbots.
- Why SB 53 is an example of light-touch state policy that doesn’t hinder AI progress.
- The battle for federalism amid moves to take away states’ rights to enact AI regulation.
Equity will be back Friday with our weekly news roundup, so don’t miss it.
Equity is TechCrunch’s flagship podcast, produced by Theresa Loconsolo, and posts every Wednesday and Friday.
Subscribe to us on Apple Podcasts, Overcast, Spotify and all the casts. You also can follow Equity on X and Threads, at @EquityPod.