The States Leading the Way on Regulating AI
Andreessen and Altman both opposed California’s efforts to regulate AI. Altman’s company, OpenAI, even subpoenaed policy wonk Nathan Calvin, requesting “all documents concerning SB 53 or its potential impact on OpenAI.” Calvin is a lawyer at the AI think tank Encode, where he worked on S.B. 53. He told The New Republic that the bill represents “the first time that we’ve seen any jurisdiction in the United States say very clearly, ‘We think that catastrophic risk from the most advanced AI models is worth taking seriously, and we should take affirmative steps to have companies guard against that and have government prepare for that.’”
Now that the law is on the books, OpenAI has asked Governor Newsom to deem it compliant with the state's requirements because it signed an AI code of conduct in the European Union. Adherence to the EU code is voluntary.
Wiener spent years working on the law that became S.B. 53, first passing a bill known as S.B. 1047, which would have placed stricter guardrails on AI companies, requiring third-party audits and a kill switch on AI models. OpenAI lobbied against S.B. 1047. Google lobbied against it. Meta lobbied against it. Andreessen Horowitz lobbied against it. Eight members of the California congressional delegation—Lofgren, Eshoo, Khanna, Cárdenas, Correa, Barragán, Bera, and Peters—sent a letter to Gavin Newsom asking him to veto 1047, complaining that the bill was “skewed toward addressing extreme misuse scenarios and hypothetical existential risks while largely ignoring demonstrable AI risks like misinformation, discrimination, nonconsensual deepfakes, environmental impacts and workplace displacement.” Newsom ultimately vetoed the bill.