Today, AI is more advanced than ever. With great power, though, comes great responsibility. Policymakers, alongside those in civil society and industry, are seeking governance that minimizes potential harm and shapes a safe, human-centered, AI-empowered society. I applaud some of these efforts yet caution against others; California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, better known as SB-1047, falls into the latter category. This well-meaning piece of legislation will have significant unintended consequences, not just for California, but for the entire country.
AI policy must encourage innovation, set appropriate restrictions, and mitigate the implications of those restrictions. Policy that doesn’t will at best fall short of its goals, and at worst lead to dire, if unintended, consequences.
If passed into law, SB-1047 will harm our budding AI ecosystem, especially the parts of it that are already at a disadvantage to today’s tech giants: the public sector, academia, and “little tech.” SB-1047 will unnecessarily penalize developers, stifle our open-source community, and hamstring academic AI research, all while failing to address the very real issues it was authored to solve.
First, SB-1047 will unduly punish developers and stifle innovation. In the event of misuse of an AI model, SB-1047 holds liable not only the party responsible but also the original developer of that model. It is impossible for every AI developer—particularly fledgling coders and entrepreneurs—to predict every possible use of their model. SB-1047 will force developers to pull back and act defensively—precisely what we’re trying to avoid.
Second, SB-1047 will shackle open-source development. SB-1047 mandates that all models over a certain threshold include a “kill switch,” a mechanism by which the program can be shut down at any time. If developers are concerned that the programs they download and build on will be deleted, they will be much more hesitant to write code and collaborate. This kill switch will devastate the open-source community—the source of countless innovations, not just in AI, but across sectors, ranging from GPS to MRIs to the internet itself.
Third, SB-1047 will cripple public sector and academic AI research. Open-source development is important in the private sector, but vital to academia, which cannot advance without collaboration and access to model data. Take computer science students, who study open-weight AI models. How will we train the next generation of AI leaders if our institutions don’t have access to the proper models and data? A kill switch would even further dampen the efforts of these students and researchers, already at such a data and computation disadvantage compared to Big Tech. SB-1047 will sound the death knell for academic AI when we should be doubling down on public-sector AI investment.
Most alarmingly, this bill does not address the potential harms of AI advancement, including bias and deepfakes. Instead, SB-1047 sets an arbitrary threshold, regulating models that use a certain amount of computing power or cost $100 million to train. Far from providing a safeguard, this measure will merely restrict innovation across sectors, including academia. Today, academic AI models fall beneath this threshold, but if we were to rebalance investment in private and public sector AI, the larger models academia could then train would fall under SB-1047’s regulation. Our AI ecosystem will be worse for it.
We must take the opposite approach. In various conversations with President Biden over the past year, I have expressed the need for a “moonshot mentality” to spur our country’s AI education, research, and development. SB-1047, however, is overly and arbitrarily restrictive, and will not only chill California’s AI ecosystem but will also have troubling downstream implications for AI across the nation.
I am not anti-AI governance. Legislation is critical to the safe and effective advancement of AI. But AI policy must empower open-source development, put forward uniform and well-reasoned rules, and build consumer confidence. SB-1047 falls short of those standards. I extend an offer of collaboration to Senator Scott Wiener, the bill’s author: Let us work together to craft AI legislation that will truly build the technology-enabled, human-centered society of tomorrow. Indeed, the future of AI depends on it. The Golden State—as a pioneering entity, and home to our country’s most robust AI ecosystem—is the beating heart of the AI movement; as California goes, so goes the rest of the country.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.