Does AI need its own regulatory airbags? Biden says yes, and so should California

Saturday, May 20, 2023

By Bill Dodd -- Sacramento Bee

In the history of mankind, no event has done more to further our sense of personal independence than the advent of the car.

Henry Ford’s Model T rattled onto the scene in 1908, providing the first inexpensive alternative to the horse and buggy and forever altering our daily lives. With the honking hordes came a new phenomenon: fatal car crashes. The government responded by setting safety standards, and all but 12 states had adopted speed limits by the time the last Tin Lizzie rolled off the assembly line in 1927. Mandatory seat belts and padded dashboards followed.

A century later, we are at the dawn of the next big thing. Like the car, artificial intelligence promises to reshape human experience, vastly improving how we work, play and even operate our vehicles. But, just as the car did before it, this new innovation presents new risks. Left unchecked, artificial intelligence can threaten privacy, exposing personal information to those who would do us harm. Its algorithms are created by people, meaning these products contain built-in biases. Whether intentionally or inadvertently, they can discriminate, denying loans or marketing products to people along racial or gender lines, often at higher cost. AI-automated machines can also displace workers, destabilizing communities and triggering a seismic shift in the economy.

It all raises the question: Does AI need its own regulatory airbags?

President Joe Biden says yes. Last year, Biden released his blueprint for an AI Bill of Rights, laying out voluntary guidelines tech companies can follow to prevent misuse or abuse. His five core principles target things such as discrimination and the malicious deployment of AI. The blueprint insists that consumers have a right to know when they are interacting with a machine and have a chance to opt out of having their information collected or sold.

In California, the headquarters of much of this innovation, we echo the president’s concerns and propose our own state-level response. Among the proposals — made with considerable industry input — is my Senate Bill 313, which creates the Office of Artificial Intelligence to oversee this burgeoning field. Provisions of my “AI-Ware Act” include a requirement that any state agency deploying AI in programs or services notify the public in a conspicuous way. This will matter most when people communicate online with chatbots at state departments such as the Department of Motor Vehicles. By keeping people aware when AI is being used, we can create a future where AI systems are designed, deployed and governed responsibly, prioritizing the well-being of all people. SB 313 is already gaining traction: in its first Senate committee test, the bill was approved with overwhelming, bipartisan support.

My bill is one of several AI-related proposals introduced this year in the Legislature. Others, like Assembly Bill 302 by Assemblymember Chris Ward, D-San Diego, require the Department of Technology to conduct an inventory of all automated decision systems in use by the state. SB 721 by Sen. Josh Becker, D-San Mateo, requires a report to lawmakers on AI systems, while AB 331 by Assemblymember Rebecca Bauer-Kahan, D-Orinda, requires companies to produce impact statements disclosing any negative effects of their AI systems.

AI can do a lot, and it certainly offers significant benefits; my office used it this year to write the first machine-generated legislative resolution. But AI can’t be expected to govern itself. California leads the nation in innovation. It makes sense that we’re also pioneers in safety.

After all, the Golden State was among the first to set a maximum speed limit for cars, 35 mph, back in 1913. It’s time to tap the brakes again — this time on AI.

Senator Bill Dodd represents the 3rd Senate District, which includes all or portions of Napa, Solano, Yolo, Sonoma, Contra Costa and Sacramento counties.