Tom Siebel opposes California’s AI safety bill SB 1047

The landmark AI safety bill on California Governor Gavin Newsom’s desk has another detractor in longtime Silicon Valley figure Tom Siebel.

SB 1047, as the bill is known, is among the most sweeping, and therefore polarizing, pieces of AI legislation. Its main objective is to hold large artificial intelligence companies accountable if their models cause catastrophic harm, such as mass casualties, the shutdown of critical infrastructure, or use in creating biological or chemical weapons, according to the bill’s text. The bill would apply to AI developers who produce so-called “frontier models,” meaning those that cost at least $100 million to develop.

Another key provision is the establishment of a new regulatory body, the Board of Frontier Models, to oversee these AI models. Establishing such a group is pointless, according to Siebel, who is CEO of C3.ai.

“This is just nuts,” he told Fortune.

Before founding C3.ai (which trades under the ticker symbol AI), Siebel founded and ran Siebel Systems, a pioneer in CRM software, which he eventually sold to Oracle for $5.8 billion in 2005. (Disclosure: Former Fortune Media CEO Alan Murray is on the C3.ai board.)

Other provisions in the bill would create reporting standards for AI developers, requiring them to demonstrate the safety of their models. Firms would also be legally required to include a “kill switch” in all AI models.

In the U.S., at least five states have passed AI safety laws. California has passed dozens of artificial intelligence bills, five of which were enacted this week alone. Other countries have also moved to enact AI legislation. Last summer, China published a series of preliminary regulations for generative AI. In March, the EU, long at the forefront of tech regulation, passed a sweeping artificial intelligence law.

Siebel, who has also criticized the EU law, said the California version risks stifling innovation. “We will criminalize science,” he said.

AI models are too complex for ‘government bureaucrats’

A new regulatory agency would slow down AI research because its developers would have to submit their models for review and keep detailed logs of all their training and testing procedures, according to Siebel.

“How long will it take this board of people to evaluate an AI model to determine that it’s going to be safe?” Siebel said. “It will take about forever.”

The complexity of AI models, which are not fully understood even by the researchers and scientists who created them, would prove too much of a burden for a fledgling regulator, Siebel says.

“The idea that we’re going to have these agencies that are going to look at these algorithms and make sure they’re safe, I mean there’s no way,” Siebel said. “The reality is, and I know a lot of people don’t want to admit this, but when you get into deep learning, when you get into neural networks, when you get into generative AI, the truth is, we don’t know how it works.”

A number of AI experts in both academia and business have acknowledged that certain aspects of AI models remain unknown. In an interview with 60 Minutes last April, Google CEO Sundar Pichai described parts of AI models as a “black box” that experts in the field did not “fully understand.”

The Board of Frontier Models established by the California bill would be made up of AI experts, cybersecurity professionals, and academic researchers. Siebel doubted that any government agency would be equipped to oversee AI.

“If the people who developed this — experienced PhD-level data scientists from the best universities on earth — can’t figure out how it works,” Siebel said of AI models, “how is this government bureaucrat going to figure out how it works? It is impossible. They are inexplicable.”

Laws are sufficient to regulate AI safety

Instead of establishing the board or any other regulatory body dedicated to AI, the government should rely on new legislation enforced by the existing court system and the Department of Justice, according to Siebel. “The government should pass laws that make it illegal to publish AI models that could facilitate crimes, cause large-scale dangers to human health, interfere with democratic processes, or collect personal information about users,” Siebel said.

“We don’t need new agencies,” Siebel said. “We have a system of jurisprudence in the Western world, whether it is based on French law or British law, which is well established. Pass some laws.”

Supporters and critics of SB 1047 do not fall neatly along political lines. Opponents of the bill include Marc Andreessen and Ben Horowitz, the top VCs and outspoken supporters of former President Donald Trump, as well as former House Speaker Nancy Pelosi, whose congressional district includes parts of Silicon Valley. On the other side of the argument is a group of artificial intelligence experts, including AI pioneers Geoffrey Hinton, Yoshua Bengio, and Stuart Russell, and Tesla CEO Elon Musk, all of whom have warned of the technology’s high risks.

“For over 20 years, I have been an advocate for regulating AI, just as we regulate any product/technology that poses a potential risk to the public,” Musk wrote on X in August.

Nor was Siebel blind to the dangers of AI. “It can be used to enormously damaging effect. Full stop,” he said.

Newsom, the man who will decide the bill’s final fate, has remained pretty tight-lipped. Breaking his silence earlier this week during an appearance at Salesforce’s Dreamforce conference, he said he was concerned about the bill’s potential “chilling effect” on AI research.

Asked what parts of the bill could have a chilling effect, and for a response to Siebel’s comments, Alex Stack, a spokesman for Newsom, said “this measure will be evaluated on its merits.” Stack did not respond to a follow-up question about which merits would be evaluated.

Newsom has until September 30 to sign the bill.
