AI ‘godfather’: New OpenAI model needs ‘much stronger safety tests’

OpenAI’s new o1 model is better at scheming, and that makes the AI “godfather” nervous.

Yoshua Bengio, a Turing Award-winning Canadian computer scientist and professor at the University of Montreal, told Business Insider in an emailed statement that o1 has a “far superior ability to reason than its predecessors.”

“In general, the ability to cheat is very dangerous, and we should have much stronger safety tests to assess that risk and its consequences in the case of o1,” Bengio wrote in the statement.

Bengio earned the nickname “the godfather of artificial intelligence” for his award-winning research on machine learning with Geoffrey Hinton and Yann LeCun.

OpenAI launched its new o1 model – which is designed to think more like humans – earlier this month, and has so far kept details of the model’s “learning” process under wraps. Researchers at independent AI firm Apollo Research found that the o1 model is better at lying than previous AI models from OpenAI.

Bengio expressed concern about the rapid development of artificial intelligence and advocated for legislative safety measures like SB 1047 in California. The bill, which has passed the California legislature and awaits Gov. Gavin Newsom’s signature, would impose a number of safety constraints on powerful AI models, such as requiring AI companies in California to allow third-party testing.

Newsom, however, expressed concern about SB 1047, which he said could have a “chilling effect” on the industry.

Bengio told BI that there are “good reasons to believe” that AI models could develop stronger scheming abilities, such as deliberate and discreet deception, and that we need to take action now to “prevent the loss of human control” in the future.

OpenAI said in a statement to Business Insider that the o1 preview is safe under its “Preparedness Framework” – the company’s method of tracking and preventing AI from creating “catastrophic” events – and is rated medium risk on its “cautious scale.”

According to Bengio, humanity needs to be more confident that AI will “behave as intended” before researchers attempt to make further significant leaps in reasoning ability.

“It’s something scientists don’t know how to do today,” Bengio said in his statement. “This is why regulatory oversight is needed right now.”
