OpenAI CEO Sam Altman participates in a discussion entitled "Charting the Path Forward: The Future of Artificial Intelligence" during the Asia-Pacific Economic Cooperation (APEC) CEO Summit, Thursday, Nov. 16, 2023. (AP Photo/Eric Risberg)

Altman’s pledges don’t mean much

The warp speed of generative AI development has critics warning of unintended consequences.



Elon Musk’s lawsuit against Sam Altman and OpenAI, filed last week in California state court, accuses the defendants of abandoning core parts of OpenAI’s stated mission to develop useful and non-harmful artificial general intelligence. Altman has since moved to buttress his responsible AI credentials, including by signing an open letter pledging to develop AI “to improve people’s lives.”

Critics, however, remain unconvinced by Altman’s show of responsibility. Ever since the rapid popularization of generative AI (genAI) over the past year, those critics have been warning that the consequences of unfettered and unregulated AI development could not just corrode human society but threaten its very existence.

A "head-fake"

Ritu Jyoti, group vice president for worldwide AI and automation research at IDC, said the move by Altman to publicly embrace responsible development amounts to little more than a head-fake.

“While there is agreement in the industry that there is collective responsibility to develop and deploy AI responsibly, this letter falls short of specific actions needed,” she said. “So, in my opinion, not much value-add.”

Altman is also a signatory to a letter acknowledging the world-altering risks of AI, but critics continue to argue that the self-regulatory nature of efforts to address these risks is insufficient.

The key is the industry’s failure to solve the alignment problem, which arises when AI tools begin to develop behavior beyond their design specifications. The fear is that the most advanced AI systems could iterate on themselves, creating a serious risk that they develop in ways humans don’t want them to.

Controlling something smarter

“The question is, are we able to control a system if it’s smarter than us?” asked Joep Meindertsma, a Dutch developer and founder of the group PauseAI, which is dedicated to mitigating the risks posed by AI.

Meindertsma pointed to systems like AutoGPT, which can essentially ask itself questions and create its own queries to accomplish complex research tasks, as the kind of technology that could prove highly disruptive, and dangerous.

“At some point, someone is going to ask that computer something that would involve the idea that it’s useful to spread to other machines,” he said. “People have literally asked AutoGPT to try and take over the world.”
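To make the self-prompting mechanism concrete, here is a minimal illustrative sketch, not AutoGPT's actual implementation: the model generates its own next query from the context so far, answers it, and feeds the result back in. The `call_model` function is a hypothetical stand-in for any LLM API client, stubbed here so the sketch runs offline.

```python
# Illustrative sketch of a self-prompting agent loop (not AutoGPT's code).
# `call_model` is a hypothetical stand-in for a real LLM API client.

def call_model(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real API client."""
    # Toy stub so the sketch runs without network access.
    return f"(model output for: {prompt[:40]}...)"

def agent_loop(goal: str, max_steps: int = 5) -> list[str]:
    """Let the model generate and answer its own follow-up queries."""
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # Step 1: the model writes its own next query from the context so far.
        next_query = call_model(
            "Given the goal and progress below, state the single next "
            "question or action needed, or DONE if finished:\n"
            + "\n".join(history)
        )
        # Step 2: the model answers the query it just wrote.
        result = call_model(next_query)
        history.append(f"Q: {next_query}\nA: {result}")
        if "DONE" in result:  # the model, not the user, decides when to stop
            break
    return history

if __name__ == "__main__":
    for step in agent_loop("Summarize recent AI-safety regulation proposals"):
        print(step)
```

The open-endedness of that loop is precisely what worries critics like Meindertsma: after the initial goal, it is the model, not the user, that chooses every subsequent step.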

As for the lawsuit against Altman and OpenAI, Meindertsma said, Musk might have a point. The whole point of founding OpenAI, he argued, was to guide the development of AI in responsible directions. The current pace of development is quickly outrunning any guardrails the industry has created under Altman and OpenAI’s leadership.

The question is, are we able to control a system if it’s smarter than us?

Joep Meindertsma,
Dutch founder of the group PauseAI

The industry, critics argue, therefore cannot be trusted to regulate itself, and the government must step in to avert potential catastrophe. Meindertsma said capabilities already demonstrated, such as GPT-4’s ability to hack websites autonomously, are critically dangerous, and that a lack of regulation combined with the fast evolution of genAI amounts to an existential threat.

“We should regulate it in the same way we regulate nuclear material,” Meindertsma said.
