The OpenAI fiasco shows why we must regulate artificial intelligence

Many powerful technologies, like nuclear power and genetic engineering, are “dual-use”: They have the potential to bring great benefit, but they could also cause great harm — whether by evil intent or accident.

Most agree it’s wise to regulate such technologies because pure capitalism can drive hasty expansion at the expense of safety.

If leading dual-use technology Company A slows down or invests more of its resources in safety research to avoid catastrophe, trailing Company B will grab the opportunity to rush ahead, since it has less to lose (from investors’ perspective) if things go wrong.

Artificial general intelligence is the most extreme dual-use technology I can imagine.

It’s defined as intelligence at the human level or better on all cognitive tasks, and once we get it, it’s likely to blow way past our level of intelligence in a hurry.

If such “superintelligence” goes right, it could cure cancer, solve climate change and a host of similar problems, ushering in a futuristic utopia.

If we get it wrong, many sober thinkers agree it could be an existential catastrophe for humanity.

Yet AGI development is still largely unregulated, leaving the major corporations to police themselves.

The weekend’s OpenAI fiasco illustrates the difficulties involved: The ChatGPT company’s unusual, not-quite-for-profit corporate structure allowed the independent and largely safety-minded board of directors to remove CEO Sam Altman, despite his popularity with investors — who are clamoring for his return.

Until now, AGI corporations have policed themselves fairly well.

All three major players in the AGI space — OpenAI, DeepMind and Anthropic — were formed with AI safety as a primary concern.

OpenAI’s original statement of purpose was to develop AGI “to benefit humanity as a whole, unconstrained by a need to generate financial return.”

Anthropic was formed by former OpenAI employees who thought OpenAI was not worried enough about safety.

And DeepMind had the blessing of Jaan Tallinn, Skype co-founder and prominent funder of AI safety initiatives.

I know there are several people at all three companies who were hired precisely because they are sincerely concerned about existential risk from AI.

Founders at all three companies signed the statement on AI risk, agreeing: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Many have since cynically assumed those industry signatures were mere ploys to create regulation friendly to entrenched interests.

This is nonsense. First, AI’s existential risk is hardly a problem the industry invented as a pretext; serious academics like Stuart Russell and Max Tegmark, with no financial stake, have been concerned about it since before those AGI corporations were glimmers in their investors’ eyes.

And second, the history of each of these companies suggests they themselves genuinely want to avoid a competitive race to the bottom when it comes to AI safety.

Maddening sentiments like futurist Daniel Jeffries’ tweet illustrate the danger: “The entire AI industry would like to thank the OpenAI board for giving us all a chance to catch up.”

But all these companies needed serious money to do their empirical AI research, and it’s sadly rare for people to hand out big chunks of money just for the good of humanity.

And so Google bought DeepMind, Microsoft invested heavily in OpenAI, and now Amazon is investing in Anthropic.

Each AGI company has been performing a delicate dance between bringing the benefits of near-term, pre-AGI systems to people (thereby pleasing investors) and not risking existential disaster in the process.

One plausible source of the schism between Altman and the board is disagreement over where to strike that tradeoff, a dilemma the AI industry as a whole is facing.

Reasonable people can disagree about that.

Having worked on “aligning” AI for about 10 years now, I am much more concerned about the risks than when I started.

AI alignment is one of those problems — too common in both math and philosophy — that look easy from a distance and get harder the more you dig into them.

Whatever the right risk assessment is, though, I hope we can all agree investor greed should not be a thumb on this particular scale.

Alas, as I write, it’s starting to look like Microsoft and other investors are pressuring OpenAI to remove the voices of caution from its board.

Unregulated profit-seeking should not drive AGI any more than it should drive genetic engineering, pharmaceuticals or nuclear energy.

Given the way things appear to be headed, though, the corporations can no longer be trusted to police themselves; it’s past time to call in the long arm of the law.

Steve Petersen is a professor of philosophy at Niagara University.




