OpenAI Says Technology’s Rapid Advance Requires Urgent Global Oversight
Artificial intelligence’s next major evolution is close at hand and warrants the formation of a global oversight force similar to the International Atomic Energy Agency, OpenAI CEO Sam Altman says.
“Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI,” Altman ominously said in a blog post co-authored by OpenAI president Greg Brockman and chief scientist Ilya Sutskever.
Within 10 years, artificial intelligence systems may exceed experts in many domains and reach productivity levels commensurate with those of entire companies, according to OpenAI. Given the existential stakes posed by such advanced technology, Altman and his co-authors said safeguards are needed to ensure superintelligence will be helpful, rather than hurtful, to humanity.
The comments echoed the concerns Altman raised last week when he testified before Congress about the inherent risks of artificial intelligence.
In Monday’s post, the authors outlined three pillars they saw as essential to smart planning for the future. First, they called for some sort of coordination effort among the world’s leading AI innovators, possibly organized by major governments.
They also said there should be an “international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc.” The International Atomic Energy Agency was referenced as an example of how a globe-spanning regulatory body could look when applied to the world of AI and superintelligence.
Finally, they said it was imperative to have the “technical capability” to keep superintelligence under control and “safe.” As to what that means, even OpenAI acknowledged it’s an open question. They added, however, that technology below a certain threshold (i.e., the bar for superintelligence) should not inherently be subject to burdensome regulatory measures such as licenses and audits.
“This is an open research question that we and others are putting a lot of effort into,” Monday’s post said.