OpenAI creates a new team to tackle ‘superintelligent’ AI systems
OpenAI says it’s planning to create a team to help manage the risks posed by superintelligent AI systems, which it expects to arrive within the decade.
The company behind the popular artificial intelligence (AI) chatbot ChatGPT says it will be forming a team to rein in and manage the risks of superintelligent AI systems.
In a July 5 announcement on its blog, OpenAI said the new team will be created to “steer and control AI systems much smarter than us.”
The company said it believes superintelligence will be “the most impactful technology humanity has ever invented” and could help solve many problems — though it also carries risks.
“The vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.”
OpenAI, which expects superintelligence could arrive this decade, said it would dedicate 20% of the compute it has secured to date to the effort and aims to build a “human-level” automated alignment researcher. That automated researcher would, in theory, help the team keep superintelligent systems safe and aligned with “human intent.”
The company named its chief scientist, Ilya Sutskever, and its head of alignment, Jan Leike, as co-leads of the effort, and issued an open call for machine learning researchers and engineers to join the team.
Related: OpenAI pauses ChatGPT’s Bing feature, as users were jumping paywalls
This announcement from OpenAI comes as governments around the world consider measures to control the development, deployment and use of AI systems.
Regulators in the European Union have made the most progress on AI rules. On June 14, the European Parliament approved its version of the EU AI Act, which would require tools like ChatGPT to disclose all AI-generated content, among other measures.
The bill still requires further negotiation before it takes effect. Nonetheless, it has sparked an outcry from AI developers over its potential impact on innovation.
In May, OpenAI CEO Sam Altman went to Brussels to speak with EU regulators about the potentially negative effects of over-regulation.
Lawmakers in the United States have introduced the National AI Commission Act, which would establish a body to determine the country’s approach to AI. U.S. regulators have likewise been vocal about their desire to regulate the technology.
On June 30, Senator Michael Bennet sent a letter to major tech companies, including OpenAI, urging them to label AI-generated content.
Magazine: BitCulture: Fine art on Solana, AI music, podcast + book reviews