OpenAI has formed a group to assess catastrophic risks from AI
Mariia Shalabaieva/Unsplash
OpenAI announced the creation of a new team dedicated to mitigating potentially dangerous risks associated with artificial intelligence.
Here's What We Know
The Preparedness team will monitor, assess, and work to prevent serious risks posed by AI models, including nuclear threats, biological weapons, deception of humans, and cyberattacks.
OpenAI notes that the new team will also develop a risk-informed policy for AI development. Aleksander Madry, a machine-learning expert from MIT, has been appointed to lead the team.
According to OpenAI, advanced AI models can deliver significant benefits but also carry increasingly severe risks. The company's CEO, Sam Altman, has previously warned of the potentially catastrophic consequences of AI and urged governments to treat the technology as seriously as nuclear weapons.
Source: OpenAI