OpenAI tightens controls on the development of potentially dangerous AI models

By: Bohdan Kaminskyi | 19.12.2023, 14:43

OpenAI, the developer of the popular chatbot ChatGPT, is implementing additional controls to reduce the risk of creating malicious artificial intelligence models.

Here's What We Know

In particular, OpenAI is creating a dedicated safety advisory group that will assess the risks posed by models under development across several areas, including cybersecurity, misinformation, and autonomy.

The group will be able to recommend that the company's top management ban or restrict certain projects. In addition, OpenAI's board of directors will have the right to veto decisions made by executive management.

According to the company, these measures will make it possible to identify potential threats at earlier stages of development. However, experts note that the system's real effectiveness will depend on the policies of OpenAI's management itself.

Notably, proponents of limiting risky AI projects were previously removed from the board of directors, and the new board members are known for their largely commercial outlook. How inclined they will be to actually exercise the veto remains unknown.

Source: TechCrunch