OpenAI has created a team to incorporate the public's ideas into the management of AI models
Mariia Shalabaieva/Unsplash
OpenAI has announced the formation of a new research group, Collective Alignment, which will develop systems for collecting public input and factoring it into decisions about how the company's artificial intelligence models behave.
Here's What We Know
According to the developers, Collective Alignment will help ensure that future AI models align with society's values and expectations.
The team grew out of a grant programme launched last year, which funded experiments in bringing public scrutiny and "democratic processes" into the governance of AI systems.
OpenAI has now published the grantees' code and results. Going forward, the Collective Alignment team will integrate the most promising prototypes into the company's products and services.
OpenAI says the initiative is intended to make its AI as useful and safe for society as possible. Some experts, however, point to the company's commercial interest in the effort and the risk that it could serve as a vehicle for regulatory lobbying.
Source: OpenAI