OpenAI launches Red Teaming Network group to improve the reliability of its models
OpenAI has announced the launch of its Red Teaming Network programme, a group of contracted experts who will help assess and mitigate risks in the company's artificial intelligence models.
Here's What We Know
According to OpenAI, the group will help identify biases in models like DALL-E 2 and GPT-4, as well as find prompts that cause the models to ignore safety filters.
The company has previously brought in third-party testers, but this work is now being formalised in order to "deepen and broaden" those checks with outside experts and organisations.
Network participants will be able to discuss general AI testing practices with one another. That said, not every member will necessarily be involved in testing each new OpenAI model or product.
The company is also inviting experts from a variety of fields, including linguistics, biometrics, finance, and healthcare. Prior experience with AI systems is not required, but OpenAI warned that participation may involve non-disclosure agreements that could affect members' other research.
Source: OpenAI