OpenAI has acknowledged that GPT-4 could pose a "minimal risk" of helping to develop biological weapons
Mariia Shalabaieva/Unsplash
OpenAI conducted a study to assess the risk that its new GPT-4 model could be used to develop biological weapons. The results showed that access to the model's capabilities provides "at least a marginal" increase in the efficiency of gathering information needed to create biothreats.
Here's What We Know
Fifty biologists and fifty students took part in the experiment. They were asked to find information on cultivating and distributing dangerous biological agents. Half of the participants were given access to a special, unrestricted version of GPT-4; the other half used only the regular internet.
A comparison of the results showed a slight increase in the accuracy and completeness of answers in the group with access to the AI. The OpenAI researchers concluded that there is an "at least minor" risk of the model being used to gather data for bioweapons development.
Experts and policymakers have previously raised concerns about the potential use of chatbots to develop biological threats. In October, the US president instructed the Department of Energy to ensure that AI systems do not pose chemical or biological risks. The OpenAI team says it is now working to minimise such risks as AI technology advances.
Source: OpenAI