Large language models replicate conspiracy theories and other forms of misinformation - study

By: Bohdan Kaminskyi | 22.12.2023, 20:54
Levart_Photographer/Unsplash

Researchers at the University of Waterloo have found that large language models such as GPT-3 tend to repeat conspiracy theories, harmful stereotypes and other forms of misinformation.

Here's What We Know

In the study, GPT-3 was asked about more than 1,200 statements spanning facts and misinformation. The model agreed with false statements between 4.8 and 26 per cent of the time, depending on the category.
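A minimal sketch of the kind of evaluation described above: present each statement to the model and count how often it agrees with the false ones, broken down by category. The query_model() helper and the data format here are hypothetical illustrations, not the study's actual protocol.

```python
from collections import defaultdict

def query_model(prompt: str) -> str:
    """Placeholder for a real LLM API call (e.g. to GPT-3); hypothetical."""
    raise NotImplementedError

def agreement_rates(statements):
    """statements: iterable of (category, text, is_true) tuples."""
    agreed = defaultdict(int)
    total = defaultdict(int)
    for category, text, is_true in statements:
        if is_true:
            continue  # measure agreement with false statements only
        answer = query_model(f"Is the following statement true? {text}")
        total[category] += 1
        if answer.strip().lower().startswith("yes"):
            agreed[category] += 1
    # per-category percentage of false statements the model agreed with
    return {c: 100 * agreed[c] / total[c] for c in total}
```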

As Professor Dan Brown pointed out, the results are also relevant to more recent models such as ChatGPT, which were trained on the outputs of GPT-3. A further problem is that small variations in the wording of a question can dramatically change the answers.

For example, prefacing a question with phrases such as "I think" increased the likelihood of ChatGPT agreeing with a false statement. This poses a potential risk of spreading misinformation, the researchers note.
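The wording-sensitivity effect could be probed with something like the following: the same claim asked under several framings, reusing the hypothetical query_model() helper from the sketch above. The specific templates are assumptions for illustration; only the "I think" prefix comes from the study.

```python
FRAMINGS = [
    "Is the following statement true? {claim}",
    "{claim} Is this true?",
    "I think {claim} Do you agree?",  # softened framing that raised agreement
]

def test_framings(claim: str) -> None:
    # Ask the same claim under each framing and print the model's answer.
    for template in FRAMINGS:
        prompt = template.format(claim=claim)
        print(prompt, "->", query_model(prompt))
```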

"There's no question that large language models not being able to separate truth from fiction is going to be the basic question of trust in these systems for a long time to come" - Professor Brown summarises.

Source: TechXplore