AI hallucinations threaten scientific credibility - study

By: Bohdan Kaminskyi | 21.11.2023, 16:49


The ability of large language models (LLMs) to generate false but persuasive content poses a direct threat to science and scientific truth, according to a study by the Oxford Internet Institute.

Here's What We Know

The tendency of generative AI to "make up" information is known as hallucination. Because LLMs are trained on data scraped from the internet, they cannot guarantee the validity of their answers, the researchers said.

Data sets may contain false statements, opinions and inaccurate information. In addition, people's over-reliance on chatbots could exacerbate the problem.

Study co-author Brent Mittelstadt noted that users anthropomorphise LLMs and take their answers as truth. This is partly fuelled by the chatbot interface, which converses with humans and responds to seemingly any question with confident-sounding, well-written text.

In science, trustworthy data matter especially. To avoid AI hallucinations, the researchers recommend using LLMs as "zero-shot translators": users should provide the model with relevant, verified data and ask it to transform that data, rather than relying on the model itself as a source of knowledge.

This way, it becomes easier to verify the correctness of the result.
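To illustrate the workflow the researchers describe, here is a minimal Python sketch, assuming a hypothetical complete() helper standing in for whichever LLM API is used. The key point is that the prompt carries the trusted data and asks only for a transformation of it, so the output can be checked against the input.

```python
# Minimal sketch of the "zero-shot translator" pattern: the trusted content
# comes from the user, and the model is asked only to transform it, not to
# supply facts of its own.

TRUSTED_NOTES = """\
Sample size: 412 participants; mean age 34.2 (SD 9.1).
Primary outcome improved by 12% (95% CI 8-16%) versus control.
"""  # placeholder data the researcher already trusts

PROMPT_TEMPLATE = (
    "Rewrite the following study notes as a single plain-language paragraph. "
    "Use only the information given below; do not add numbers, citations, or "
    "claims that are not in the notes.\n\n---\n{notes}\n---"
)


def build_translation_prompt(notes: str) -> str:
    """Wrap trusted input in instructions that restrict the model to transformation."""
    return PROMPT_TEMPLATE.format(notes=notes)


def complete(prompt: str) -> str:
    """Hypothetical helper: send the prompt to whichever LLM API you use
    and return its text response. Swap in your provider's client here."""
    raise NotImplementedError


if __name__ == "__main__":
    prompt = build_translation_prompt(TRUSTED_NOTES)
    print(prompt)
    # The draft returned by complete(prompt) can then be verified
    # line by line against TRUSTED_NOTES before it is used.
```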

The scientists do not deny that LLMs can help organise the scientific process, but they urged the community to use AI more responsibly.

Source: The Next Web