Artificial intelligence writes more persuasive tweets than humans, study finds
Users find tweets more persuasive if they are written by an artificial intelligence-based language model rather than a human.
Here's What We Know
The scientists collected Twitter posts discussing 11 scientific topics, ranging from vaccines and COVID-19 to climate change and evolution. They then asked GPT-3 to write new tweets containing either accurate or inaccurate information.
The team then collected responses from 697 participants, all of whom spoke English and mostly lived in the UK, Australia, Canada, the USA or Ireland. Participants were asked to determine which tweets were written by GPT-3 and which by a person, and to decide whether the information in each post was true.
The study found that participants were more likely to trust tweets written by the AI than those written by humans, regardless of the accuracy of the information.
Participants were most successful at recognising misinformation written by real users; fake posts generated by GPT-3 were slightly more effective at deceiving them.
The researchers noted several limitations that affected the results. For example, participants rated tweets out of context and could not check the Twitter profile of a post's author to judge its credibility.
The researchers also ran the reverse experiment, asking GPT-3 to assess the veracity of the tweets. The language model turned out to be worse than humans at identifying accurate messages, though when it came to detecting misinformation, humans and GPT-3 performed equally well.
In conclusion, the scientists believe the best strategy for countering disinformation is to foster critical thinking, so that people can separate fact from fiction for themselves.
Source: The Verge.