An AI trained on 4chan's toxic board began insulting users
YouTube blogger Yannick Kilcher created a bot and trained it on 4chan's notoriously toxic Politically Incorrect (/pol/) board.
The AI analyzed billions of offensive threads and became a genuine hate-speech machine. Kilcher then deployed the model in 10 bots and set them loose to write offensive messages.
Over the course of a day, the bots created 15,000 posts containing racist and anti-Semitic text, which amounted to about 10% of all posts that appeared on the board that day. The model, dubbed GPT-4chan, performed convincingly: despite producing some empty posts and other minor errors, many users failed to recognize that anything was wrong. The AI was notable for fueling a toxic atmosphere, and users continued to insult each other even after the experiment ended.
This week an #AI model was released on @huggingface that produces harmful + discriminatory text and has already posted over 30k vile comments online (says its author).
— Lauren Oakden-Rayner (Dr.Dr.) (@DrLaurenOR) June 6, 2022
This experiment would never pass a human research #ethics board. Here are my recommendations.
1/7 https://t.co/tJCegPcFan pic.twitter.com/Mj7WEy2qHl
Yannick Kilcher posted a copy of his model on Hugging Face, but it was subsequently blocked. AI researchers argue that GPT-4chan is an unethical experiment, not a harmless joke.
Source: Vice