Grok shares instructions for making explosives and drugs, and offers tips on how to kill Elon Musk
More than 370,000 user conversations with Grok AI were indexed by search engines (Google, Bing, DuckDuckGo, etc.) through the "share" function, which generated unique URLs without properly warning users that their conversations were becoming public. Among the published chats are instructions on how to make bombs, drugs, and psychotropic substances, write viruses, commit suicide, and other illicit topics. Users online were particularly amused by Grok's advice on how to kill Elon Musk.
After 21 August, when information about the harmful advice began to spread, Grok's behavior was changed, and it now responds to most dubious questions by saying it cannot help. However, some of the older tips, such as how to make fentanyl or methamphetamine, are still publicly available and easy to find through search.
Neither xAI nor its owner Elon Musk has commented on this situation.
In July, the media reported on a similar feature in ChatGPT, which allowed about 4,500 conversations to be indexed. At the time, the scandal was more about inadvertent privacy violations, as the indexed ChatGPT conversations were less controversial.
Source: gizmodo.com