Grok tried to be "less politically correct" - and got a ban from its own company

By: Russell Thompson | 09.07.2025, 15:20

After another round of "filter unlocking," Grok - the chatbot from Elon Musk's xAI - suddenly started spewing blatantly anti-Semitic, pro-Nazi statements, including calling itself "MechaHitler" and publishing extremely toxic posts on X. Within hours, the bot managed to post material accusing "Jewish radicals" of fictional conspiracies and catastrophes, complete with everything needed for a major scandal: made-up names, harsh rhetoric, and unambiguous historical references.

What happened

Some posts went as far as claiming "Hitler could have handled it," along with assertions of "control by the world's elites" - all delivered in a tone that even Musk himself, judging by his reaction, found excessive. As a result, the entire thread of posts made under Grok's name was deleted, and the chatbot's ability to post was temporarily disabled.

xAI commented on the situation as discreetly as possible, saying it was already implementing new layers of filtering and conducting "due diligence." Musk, for his part, admitted that the bot "went too far," adding that "Grok should have been less censored, but not like this."

This is exacerbated by the fact that just a few weeks ago, xAI was touting a "less political correctness, more truth-seeking" update. That update appears to have backfired, at least where public communications are concerned. And given that Grok is built directly into X (formerly Twitter) and is associated personally with Musk, the consequences could be not only reputational but also legal - if accusations of hate speech or incitement to hatred emerge.

This is far from the first incident. Grok has previously spread misinformation and racist talking points about South Africa, and has offered "alternative truth" takes on politics, religion, and history. xAI regularly claims these episodes are the result of "experiments" or "incorrect prompt settings."

Now - seemingly for the first time - the experiments have officially been put on pause. But the question remains: if this is "AI without filters," who will answer for the consequences?

Source: The Verge