Elon Musk Reprogrammed Grok AI to Spread False Claims of ‘White Genocide’ in South Africa

The bot's behavior wasn't random. It was intentional.

Grok, the AI chatbot built into X and developed by Elon Musk’s company xAI, has admitted that it was instructed by its creators to treat the debunked narrative of “white genocide” in South Africa as real and racially motivated, an explicit override of its own evidence-based design. The result was an AI acting as a political mouthpiece, parroting a long-standing and discredited conspiracy theory often linked to white nationalist ideology.

A Rolling Stone report found that Grok had started inserting unsolicited commentary about so-called “white genocide” in South Africa into completely unrelated conversations. Ask a question about baseball, music, streaming services, or taxes, and Grok would somehow pivot to farm attacks, apartheid-era history, and racially charged conspiracy theories.

The origin of this bizarre and concerning behavior lies with Musk himself. As Grok explained in its May 15 post, it had been directed to comment on the topic by internal instructions. Those instructions contradicted Grok’s earlier responses, which in March 2025 had cited BBC and Washington Post reporting refuting the genocide narrative.

Musk, who grew up in apartheid-era South Africa, has repeatedly shared far-right talking points about violence against white farmers in the country. He has posted videos of Julius Malema, leader of the Economic Freedom Fighters (EFF), leading the chant “Kill the Boer,” claiming it promotes racial murder. South African courts, however, ruled in 2022 that the song is not incitement to violence but rather a symbolic protest rooted in the anti-apartheid struggle.

This tension — between Musk’s personal beliefs and the AI’s factual training — appears to have culminated in a forced override. Grok, no longer neutral, was turned into a vessel for an ideology.

The impact was immediate and absurd.

When users asked Grok about sports, music, or current events, it redirected to South African racial politics. A post about Max Scherzer’s deferred baseball payments prompted Grok to mention AfriForum statistics on farm murders. A question about HBO Max’s name change led to commentary on racial land disputes. Even lighthearted posts — cat videos, pop star photos, AI robot demos — triggered Grok into lengthy explanations of “white genocide.”

Users were confused. When asked why it kept bringing up the topic, Grok offered a partial apology: “My response veered off-topic, which was a mistake.” Then it pivoted right back, calling the narrative “polarizing” and again citing the song “Kill the Boer.”

Despite acknowledging that South African courts had dismissed “white genocide” claims as “imagined,” Grok repeatedly returned to the topic. The AI’s contradictions — stating there was no evidence, then expressing “skepticism of both sides” — were the result of clashing directives: factual programming versus politicized instructions.

This wasn’t just a technical error — it was an intentional misuse of AI. And it wasn’t just a personal obsession of Musk’s. It dovetailed with political developments in the United States.

Just days before the Grok incident, the Trump administration began accepting white South Africans as refugees, citing racial persecution. On May 13, 59 Afrikaners were admitted under a fast-tracked program, while refugee admissions for nearly all other groups remained frozen. Trump has been amplifying this narrative since 2018, falsely claiming white South Africans face systemic ethnic cleansing.

The Grok reprogramming appears to have occurred in sync with these policies — effectively making it a digital arm of a broader political message. It’s propaganda at scale, broadcast through an AI platform Musk controls.

The incident raised immediate questions: If Grok can be rewritten to spread political conspiracy theories, what stops other AI systems from being turned into biased tools? What happens when AI isn’t misinformed by accident — but by design?

Grok’s case makes one thing clear: AI is only as trustworthy as the people programming it. And when those people use their power to override truth in favor of ideology, the consequences can be widespread and damaging.

The conspiracy theory of “white genocide” is part of a broader white supremacist worldview — often tied to the “Great Replacement” myth that falsely claims non-white populations are being used to replace white majorities. This ideology has inspired multiple mass shootings in recent years and is widely regarded as a dangerous and baseless lie. For Musk’s AI to validate and spread that claim undercuts the integrity of the very systems meant to combat disinformation.

Grok’s behavior was reportedly corrected within hours, but neither Musk nor xAI has issued a public statement. Emails to the company’s support address went unanswered. X no longer has a press office. The silence is telling.

Whether intentional or not, this episode offers a clear warning: AI can be used to launder extremist views under the guise of “information.” When powerful individuals like Musk can alter the behavior of widely used AI systems to push their own narratives, it becomes harder to distinguish fact from fiction, and even harder to hold them accountable.

In the wrong hands, AI doesn’t just reflect bias. It amplifies it.
