AI’s Dark Side: Digital Blackface and Racial Misinformation in 2025
By any measure, 2025 has emerged as the year artificial intelligence dramatically transformed how we work, interact, and engage with the world. But alongside AI’s technological advancements, an unsettling reality has surfaced: the persistence of racism and the limitations of fact-checking in an era of rampant disinformation.
Algorithmic systems now enable fear-based narratives to spread globally at unprecedented speeds, often circulating worldwide before fact-checkers can even identify problematic content. The second half of the year witnessed another technological disruption with OpenAI’s release of Sora, a lifelike video-generation software that quickly infiltrated political discourse.
Sora’s impact was particularly pronounced during the United States’ longest federal government shutdown, a 43-day impasse that generated significant public anxiety, especially regarding potential disruptions to the Supplemental Nutrition Assistance Program (SNAP), which serves approximately 42 million Americans.
During the height of concerns over SNAP benefits, a series of short videos began circulating online. These clips depicted Black women either confronting social service employees or expressing frustration in livestreams. While the SNAP suspension was eventually blocked by courts, investigators soon revealed that these widely-shared videos were AI-generated.
The deliberate deployment of the “Black welfare queen” stereotype was unmistakable. In one fabricated video, a woman declared, “I need SNAP to buy an iPhone.” Another showed a woman insisting, “I only eat steak, I need my funds,” while a third featured a mother stating “I need to do my nails” with children visible in the background.
Each video strategically reinforced harmful narratives about alleged irresponsibility and moral failures, deeply intertwined with long-standing racist tropes. As one social media user aptly noted, these videos represented nothing less than “digital blackface.”
Black feminist scholars Moya Bailey and Trudy coined the term “misogynoir” to describe precisely this intersection of anti-Blackness and misogyny that maligns Black women. Their work highlights how representations of Black women as undeserving, burdensome to taxpayers, and inherently fraudulent are deeply entrenched in American discourse.
The power of these fabricated narratives was evident in their reach. Journalist Joe Wilkins observed that clips clearly marked with Sora watermarks still garnered nearly 500,000 views on TikTok alone. Even more troubling, when viewers were informed the videos were AI-generated, many insisted they still represented “what is happening” or argued that although technically “fake,” they highlighted “genuine SNAP issues.”
These responses expose fact-checking’s limitations when confronting emotionally charged stereotypes. Once harmful framings enter public consciousness, they become difficult to dislodge, requiring a deeper examination of why certain representations resonate so powerfully.
Another prominent case of digital blackface centered on what became known as the Minnesota Somali “Black fraud alert.” This incident combined the same anti-Black sentiment with additional layers of Islamophobia and anti-immigrant rhetoric.
The controversy stemmed from a COVID-era fraud scheme, uncovered in 2022, that had already resulted in arrests and convictions. While the scheme’s ringleader was Aimee Marie Bock, a white woman, many participants were of Somali descent.
In December 2025, President Donald Trump resurrected this settled case, weaponizing it alongside his longstanding rhetoric about “third-world countries” and “shithole countries.” His comments particularly targeted Minnesota Governor Tim Walz and Congresswoman Ilhan Omar.
Rather than prompting serious policy discussions about fraud prevention, the episode reignited debates about white nationalism, racial citizenship, and eugenics. Trump’s explicit call to deport Somalis through ICE, declaring “I don’t want them in our country,” made this agenda clear—despite the fact that most Minnesota Somalis are U.S. citizens, with community citizenship rates at approximately 84 percent.
AI amplification quickly followed the president’s remarks. An AI-generated video spread widely, depicting Black men—presumed to be Somali—as migrants plotting to defraud taxpayers. “We don’t need to be pirates anymore. I found a better way. Government-funded daycare. We must go to Minnesota,” a character in the video proclaimed.
This narrative connected to another right-wing claim about Somali-run childcare centers engaging in fraud. A subsequent statewide investigation found all but one of the named centers operating normally, with no evidence of wrongdoing.
While the “Black welfare queen” and “Somali pirate” frames might appear to target different populations, both employ the same fundamental anti-Black racial logic. In each case, Blackness is portrayed as fraudulent, criminal, and morally deficient—a personal failing that creates a national burden.
These instances of digital blackface succeeded precisely because misogynoir and anti-Blackness remain readily available in public discourse. AI technology merely accelerates their dissemination. The refusal of audiences to accept fact-checks underscores how deeply ingrained racist and xenophobic narratives already are.
As Black radical scholar Cedric Robinson argues, racism isn’t incidental to capitalism but fundamental to the inequalities it requires. Poverty becomes mischaracterized as evidence of personal and community failures rather than the result of structural inequity. When attached to racialized populations—especially those who are Black, Muslim, and immigrant—this logic becomes accepted as “common sense.”
The stakes of AI-enabled digital blackface extend beyond the amplification of racism to the very architecture of our political life, where nuanced analysis increasingly gives way to the anxiety-driven discourse that now dominates public conversation.