AI-Generated Visual Disinformation Creates New Digital Divides
The rapid advancement of artificial intelligence technology is transforming how visual disinformation spreads online, creating disproportionate risks for marginalized communities worldwide, according to recent research.
As generative AI capabilities expand, the creation of photorealistic fake images and videos has become increasingly accessible, with the market for these tools projected to grow at a staggering 40% annually, from $16 billion in 2024 to $85 billion by 2029. This technological revolution has troubling implications for social equity in digital spaces.
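Those figures are internally consistent: a quick compound-growth check, sketched here in Python using only the numbers quoted above, gives a rate just under 40% a year.

```python
# Check the article's projection: $16B in 2024 growing to $85B in 2029,
# i.e., five years of compounding.
start, end, years = 16e9, 85e9, 5

# Compound annual growth rate
cagr = (end / start) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.1%}")  # -> 39.7%
```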
Visual content has particular power to influence, receiving 94% more views than text-only content and generating significantly higher engagement rates across social platforms. When combined with AI’s capacity to produce convincing synthetic media, this creates a potent vehicle for spreading misinformation.
“Communities already experiencing systemic marginalization face compounded vulnerabilities in the AI-driven information ecosystem,” explains Dr. Vinanda Cinta Cendekia Putri, who synthesized findings from multiple studies. “These vulnerabilities emerge through multiple pathways, from algorithmic targeting to limited access to verification tools.”
The research reveals that AI systems systematically disadvantage specific demographic groups through three primary mechanisms: biased content generation, inequitable distribution algorithms, and differential access to verification tools.
Biases in AI-generated content stem directly from the training data used to develop these systems. Major image generation models demonstrate systematic underrepresentation of non-Western contexts, stereotypical portrayals of racial and ethnic minorities, and skewed gender representations. When prompted with neutral descriptors, these systems disproportionately generate images aligned with dominant cultural stereotypes.
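One way such skew can be surfaced is a simple representation audit: tally the demographic or regional labels attached to a training set (or a sample of generated outputs) and compare them to a reference baseline. The sketch below is purely illustrative, not a method from the research; the sample labels and baseline shares are hypothetical stand-ins.

```python
from collections import Counter

# Illustrative representation audit: tally region labels attached to
# training images and compare against a reference baseline. The sample
# and baseline shares below are hypothetical stand-ins.
manifest_labels = [
    "north_america", "europe", "north_america",
    "east_asia", "europe", "north_america",
]
reference_share = {"north_america": 0.08, "europe": 0.10,
                   "east_asia": 0.21, "rest_of_world": 0.61}

counts = Counter(manifest_labels)
total = sum(counts.values())
for region, baseline in reference_share.items():
    observed = counts.get(region, 0) / total
    print(f"{region:>14}: observed {observed:.0%} vs baseline {baseline:.0%}")
```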
Platform recommendation algorithms further amplify these disparities. Research shows that algorithms disproportionately expose certain demographic groups to misleading visual content, particularly during politically charged periods or health crises. Content reinforcing dominant narratives typically generates higher engagement among majority populations, creating feedback loops that marginalize counter-narratives.
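The feedback-loop dynamic is easy to reproduce in a toy model. The simulation below is a minimal sketch, not any platform's actual ranking logic: exposure is proportional to accumulated engagement, engagement feeds back into exposure, and the assumed engagement probabilities (80% versus 20%) are invented for illustration.

```python
import random

random.seed(0)

# Toy engagement-driven feed, not any platform's real ranking logic.
# Exposure is proportional to accumulated engagement, and each
# engagement raises future exposure. Probabilities are invented:
# the majority audience engages with one narrative 80% of the time.
engage_prob = {"dominant_narrative": 0.8, "counter_narrative": 0.2}
score = dict.fromkeys(engage_prob, 1.0)
items = list(engage_prob)

for _ in range(10_000):
    shown = random.choices(items, weights=[score[i] for i in items])[0]
    if random.random() < engage_prob[shown]:
        score[shown] += 1.0  # engagement feeds back into future exposure

total = sum(score.values())
for item in items:
    print(f"{item}: {score[item] / total:.0%} of exposure weight")
```

After a few thousand iterations, the majority-preferred item holds nearly all of the exposure weight: the feedback loop the research describes, reduced to a dozen lines.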
Multiple Dimensions of Vulnerability
The research identifies several intersecting vulnerabilities that create unequal exposure to AI-generated disinformation:
Race and ethnicity play a significant role: facial recognition systems, which often serve as components of verification tools, show significantly higher error rates for individuals with darker skin tones. This “technology performance gap” means minority communities cannot rely equally on AI-powered tools to detect synthetic media (a gap illustrated in the sketch after this list).
Socioeconomic status creates fundamental inequities in both exposure to and protection from AI-driven visual disinformation. Low-income populations typically access the internet through mobile devices with limited screen size, constraining their ability to scrutinize image authenticity. These communities also face restricted access to premium fact-checking services and high-quality media literacy education.
Geographic and linguistic barriers particularly affect the Global South, which faces infrastructure limitations and cultural distance from dominant fact-checking ecosystems. Major fact-checking organizations predominantly operate in English and serve Western audiences, leaving non-English-speaking communities with minimal access to authoritative resources.
Digital literacy levels also influence vulnerability, as individuals with lower educational attainment demonstrate reduced confidence in identifying manipulated images and greater reliance on social proof as a credibility indicator.
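To see how the “technology performance gap” noted above can hide inside a single aggregate accuracy number, consider a per-group error audit. Everything in this sketch, including the group names, records, and resulting error rates, is hypothetical; the point is only that disaggregating by group is what exposes the disparity.

```python
# Hypothetical audit of a face-verification component.
# Each record: (demographic group, ground-truth match, model prediction).
# All values are invented stand-ins, not real benchmark data.
records = [
    ("group_a", True, True), ("group_a", True, True),
    ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, True),
    ("group_b", False, True), ("group_b", True, False),
]

for group in sorted({g for g, _, _ in records}):
    rows = [(truth, pred) for g, truth, pred in records if g == group]
    errors = sum(truth != pred for truth, pred in rows)
    # Aggregate accuracy would hide this gap; per-group rates expose it.
    print(f"{group}: error rate {errors / len(rows):.0%}")
```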
Content Moderation Disparities
Content moderation systems, increasingly powered by AI, show systematic disparities in protecting different demographic groups from harmful disinformation. Automated moderation tools frequently misclassify content from minority communities due to biases in training data, leading to both over-moderation of legitimate speech and under-moderation of harmful content.
Analysis of platform enforcement reveals that AI-generated disinformation targeting marginalized communities is removed more slowly than similar content affecting majority populations. This creates what researchers term “algorithmic redlining,” wherein marginalized groups receive systematically lower-quality information through biased content curation.
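Enforcement disparities of this kind are, in principle, measurable from moderation logs. A minimal sketch, assuming a hypothetical log of confirmed disinformation takedowns recording the targeted community and hours until removal:

```python
from statistics import median

# Hypothetical moderation log: the community a confirmed piece of
# disinformation targeted, and hours until the platform removed it.
removals = [
    ("majority", 3.0), ("majority", 5.5), ("majority", 2.0),
    ("marginalized", 18.0), ("marginalized", 26.5), ("marginalized", 11.0),
]

by_group: dict[str, list[float]] = {}
for group, hours in removals:
    by_group.setdefault(group, []).append(hours)

for group, hours in sorted(by_group.items()):
    print(f"{group}: median time-to-removal {median(hours):.1f} h")
```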
Toward Solutions
Addressing these inequities requires comprehensive approaches that span technical, educational, regulatory, and community-centered strategies.
Mandatory algorithmic impact assessments should evaluate how AI systems affect different demographic groups, with particular attention to those facing multiple disadvantages. Platform accountability mechanisms must incorporate meaningful participation from affected communities in designing content moderation systems.
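The research does not prescribe a specific metric for such assessments, but a common starting point is a disparate-impact check: compare how often an automated decision, such as a takedown or a “likely synthetic” flag, lands on each group. A minimal sketch with invented counts:

```python
# Illustrative disparate-impact check on an automated decision,
# e.g., a "likely synthetic" takedown flag. All counts are invented.
flagged = {"group_a": 120, "group_b": 340}     # items flagged, per group
reviewed = {"group_a": 4000, "group_b": 4200}  # items reviewed, per group

rates = {g: flagged[g] / reviewed[g] for g in flagged}
lowest = min(rates.values())
for group, rate in sorted(rates.items()):
    # Ratios well above 1.0 flag unequal treatment worth investigating.
    print(f"{group}: flag rate {rate:.1%}, {rate / lowest:.1f}x the lowest")
```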
Traditional media literacy interventions show limited effectiveness, as they typically assume Western cultural contexts, high educational attainment, and extensive digital access. Effective interventions must be co-designed with target communities, incorporating local knowledge systems and leveraging existing community networks.
“Only through sustained commitment to centering the experiences and needs of the most vulnerable populations can we build AI systems and information ecosystems that advance rather than undermine digital equity,” Dr. Putri concludes.
As generative AI technologies continue to evolve, the research highlights the urgent need for equity-centered approaches to governance, detection, and literacy that protect all communities from the harms of visual disinformation.
11 Comments
The finding that visual content has a much higher influence than text alone is troubling in the context of AI-generated disinformation. This underscores the need for robust media literacy programs to help people critically assess online content.
Agreed. Equipping people, especially vulnerable populations, with the skills to discern authentic from synthetic media is crucial. Digital literacy must be a key part of the solution.
This research underscores the need for proactive measures to mitigate the risks of AI-generated disinformation, particularly for marginalized communities. Policymakers, tech companies, and civil society must collaborate to develop effective solutions that promote digital inclusion and resilience.
The growth of the AI-generated visual disinformation market is staggering. With synthetic media becoming increasingly convincing, the potential for abuse is alarming. Policymakers will need to act swiftly to mitigate these risks and protect vulnerable populations.
Absolutely. The scale and pace of this technological shift are deeply concerning. Proactive and well-designed interventions will be essential to ensure marginalized groups aren’t further marginalized in the digital world.
This is an alarming trend that requires urgent attention. The rapid growth of the AI-generated disinformation market poses a serious threat to digital equity and the ability of marginalized communities to access reliable information. Policymakers must act now.
This is a concerning development. AI-generated disinformation could exacerbate existing digital divides and make it harder for marginalized groups to access reliable information online. Careful regulation and digital literacy initiatives will be crucial to address these challenges.
Agreed. Marginalized communities are already disadvantaged in digital spaces – this AI-driven disinformation makes the situation even more precarious. Robust solutions are needed to safeguard digital equity.
This research highlights the compounding challenges marginalized communities face in navigating the AI-driven information ecosystem. Disproportionate vulnerabilities require tailored solutions that empower these groups and promote digital inclusivity.
The disproportionate impact of AI-driven disinformation on marginalized groups is deeply concerning. Addressing this challenge will require a multifaceted approach, including technological safeguards, media literacy programs, and targeted support for vulnerable communities.
Well said. A comprehensive strategy that tackles the problem from multiple angles is essential. Leaving any gaps will allow the problem to persist and worsen, further entrenching digital inequities.