Digital platforms that once promised connection and innovation are now presenting significant risks for displaced populations as hate speech, misinformation, and disinformation proliferate online. The United Nations High Commissioner for Refugees (UNHCR) has identified these digital threats as a growing concern that directly impacts vulnerable communities and impedes humanitarian assistance efforts.
When social media emerged a generation ago, it was welcomed as a revolutionary tool for global connectivity. However, the initial optimism has faded as extremist content and false information have moved from fringe spaces into mainstream digital discourse, creating tangible harm for displaced and stateless persons worldwide.
“Manipulated information is widely distributed by social media platforms, while trusted sources, facts and testimony are suppressed,” notes the UNHCR, highlighting how this erosion of information integrity directly threatens protection efforts for forcibly displaced populations.
The risks are especially pronounced in regions experiencing conflict or political instability. One prominent example is the coordinated spread of disinformation and hate speech against Rohingya communities in Myanmar, where digital platforms became vehicles for discrimination and violence.
Beyond direct harm to vulnerable communities, these online threats also undermine humanitarian operations. Misinformation about aid organizations can erode trust in critical services, hampering the delivery of essential assistance and potentially endangering humanitarian workers in the field.
In response to these challenges, the UNHCR has made digital protection a cornerstone of its Digital Transformation Strategy for 2022-2026. The organization is focusing on three key areas to address these digital threats.
First, the agency is working to increase awareness about how online harms directly impact displaced communities. By documenting and sharing evidence of these effects, the UNHCR hopes to mobilize greater response from technology platforms and policymakers.
Second, the organization is investing in research to better understand these challenges across different regional and cultural contexts. While global platforms may be common vehicles for harmful content, the manifestations of hate speech and disinformation vary significantly across different languages, cultures, and geopolitical settings.
“To properly understand and mitigate these harms we must work directly with affected communities,” the UNHCR emphasizes, noting the critical importance of including perspectives from regions beyond major technology hubs. The organization is also monitoring emerging technologies like generative AI that could potentially amplify these problems.
The third focus area involves building partnerships across sectors. The UNHCR has established a specialized team dedicated to understanding and mitigating digital harms in humanitarian contexts, but recognizes that effective solutions require collaboration between governments, private companies, civil society organizations, and affected communities.
Technology companies have begun taking steps to address these issues, but face complex ethical and legal questions, particularly in conflict zones. The UNHCR is positioning itself as a partner that can help these companies better understand humanitarian contexts and develop more responsible approaches.
“Neither advertisers nor users should want to see or indirectly fund the spread of hate and disinformation on digital platforms,” the agency notes, highlighting the business case for technology companies to create healthier digital environments.
Gisella Lomax, who serves as Senior Advisor on Information Integrity for the UNHCR, is spearheading these efforts from the United Kingdom. Her team focuses specifically on addressing the harmful impact of misinformation, disinformation, and hate speech on displaced populations and humanitarian operations.
As digital platforms continue to evolve, the UNHCR’s work underscores the urgent need for coordinated action to ensure these technologies serve as tools for connection and support rather than vehicles for further marginalization of already vulnerable communities.