Understanding Misinformation: Impact, Spread and Protection for Children
False information online poses significant challenges for children navigating today’s digital landscape. While “fake news” remains a common term, experts recommend using more precise terminology: “misinformation” refers to false information spread by people who believe it’s true, while “disinformation” describes deliberate spreading of falsehoods by those who know the information is false.
Recent Ofcom research highlights concerning trends among young users: 32% of children aged 8-17 believe most or all social media content is true, and although 70% of 12-17 year-olds express confidence in identifying false information, nearly a quarter failed to do so when tested. This confidence-ability gap leaves children vulnerable online.
“This mismatch between confidence and actual ability creates significant risk,” says media literacy expert Dr. Emma Wilson. “Children think they can spot misinformation but often lack the critical evaluation skills needed to protect themselves.”
Misinformation affects children in multiple ways, harming mental health, physical wellbeing, and future finances, and shaping their views of others. It can lead to confusion and erode trust in legitimate information sources. According to the National Literacy Trust, half of children surveyed reported worrying about fake news, with teachers noting increased anxiety, self-esteem issues, and distorted worldviews among students.
UNICEF identifies seven main types of mis/disinformation affecting children: satire or parody (misleading but not intended to harm), false connections (clickbait with misleading headlines or visuals), misleading content (information framed to mislead), fake context (genuine content presented with false background), imposter content (impersonation of legitimate sources), manipulated content (altered information including deepfakes), and completely fabricated content.
The digital ecosystem facilitates rapid spread of false information. Social media algorithms, designed to maximize engagement, inadvertently promote controversial content regardless of accuracy. Angry reactions or comments attempting to debunk false claims actually increase visibility, as algorithms measure popularity rather than veracity.
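The ranking behaviour described above can be illustrated with a minimal sketch. This is a hypothetical simplification, not any platform's actual algorithm; the weights and post data are invented for illustration. The key point it demonstrates is that when a feed ranks purely by interaction volume, a false post that provokes angry comments and shares outranks an accurate one, because accuracy is never consulted.

```python
# Hypothetical engagement-based ranking, simplified for illustration.
# Weights and post data are illustrative assumptions, not real platform values.

def engagement_score(post):
    """Score a post purely by interaction volume.

    Every reaction counts toward the score, including angry reactions
    and comments that attempt to debunk the post; the post's accuracy
    is never consulted.
    """
    return (post["likes"]
            + 2 * post["comments"]   # comments (even debunks) weigh more
            + 3 * post["shares"])    # shares spread content furthest

posts = [
    {"id": "accurate", "likes": 120, "comments": 10, "shares": 5},
    {"id": "false-claim", "likes": 80, "comments": 90, "shares": 40},
]

# Rank by engagement alone: the controversial false post wins
# despite having fewer likes than the accurate one.
ranked = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in ranked])  # → ['false-claim', 'accurate']
```

In this toy model, attempts to correct the false post by commenting on it raise its score and push it further up the feed, which is exactly the dynamic the paragraph above describes.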
“Echo chambers compound the problem,” explains social media researcher Dr. James Chen. “Children who interact with problematic content see progressively more of it, narrowing their perspective and making them resistant to correction.”
Platform design features exacerbate these issues. Popularity metrics favor content from widely followed creators, fake accounts spread misinformation at scale, and recommendation systems can lead users from benign content to harmful narratives. Other problematic design elements include ineffective content labeling, autoplay features, disappearing content that evades fact-checking, trending lists that amplify popular but potentially false information, and seamless sharing tools.
The rise of generative AI and deepfakes makes distinguishing truth from fiction increasingly difficult. Artificial intelligence enables scammers to create convincing but fraudulent content that can reach millions before being identified as false.
According to Ofcom, children’s news consumption habits are shifting, with 28% of 12-15 year-olds now using TikTok as a news source. Meanwhile, 79% believe news from family is “always” or “mostly” true, showing the continued importance of trusted relationships. Six in ten parents worry about their child being scammed, defrauded or lied to online.
There is hope in education. The NewsWise program, run by the National Literacy Trust, significantly improved children's ability to assess information accurately: the percentage of children who could correctly identify false or true news rose from 49.2% to 68%. This demonstrates how targeted media literacy education can equip young people with essential critical thinking skills for today's complex information environment.
As misinformation continues to evolve, developing children’s critical evaluation skills remains essential for their wellbeing and safety in digital spaces.
11 Comments
Distinguishing misinformation from disinformation is an important nuance. While both can be harmful, the motivations behind them differ and require different approaches to address.
Defining the difference between misinformation and disinformation is an important distinction. Misinformation spread by well-meaning but misguided people can be just as harmful as intentional disinformation campaigns.
You make a good point. Both types of false information can have serious consequences, underscoring the need for better digital literacy and fact-checking practices.
Interesting article on the challenges of misinformation in the digital age, especially for young people. The confidence-ability gap is concerning and highlights the need for better media literacy education to help kids critically evaluate online content.
Agreed, equipping kids with the right skills to spot false information is crucial. Social media platforms also have a responsibility to curb the spread of misinformation.
The statistics on kids’ overconfidence in their ability to spot false information are quite alarming. It’s clear more needs to be done to bridge that confidence-ability gap.
Agreed. Fostering critical thinking skills and media literacy from an early age could go a long way in empowering young people to navigate the digital landscape safely.
The finding that 70% of 12-17 year-olds think they can identify false information, yet a quarter failed the test, is quite troubling. More needs to be done to bridge that confidence-ability gap.
This article raises valid concerns about the impacts of misinformation on children’s mental health and physical wellbeing. Developing effective solutions to protect vulnerable young users should be a top priority.
This article highlights the vulnerabilities children face online when it comes to misinformation. I’m curious to learn more about the specific media literacy programs and tools being developed to address this issue.
Same here. Effective solutions will likely require a multi-pronged approach involving educators, policymakers, and tech companies working together.