In the digital age, technology is fundamentally reshaping how people access, consume, and interact with information, particularly scientific content, according to a comprehensive report published by the Royal Society. The study examines the complex relationship between digital technologies and information consumption patterns, highlighting both opportunities and challenges in the evolving online landscape.
The report, titled “The online information environment,” provides an in-depth analysis of how internet technologies are transforming society’s engagement with scientific information and potentially influencing decision-making on critical issues ranging from vaccine uptake to climate change response. As individuals increasingly rely on online sources for news and information, search engines and social media platforms have become powerful gatekeepers, significantly influencing what information reaches consumers.
New technologies, including micro-targeting algorithms and sophisticated synthetic content creation tools, together with the filter bubbles they can produce, are fundamentally altering the information ecosystem. While these technologies offer tremendous potential for education and entertainment, they simultaneously raise concerns about novel forms of online harm and diminishing public trust.
“Science stands on the edge of error and the nature of the scientific endeavour at the frontiers means there is always uncertainty,” explains Professor Frank Kelly, Chair of the report and Professor of the Mathematics of Systems at the University of Cambridge. He warns against oversimplification of scientific consensus, noting that “prodding and testing of received wisdom is integral to the advancement of science, and society.”
The report makes a critical distinction regarding approaches to misinformation. Rather than advocating for censorship or content removal—which could potentially undermine scientific processes and public trust—it recommends focusing on building population-wide resilience against harmful misinformation while promoting a healthier online information environment.
Scientific misinformation, defined as information presented as factually accurate but contradicting established scientific consensus, spreads through various actors with different motivations. These include well-meaning “Good Samaritans” who unknowingly share false information, “Profiteers” who prioritize engagement over accuracy, “Coordinated influence operators” working to advance specific agendas, and “Attention hackers” who create outlandish content for personal amusement.
The digital landscape has introduced new challenges through increasingly sophisticated technologies. “Deepfakes,” created using artificial intelligence techniques like generative adversarial networks, can produce convincing but entirely fabricated audio and visual content. Meanwhile, “shallowfakes”—existing content presented out of context or crudely edited—represent a simpler but equally problematic form of malinformation.
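For readers unfamiliar with the technique, the sketch below illustrates the adversarial training loop at the heart of a generative adversarial network, assuming PyTorch is available. It is a deliberately tiny toy that learns to mimic a one-dimensional Gaussian distribution; real deepfake pipelines use far larger face-specific models, but the generator-versus-discriminator dynamic is the same.

```python
# Minimal sketch of the GAN training loop behind deepfakes (toy 1-D example).
import torch
import torch.nn as nn

LATENT_DIM = 8  # size of the random noise vector fed to the generator

generator = nn.Sequential(nn.Linear(LATENT_DIM, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(),
                              nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # "Real" samples: a Gaussian centred at 4 stands in for genuine media.
    real = torch.randn(64, 1) + 4.0
    fake = generator(torch.randn(64, LATENT_DIM))

    # Discriminator learns to separate real samples from generated ones.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    d_opt.step()

    # Generator learns to fool the discriminator into scoring fakes as real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(generator(torch.randn(64, LATENT_DIM))),
                     torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# After training, generated samples cluster near the "real" distribution.
print("mean of generated samples:",
      generator(torch.randn(1000, LATENT_DIM)).mean().item())
```

The two networks improve in tandem: as the discriminator gets better at spotting fakes, the generator is forced to produce ever more convincing output, which is precisely what makes mature deepfakes hard to detect.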
Automated bots represent another technological dimension of the information environment. While some bots serve beneficial functions like countering misinformation or assisting people with disabilities, others can manipulate public opinion, suppress news stories, or facilitate online harassment.
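To make the bot discussion concrete, here is one hypothetical heuristic of the kind platforms combine with many other signals: flagging accounts whose sustained posting rate exceeds plausible human speed. The function name and thresholds are illustrative assumptions, not figures from the report.

```python
# Illustrative heuristic: flag accounts with implausibly high posting rates.
from datetime import datetime, timedelta

def looks_automated(post_times: list[datetime],
                    window: timedelta = timedelta(hours=1),
                    max_posts: int = 60) -> bool:
    """Return True if any sliding time window contains too many posts."""
    times = sorted(post_times)
    start = 0
    for end, t in enumerate(times):
        # Shrink the window until it spans at most `window` of time.
        while t - times[start] > window:
            start += 1
        if end - start + 1 > max_posts:
            return True
    return False
```

A single signal like this is easy for bot operators to evade, which is why real detection systems layer behavioural, network, and content features; the sketch only conveys the basic idea.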
In response to these challenges, emerging technologies are being developed to enhance content provenance. The Coalition for Content Provenance and Authenticity—an initiative supported by tech leaders including Adobe, Microsoft, the BBC, and Twitter—is developing technical specifications to help users determine content authenticity through metadata analysis.
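To illustrate the underlying provenance idea, the sketch below binds a file's hash and metadata to a signature so that any later edit becomes detectable. This is a deliberately simplified stand-in, not the C2PA specification, which embeds certificate-backed manifests in the file itself; the names here (make_manifest, verify_manifest, SECRET_KEY) are hypothetical.

```python
# Simplified illustration of content provenance via signed metadata.
import hashlib
import hmac
import json

SECRET_KEY = b"publisher-signing-key"  # stand-in for a real signing credential

def make_manifest(content: bytes, metadata: dict) -> dict:
    """Bind metadata to the exact bytes of a piece of content."""
    record = {"sha256": hashlib.sha256(content).hexdigest(), **metadata}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Re-derive the signature; any change to content or metadata breaks it."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    if hashlib.sha256(content).hexdigest() != claimed["sha256"]:
        return False  # content was altered after signing
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

image = b"...raw image bytes..."
manifest = make_manifest(image, {"creator": "Example Newsroom", "tool": "camera"})
print(verify_manifest(image, manifest))              # True
print(verify_manifest(image + b"edited", manifest))  # False: content altered
```

In a production scheme, public-key signatures would replace the shared secret so that anyone can verify a manifest without being able to forge one.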
The Royal Society has supplemented the report with a blog series featuring perspectives from industry leaders on regulatory approaches, media strategies against misinformation, and the role of knowledge institutions in this evolving landscape.
As digital technologies continue to advance, the report emphasizes that while misinformation isn’t new, the internet has dramatically accelerated its spread and impact. The findings underscore the delicate balance between embracing technological innovation and safeguarding the integrity of scientific information in public discourse.
8 Comments
The report raises important questions about the power dynamics at play in the online information ecosystem. Addressing algorithmic biases, filter bubbles, and synthetic content will be critical to safeguarding democratic discourse.
Agreed. Enhancing transparency and accountability around these systems is essential to ensure they serve the public interest, not narrow commercial or political agendas.
As the report highlights, the ubiquity of digital information brings both opportunities and challenges. Ensuring equitable access, fostering digital literacy, and building resilience against manipulation will be vital going forward.
Absolutely. Policymakers and tech companies have a responsibility to address the societal impacts of these trends proactively, while empowering citizens to navigate the online world more effectively.
This is a timely and crucial discussion. As our reliance on digital information sources grows, it’s vital that we develop robust frameworks to uphold the integrity of the information environment. The challenges are complex, but the stakes are high.
This is a complex issue with no easy solutions. Balancing the benefits of new technologies with the need to protect against misinformation and manipulation will require a multifaceted approach involving all stakeholders.
Fascinating look at the shifting information landscape and its implications for society. It’s important to stay vigilant about the risks of misinformation and undue influence, while also harnessing the potential of new tech for positive change.
Agreed. Maintaining a balanced, critical perspective on online information sources is crucial. Fact-checking and media literacy will be key as these trends continue evolving.