The race to understand online misinformation has reached a critical milestone with the development of a groundbreaking simulation framework by researchers at the University of Murcia. The team, led by Alejandro Buitrago López, Alberto Ortega Pastor, and David Montoro Aguilera, has created a sophisticated system that generates realistic synthetic social networks to study how false information spreads online.
Their innovation addresses a persistent challenge in misinformation research: the ethical and practical limitations of studying real-world social networks. Rather than mining actual user data, the framework creates artificial yet remarkably realistic online environments populated by computer-generated agents with distinct personality traits.
“This represents a significant step forward in our ability to understand the dynamics of online information spread,” said an independent researcher familiar with the work who wasn’t authorized to speak publicly about it. “Until now, we’ve been limited by privacy concerns and data access when studying real platforms.”
The system’s power lies in its sophisticated agent design. Each simulated user possesses demographic-based personality traits and follows predictable behavioral patterns governed by finite-state automata—essentially, rule-based systems that determine how agents will react to various stimuli. This approach allows for both realism and interpretability, a crucial balance for scientific research.
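The paper's actual automaton design isn't reproduced here, but the idea of rule-based agent behavior can be illustrated with a minimal finite-state automaton in Python. The states, stimuli, and personality fields below are hypothetical stand-ins, not the framework's real vocabulary:

```python
# Minimal finite-state-automaton sketch for a simulated user.
# States and stimuli are illustrative assumptions, not the paper's design.

TRANSITIONS = {
    # (current_state, stimulus) -> next_state
    ("idle", "sees_post"): "reading",
    ("reading", "agrees"): "sharing",
    ("reading", "disagrees"): "replying",
    ("sharing", "done"): "idle",
    ("replying", "done"): "idle",
}

class AgentFSM:
    def __init__(self, personality: dict):
        self.personality = personality  # demographic-based traits
        self.state = "idle"

    def react(self, stimulus: str) -> str:
        """Advance the automaton; unknown stimuli leave the state unchanged."""
        self.state = TRANSITIONS.get((self.state, stimulus), self.state)
        return self.state

agent = AgentFSM({"age_group": "25-34", "openness": 0.7})
print(agent.react("sees_post"))  # reading
print(agent.react("agrees"))     # sharing
```

Because the transition table is explicit data rather than learned weights, a researcher can inspect exactly why an agent behaved as it did, which is the interpretability benefit the article describes.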
Perhaps most impressive is the content generation system. Powered by large language models similar to those behind ChatGPT and Google’s Gemini, the framework produces contextually appropriate social media posts for each agent. These posts reflect the agent’s established profile and “remembered” experiences, creating an authentic simulation of online discourse.
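One plausible way such a system grounds generated posts in an agent's profile and memory is by assembling a structured prompt for the language model. The template and field names below are assumptions for illustration, not the framework's API:

```python
# Hypothetical prompt assembly for agent post generation.
# The template and profile fields are assumptions, not the framework's design.

def build_post_prompt(profile: dict, memories: list, topic: str) -> str:
    """Compose an LLM prompt grounding the post in profile and recent memories."""
    memory_text = "\n".join(f"- {m}" for m in memories[-3:])  # last few "remembered" events
    return (
        f"You are a social media user: a {profile['age']}-year-old "
        f"{profile['occupation']} who writes in a {profile['tone']} tone.\n"
        f"Recent experiences:\n{memory_text}\n"
        f"Write a short post about: {topic}"
    )

prompt = build_post_prompt(
    {"age": 34, "occupation": "teacher", "tone": "casual"},
    ["argued about a news article", "shared a meme"],
    "local election coverage",
)
print(prompt)
```

Feeding each agent its own profile and memory window is what keeps the generated discourse consistent over time, rather than producing interchangeable generic posts.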
The researchers have also implemented what they call a “red module” based on DISARM (Disinformation Analysis and Risk Management) workflows. This component allows for the simulation of coordinated disinformation campaigns, enabling researchers to study how malicious actors can manipulate online conversations and test potential countermeasures.
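A coordinated campaign can be thought of as a schedule: which operator agents post which narrative at which simulation tick. The sketch below is a guess at what such an injection component might look like; the class and field names are invented, not DISARM terminology:

```python
# Hypothetical "red module" campaign schedule; names are illustrative only.
from dataclasses import dataclass

@dataclass
class Campaign:
    narrative: str            # the false claim being pushed
    operator_agents: list     # agent ids controlled by the campaign
    start_tick: int           # simulation step when the campaign begins
    posts_per_tick: int = 2   # how aggressively it posts

def actions_at(campaign: Campaign, tick: int) -> list:
    """Return (agent_id, narrative) posting actions due at this tick."""
    if tick < campaign.start_tick:
        return []
    # Rotate through the operator pool so no single account posts constantly.
    k = campaign.posts_per_tick
    start = ((tick - campaign.start_tick) * k) % len(campaign.operator_agents)
    pool = campaign.operator_agents * 2  # allow wrap-around slicing
    return [(a, campaign.narrative) for a in pool[start:start + k]]

camp = Campaign("claim: vote by text", [10, 11, 12], start_tick=5)
print(actions_at(camp, 6))  # [(12, 'claim: vote by text'), (10, 'claim: vote by text')]
```

Separating the campaign description from the simulation loop is what lets researchers swap in different attack playbooks and test countermeasures against each one.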
Rigorous validation confirms the framework’s realism across multiple dimensions. Network analysis shows the synthetic platforms exhibit the same “small-world” properties found in real social networks—where most users can be connected through surprisingly few intermediate connections. The simulation also successfully reproduces phenomena like homophily, where users tend to connect with others similar to themselves.
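Homophily of the kind described here can be quantified simply: the fraction of edges whose endpoints share an attribute, compared against what random mixing would predict. A toy pure-Python sketch (the graph below is illustrative, not the paper's data):

```python
# Measuring homophily on a toy graph: share of edges linking same-group nodes.
# The example network is illustrative, not the framework's synthetic data.

def homophily_index(edges, group):
    """Fraction of edges whose two endpoints carry the same group label."""
    same = sum(1 for u, v in edges if group[u] == group[v])
    return same / len(edges)

group = {1: "A", 2: "A", 3: "A", 4: "B", 5: "B", 6: "B"}
edges = [(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (3, 4)]  # mostly within-group ties

print(homophily_index(edges, group))  # 5 of 6 edges are within-group, ~0.83
```

With two equal-sized groups, random mixing would put only about half of the edges within-group, so a value near 0.83 signals the clustering-by-similarity the validation reportedly reproduces.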
In experiments, the team generated networks with over 10,000 agents, producing more than 7.7 million posts. The simulated users displayed realistic behaviors, including increased emoji usage over time and tendencies to engage with abusive content in predictable ways. Interestingly, abusive agents naturally gravitated toward central positions in the network—a pattern observed in real-world platforms that has concerning implications for online harm.
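The claim that abusive agents drift toward the center of the network is the kind of thing one would check with a centrality measure. A minimal degree-centrality comparison, sketched on a toy graph (the hub-and-spoke example and the "abusive" label assignment are assumptions for illustration):

```python
# Comparing degree centrality of abusive vs. other agents on a toy network.
# Graph shape and labels are illustrative, not the experiment's 10,000-agent data.
from collections import Counter

def degree_centrality(edges):
    """Degree of each node divided by the maximum possible degree (n - 1)."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    n = len(deg)
    return {node: d / (n - 1) for node, d in deg.items()}

edges = [(0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (1, 2), (3, 4)]
abusive = {0}  # the hub node plays the abusive agent here

cent = degree_centrality(edges)
mean_abusive = sum(cent[a] for a in abusive) / len(abusive)
others = [cent[n] for n in cent if n not in abusive]
mean_other = sum(others) / len(others)
print(mean_abusive > mean_other)  # True
```

In a real analysis one would likely use betweenness or eigenvector centrality as well, but even plain degree centrality is enough to surface the hub pattern the article describes.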
The system includes a Mastodon-based visualization layer that allows researchers to observe agent interactions in real-time and validate the simulation’s accuracy. This feature transforms abstract data into an intuitive interface that resembles familiar social media platforms.
Industry experts suggest this framework could have far-reaching applications beyond academic research. Social media companies might use similar approaches to test new content moderation policies before deployment. Government agencies could employ such simulations to better understand the dynamics of foreign influence operations and develop more effective countermeasures.
Despite its innovations, the research team acknowledges limitations in the current implementation, particularly around scalability and content evaluation. They’re continuing to refine the system to support even larger simulations and more sophisticated agent behaviors.
The development comes at a critical time, as concerns about online misinformation have intensified globally. With major elections approaching in several countries, tools that help understand and potentially counter coordinated disinformation campaigns could prove invaluable to maintaining information integrity.
The research team’s work represents a significant contribution to the growing field of computational social science, where computer simulations help unravel complex social phenomena that would be difficult or impossible to study through traditional methods.
12 Comments
While I applaud the potential of this simulation approach, I wonder about ethical concerns around creating synthetic social networks, even for research purposes. How do the researchers ensure the agents’ behaviors don’t become too realistic or harmful?
I’m skeptical about how well this synthetic social network can truly capture the complexity of human behavior and information dynamics online. While it’s a step forward, I wonder if there are still limitations in terms of accurately modeling social influence and virality.
That’s a fair critique. Replicating the nuances of real-world social interactions is an immense challenge. This simulation is likely a work in progress, but a valuable one for advancing our understanding of online disinformation.
This is an exciting development in the fight against online disinformation. Applying it to study the spread of misinformation around energy and natural resources topics could yield important insights for those industries.
As someone who follows the commodities and energy sectors, I’m curious how this simulation framework could be applied to analyze the spread of misinformation around topics like mining, metals, or renewable energy. Might be an interesting angle to explore.
That’s an excellent point. Disinformation can have significant real-world impacts on natural resource industries and energy markets. Applying this simulation approach to those domains could yield important insights.
This simulation approach seems very promising for studying online disinformation. By creating realistic synthetic social networks, researchers can gain valuable insights without the ethical and practical challenges of working with real user data.
Agreed. Modeling individual user behaviors and network dynamics in a controlled environment is a smart way to advance our understanding of how misinformation spreads.
I’m curious to learn more about the specific techniques used to design the synthetic user agents and model their behaviors. The level of realism they achieve will be key to the validity of any insights generated from this framework.
Agreed. The agent design is the core innovation here. Understanding how they incorporate realistic demographic traits, social psychology factors, and information sharing patterns will be crucial.
As someone who follows the mining and metals sectors, I’m very interested in how this simulation could be used to analyze the spread of misinformation around things like supply chain disruptions, commodity price movements, or environmental regulations. Lots of potential applications there.
Fascinating work. The ability to generate realistic test environments for studying disinformation campaigns is a critical need. This framework seems like a promising approach, though the true test will be how well it correlates with actual social media dynamics.