AI Expert Warns About Misinformation Challenges While Highlighting Technology Benefits
Artificial Intelligence has become deeply integrated into everyday life, offering significant efficiency gains, but its reliability hinges entirely on the quality of data used in training, according to a leading academic from Kosovo’s premier university.
In a recent episode of “Përballje Podcast,” Mërgim Hoti from the Faculty of Electrical and Computer Engineering at the University of Pristina emphasized that AI should be viewed as a productivity tool rather than a societal threat.
“Artificial intelligence is already part of our lives. It should not be seen as an obstacle, but as something that has accelerated our procedures and daily activities,” Hoti explained during the discussion, which was reported by local news outlet Telegrafi.
The engineering expert highlighted a troubling statistic regarding information integrity in digital spaces, noting that “disinformation is spread about six times more than accurate information due to the use of artificial intelligence.” This multiplier effect represents one of the most significant challenges facing technology platforms and information ecosystems today.
At the heart of AI’s reliability problem lies the training process, Hoti explained. The quality of an AI system’s outputs directly correlates with the quality of inputs used during development. “If it’s trained with disinformation, don’t expect us to have received accurate information,” he cautioned.
This insight is particularly relevant as generative AI tools such as ChatGPT, Bard, and other large language models become increasingly accessible to the general public. These systems, which operate on statistical patterns rather than factual understanding, have demonstrated remarkable capabilities but also significant limitations in factual accuracy.
Hoti elaborated on the technical underpinnings of contemporary AI, explaining that these systems work “on the basis of algorithms and statistical data.” This statistical approach to knowledge means that AI systems don’t truly “understand” information in a human sense, but rather reproduce patterns observed in training data.
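To make that dynamic concrete, consider a minimal sketch in Python: a toy bigram model trained on an invented four-sentence corpus. This is nothing like a production language model in scale or architecture, and every string in it is made up for illustration, but it shows the mechanism Hoti describes: the model only counts which words follow which, so whichever claim appears most often in the training data wins.

```python
from collections import Counter, defaultdict

# Invented toy corpus: the model has no notion of truth, only of frequency.
# The false claim appears more often, so the model will tend to repeat it.
corpus = [
    "the earth is flat",
    "the earth is flat",
    "the earth is flat",
    "the earth is round",
]

# Count bigram transitions: for each word, which words follow it and how often.
transitions = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        transitions[current][nxt] += 1

def complete(prompt_word, steps=3):
    """Greedily continue a sentence by always picking the most frequent next word."""
    out = [prompt_word]
    for _ in range(steps):
        followers = transitions[out[-1]]
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(complete("the"))  # -> "the earth is flat": the majority pattern wins
```

Because the false claim outnumbers the true one three to one in this invented corpus, greedy completion reproduces it every time. Real large language models are vastly more capable, but their outputs are still shaped by the frequency of patterns in their training data rather than by the truth of those patterns.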
The solution to these challenges, according to Hoti, remains human oversight. “There should always be a person behind an AI model who verifies whether the model has any errors or mistakes,” he emphasized, highlighting the continued importance of human judgment in technological systems.
Looking toward the future, Hoti painted a picture of an ongoing technological struggle over information integrity. “It will be a war between an AI that misinforms and another AI that tries to remove incorrect information,” he concluded.
This prediction aligns with emerging trends in the technology sector, where significant resources are being allocated to developing AI systems capable of detecting synthetic content, identifying misinformation, and verifying factual claims. Companies like Google, Meta, and OpenAI are simultaneously developing both generative capabilities and detection mechanisms to address these concerns.
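As a rough illustration of what the detection side of that contest looks like, the sketch below trains a bag-of-words classifier to flag suspicious phrasing. It uses scikit-learn, the handful of labeled examples are invented, and real detection systems are far more sophisticated (and far less reliable than a toy like this suggests); it is meant only to show the general shape of the approach, not any company's actual method.

```python
# A toy misinformation flagger: TF-IDF features plus logistic regression.
# All training texts and labels below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "miracle cure doctors do not want you to know about",      # invented
    "shocking secret they are hiding from you",                # invented
    "peer reviewed study reports a modest measurable effect",  # invented
    "statistics office publishes routine quarterly figures",   # invented
]
labels = [1, 1, 0, 0]  # 1 = looks like misinformation, 0 = looks reliable

# Pipeline: turn text into TF-IDF word weights, then fit a linear classifier.
detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)

# Words like "shocking" and "miracle" occur only in the flagged examples,
# so the classifier will most likely assign this input to class 1.
print(detector.predict(["shocking miracle cure they are hiding"]))
```

Note the circularity implied by Hoti's argument: a detector like this is itself only as good as its labeled training data, so the same garbage-in, garbage-out concern applies on the defensive side too.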
As AI continues to evolve and integrate further into information systems, media platforms, and decision-making processes, Hoti’s insights underscore the dual nature of these technologies—offering remarkable efficiencies while simultaneously presenting new challenges for information integrity and public discourse.
Striking a balance between leveraging AI’s benefits and mitigating its risks remains one of the central technological and policy challenges of the current era, one that requires cooperation among technologists, policymakers, and the public.
8 Comments
Interesting insights on the rapid spread of disinformation vs. accurate info. AI is a powerful tool, but quality of training data is critical to ensuring reliability and trustworthiness. We need to be vigilant about fact-checking and curbing the amplification of misinformation online.
Agreed, the speed at which misinformation can spread is alarming. Robust fact-checking and data integrity measures are essential to harnessing the benefits of AI while mitigating the risks.
The exponential spread of disinformation is deeply concerning. AI has incredible potential, but if the underlying data is flawed, the outputs can be highly misleading. Rigorous validation and fact-checking protocols are vital to ensure AI is a force for good, not harm.
This statistic on the disinformation multiplier effect is alarming. It underscores the urgent need for greater transparency and accountability around AI systems. We must find ways to amplify truth and fact-based information at the same scale as misinformation.
Absolutely. Mitigating the amplification of falsehoods should be a top priority for platform owners and policymakers. Proactive measures to boost digital media literacy are also critical.
Hoti’s warning about the disproportionate spread of disinformation is a sobering reminder of the challenges we face in the digital age. While AI brings many benefits, the responsibility to ensure its integrity and responsible deployment lies with both developers and users.
This disinformation challenge highlights the need for greater digital literacy and critical thinking skills. As AI becomes more ubiquitous, empowering people to discern truth from fiction will be crucial. Platforms also have a responsibility to prioritize accuracy and transparency.
Well said. Equipping the public with the tools to navigate the information landscape, while also holding tech companies accountable, will be key to combating the spread of misinformation.