In an unprecedented effort to combat misinformation produced by artificial intelligence, a tech industry leader has assembled what they describe as a global team of one million experts dedicated to fact-checking AI-generated content.
The initiative, revealed exclusively to The Times, represents one of the largest coordinated attempts to address the growing concern of AI hallucinations and factual inaccuracies that have plagued large language models since their mainstream adoption.
“We’re facing a critical moment where the line between AI-generated content and human-created information is increasingly blurred,” said the project’s founder, speaking on condition of anonymity due to the sensitive nature of the work. “Our million-strong team comprises experts from diverse fields including science, history, medicine, law, and journalism.”
The massive operation functions through a sophisticated workflow system where AI-generated outputs are flagged and routed to relevant subject matter experts for verification. The team operates across multiple time zones, ensuring 24-hour coverage of content verification.
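The article does not describe the routing system in technical detail, but the flag-and-route workflow it outlines could be sketched roughly as follows. Everything here — the queue structure, topic labels, and function names — is a hypothetical illustration, not part of the project described above.

```python
from collections import defaultdict

# Illustrative only: queues of flagged outputs awaiting review,
# keyed by subject-matter topic (e.g. "medicine", "law").
EXPERT_QUEUES = defaultdict(list)

def route_for_review(output_id: str, topic: str, flagged: bool):
    """Place a flagged AI output into the review queue for its topic.

    Returns the topic queue the item was routed to, or None if the
    output was not flagged and needs no human review.
    """
    if not flagged:
        return None
    EXPERT_QUEUES[topic].append(output_id)
    return topic

# Example: a flagged medical claim lands in the "medicine" queue,
# where a subject-matter expert would pick it up for verification.
queue = route_for_review("out-001", "medicine", flagged=True)
```

In a real deployment the queues would presumably be backed by a task system spanning time zones, as the article suggests, rather than in-memory lists.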
Industry analysts note that this development comes at a crucial time. Recent studies from Stanford University’s AI Index Report showed that 68% of consumers have encountered AI-generated misinformation, with many unable to distinguish it from factual content. The economic impact of AI misinformation has been estimated at billions of dollars annually across industries ranging from healthcare to financial services.
Dr. Elena Sorokin, an AI ethics researcher at Oxford University not involved in the project, called the initiative “ambitious but necessary.” She added, “The scale of AI-generated content is growing exponentially. Traditional fact-checking mechanisms simply cannot keep pace without a dramatic expansion of human oversight.”
The team’s composition reflects global diversity, with experts from 87 countries working in 46 languages. This international approach aims to address cultural nuances and regional contexts that AI systems often miss. Particular attention has been given to recruiting experts from the Global South, where AI training data is often underrepresented.
Funding for the massive undertaking remains somewhat mysterious, though sources familiar with the project suggest a consortium of technology companies and nonprofit organizations has pooled resources. The annual operating cost is estimated to exceed $400 million.
Critics question the sustainability of such a human-intensive approach. “While admirable, this seems like putting a Band-Aid on a systemic issue,” said Marcus Wong, director of the Center for Responsible AI at the National University of Singapore. “The real solution lies in developing AI systems that are inherently more accurate and transparent about their limitations.”
The initiative has also raised questions about power and influence. Some technology ethicists worry about concentrating fact-checking authority within a single organization, even one with diversified expertise.
However, early results appear promising. In a pilot program involving three major news organizations, AI-generated content reviewed by the fact-checking team showed a 94% reduction in factual errors compared to unreviewed content. These improvements were particularly notable in specialized fields like medicine and law.
The project’s leadership has established clear boundaries regarding their role. “We’re not arbiters of truth or censors,” the founder emphasized. “Our job is simply to verify factual claims and provide context where AI systems make demonstrably false statements.”
Looking ahead, the organization plans to make its verification API available to select partners by early next year, potentially allowing integration with popular AI systems and content platforms.
As generative AI continues to reshape information landscapes across journalism, education, and public discourse, this massive human fact-checking initiative highlights both the promise and limitations of current AI technologies. While machines can generate content at unprecedented scale, human judgment remains essential for ensuring accuracy and trustworthiness.
Whether this million-person approach represents a stopgap measure or a long-term solution remains to be seen, but it underscores the complex challenges at the intersection of artificial intelligence and information integrity in our increasingly AI-mediated world.