Northwestern Professor Tackles Fake News with AI-Powered News Recommendations
Professor Edward Malthouse of Northwestern University’s Medill School has embarked on groundbreaking research using artificial intelligence to combat misinformation while helping readers find relevant news content tailored to their interests.
“I think one of the most pressing problems of the day is misinformation,” Malthouse said. “There’s a lot of misinformation out there, and it’s hard to stay ahead of it.”
The National Science Foundation recognized the importance of this work in 2023, awarding Malthouse and collaborators from the University of Minnesota, Twin Cities a significant grant. Their project, “A Research News Recommender Infrastructure with Live Users for Algorithm and Interface Experimentation,” aims to develop AI systems that can effectively personalize news content without altering the original reporting.
“We’re not rewriting any articles,” Malthouse explained. “We’re helping you find stories of interest that match your topical interests using AI systems.”
The grant has enabled Malthouse to fund doctoral research focused on using large language models (LLMs) for both news recommendations and fake news detection. His team has discovered a promising approach to identifying misinformation.
“If you ask an LLM to evaluate a text using these traditional fact-checking questions, and then you bring back the responses to those questions, you can more accurately identify fake news articles than if you just did it from basic machine learning approaches,” he said.
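The approach Malthouse describes can be sketched in outline: pose traditional fact-checking questions to an LLM about an article, then use the answers as features for a downstream classifier rather than classifying the raw text directly. The sketch below is a hypothetical illustration only; the question list and the `ask_llm` placeholder are assumptions, and a real system would call an actual LLM and train a classifier on labeled data.

```python
# Hypothetical sketch of LLM-assisted fake news detection: convert an
# LLM's answers to fact-checking questions into a feature vector.
# `ask_llm` is a placeholder (assumption) standing in for a real LLM call.

FACT_CHECK_QUESTIONS = [
    "Does the article cite named, verifiable sources?",
    "Are the claims consistent with independently reported facts?",
    "Does the headline accurately reflect the body text?",
    "Does the article rely on emotionally charged, sensational language?",
]

def ask_llm(question: str, article: str) -> float:
    """Placeholder: a real system would query an LLM with the question
    and the article text, then parse its answer into a score in [0, 1].
    Here we return a neutral value so the sketch runs on its own."""
    return 0.5

def misinformation_features(article: str) -> list[float]:
    """Build one feature per fact-checking question; these features
    would feed a classifier instead of the raw article text."""
    return [ask_llm(q, article) for q in FACT_CHECK_QUESTIONS]

features = misinformation_features("Example article text...")
print(len(features))  # one feature per question
```

The design point is that the classifier sees structured, interpretable signals (answers to verification questions) rather than raw text, which Malthouse's team found outperformed basic machine learning approaches applied directly to the articles.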
Malthouse brings unique cross-disciplinary expertise to this challenge. After earning a PhD in statistics, he began teaching at Northwestern’s Kellogg School of Management in 1995 before joining the Integrated Marketing Communications faculty in 1997. He currently serves as the Erastus Otis Haven Professor at Medill and has directed the Spiegel Research Center since 2012.
The Spiegel Research Center analyzes consumer behavior across platforms to provide valuable insights to companies in marketing and advertising. “It’s about understanding how newer forms of customer engagement affect financial outcomes,” Malthouse noted.
His positions with Spiegel and Medill’s Local News Initiative allow Malthouse to incorporate real-world research into his teaching. Students benefit from access to actual data sets from news organizations and other companies—a rarity in academic settings.
“It’s unusual for students to get access to these data sets from real companies,” he said. “I’m really grateful to the news organizations and other organizations that have provided this data to give to the students.”
As news organizations grapple with the implications of artificial intelligence, Malthouse provides historical perspective on technological adoption in journalism. He recalls faculty debates from two decades ago about whether students should be permitted to use Wikipedia as a source, drawing parallels to current concerns about AI.
Malthouse advocates for responsible AI adoption in newsrooms, seeing it as a tool to enhance productivity rather than replace traditional journalistic methods. He emphasizes that AI should supplement newsgathering while maintaining rigorous verification standards.
“You need to be taught ways to corroborate what you find, and find independent sources that say the same thing,” he advised. “I think it can be a very powerful tool to enhance your productivity, but you have to check everything first.”
The research comes at a critical moment for the news industry, which faces dual challenges of declining revenues and rising misinformation. By developing tools that both combat fake news and help readers discover trustworthy content aligned with their interests, Malthouse’s work addresses fundamental issues threatening journalism’s sustainability and democratic function.
As the media landscape continues to evolve rapidly, collaborations between academic researchers and news organizations may become increasingly vital. Malthouse’s interdisciplinary approach—combining statistics, marketing analysis, and journalism studies—represents the kind of innovative thinking needed to navigate the complex intersection of technology, media economics, and information integrity in the digital age.
11 Comments
Kudos to Professor Malthouse and the team for their work on this important project. Combating misinformation is a complex challenge, and leveraging AI for personalized news recommendations could be a valuable tool in that fight.
I agree. The focus on maintaining the integrity of the original news content is crucial. It will be interesting to see how they balance personalization with preserving journalistic standards.
The idea of using AI to help readers find relevant news content without altering the original reporting is an intriguing one. I’ll be following this project with great interest to see how it evolves and what impact it may have on the media landscape.
Personalized news recommendations driven by AI could be a game-changer for media consumers. Helping people discover information tailored to their interests, without altering the original reporting, seems like a balanced and effective solution.
Agreed. Maintaining the integrity of the original news content is crucial. I’m glad the researchers are taking that approach.
Kudos to Professor Malthouse and the team for securing this important grant. Combating misinformation is a critical challenge, and leveraging AI to improve news discovery and consumption is a smart move.
The focus on using large language models for both recommendations and misinformation detection is particularly intriguing. I’m curious to see how they balance personalization with maintaining accuracy and objectivity.
Personalized news recommendations powered by AI have a lot of potential, but also raise concerns about filter bubbles and echo chambers. I hope the researchers find ways to promote diverse perspectives and fact-based journalism through their work.
Addressing misinformation is a crucial challenge, and this data-driven approach sounds promising. I’m curious to learn more about the specific techniques the team plans to use for personalization and misinformation detection using large language models.
This is an intriguing approach to combating misinformation. Using AI-powered news recommendations to help readers find relevant, factual content is a smart way to tackle a growing problem. I’m curious to see how the project evolves and what insights it yields.
This is a fascinating project. Helping readers find relevant news content while preserving the original reporting is a delicate balance. I’m eager to see the results of this research and how it could shape the future of news consumption.