Police in Oregon are warning about a new threat to public safety: artificial intelligence apps that monitor police radio frequencies, generate blog posts about incidents, and spread dangerous misinformation in the process.
Apps like CrimeRadar, which convert police radio chatter into AI-written content, are creating serious problems by misinterpreting officer communications and publishing false information that quickly spreads across social media platforms, according to a report by Central Oregon Daily News.
One alarming example involved a community outreach program called “Shop with a Cop,” where officers take children shopping for holiday gifts. The AI misinterpreted this as “shot with a cop” and generated content suggesting an officer had been wounded in the line of duty.
“That’s scary for our community,” said Bend police communications manager Sheila Miller. “It’s really scary for police spouses or police family members. And it’s just wrong. And they don’t… there’s no accountability.”
This issue extends beyond Oregon and CrimeRadar. Earlier this year, an investigation by 404 Media discovered that the popular crime-awareness app Citizen was using AI to write alerts without human review, resulting in factual errors and even the exposure of sensitive personal data, including license plate numbers.
“The next iteration was AI starting to push incidents from radio clips on its own,” an insider source at Citizen told 404 Media. “There was no analyst or human involvement in the information that was being pushed in those alerts until after they were sent.”
The problem represents a growing intersection between law enforcement and artificial intelligence technologies. While many police departments have embraced AI for various applications intended to improve efficiency, these same technologies are creating new challenges that affect both police operations and public safety.
The rise of AI-generated misinformation about police activities comes as law enforcement already grapples with other AI-related issues. Reports have emerged of children using AI image manipulation tools to create convincing fake scenarios that prompt unnecessary 911 calls. Simultaneously, some departments have faced criticism for using facial recognition technology that has led to wrongful arrests based on algorithmic misidentifications.
Law enforcement experts are particularly concerned about the potential for these technologies to exacerbate existing problems with online misinformation. As AI tools become more sophisticated and widely available, the potential for misuse grows exponentially.
Other AI technologies raising alarm include image generation tools like Google’s Nano Banana app, which some experts fear could be used to frame innocent people for crimes. Meanwhile, voice cloning technology is already being exploited by scammers to impersonate victims in sophisticated phishing schemes, prompting warnings from federal agencies including the FBI.
Despite these growing concerns, police radio monitoring apps continue operating within a regulatory vacuum. Eric Magidson, an IT professor at Central Oregon Community College, told Central Oregon Daily News that without targeted legislation, these problematic applications will likely continue proliferating.
The situation highlights the broader challenge facing society as AI technology evolves faster than regulatory frameworks can adapt. While AI offers potential benefits for law enforcement, including efficiency improvements and data analysis capabilities, the unintended consequences are becoming increasingly apparent.
For communities already navigating complex relationships with law enforcement, the spread of AI-generated misinformation about police activities adds another layer of potential distrust and confusion. As these technologies continue developing, striking the right balance between innovation and responsible use remains a critical challenge for technology companies, law enforcement agencies, and policymakers alike.