A recent investigation has exposed a concerning trend in artificial intelligence systems that could endanger reproductive healthcare access for Wisconsin residents. The nonpartisan watchdog group Campaign for Accountability found that popular AI platforms are spreading potentially harmful misinformation about abortion resources when users seek help online.
The investigation revealed that 70% of responses from major AI chatbots, including ChatGPT, Google, Meta AI, Grok, and Perplexity, directed users to Heartbeat International, an anti-abortion organization with over 3,600 "pregnancy help affiliates" worldwide. In half of these instances, Heartbeat International's hotline was presented as the only resource, and it was often falsely portrayed as a source of unbiased, authoritative medical information.
The problem is particularly acute in Wisconsin, where crisis pregnancy centers (CPCs) outnumber actual abortion providers by more than three to one. With over 50 CPCs statewide, these facilities—typically staffed by individuals with minimal medical training—have long been criticized for targeting vulnerable pregnant people to prevent them from accessing abortion care.
“By ensuring the phone number women dial when seeking guidance is answered by anti-abortion activists—rather than the woman’s doctor or a medical professional—the anti-choice industry seeks to prevent these unbiased conversations from taking place,” Campaign for Accountability researchers stated in their report.
The misinformation campaign appears to be deliberate rather than accidental. For over a decade, Heartbeat International has collected personal data from people seeking online abortion resources, using this information to enhance its content management system. This strategy helps affiliated CPCs increase their digital reach, create SEO-friendly websites, and appear prominently in search results—which subsequently trains AI systems to amplify their messaging.
"A coordinated group of ideologues may be able to influence AI outputs by producing a far greater volume of content than authoritative, science-based answers," the researchers warned, noting that this issue could extend beyond abortion to other health concerns, particularly as Robert F. Kennedy Jr.'s Department of Health and Human Services promotes controversial theories about autism and vaccine safety.
The findings come amid intensifying national debates over AI regulation. Earlier this year, an amendment to the federal budget reconciliation bill proposed barring states from enforcing AI laws or regulations for a full decade. In response, Wisconsin Attorney General Josh Kaul joined a bipartisan coalition of over 35 state attorneys general opposing the measure.
“States shouldn’t be barred from acting to stop harms associated with the use of AI,” Kaul said when joining the coalition in May. “I strongly oppose this proposal, which would benefit the AI industry—and in particular those who misuse AI—with serious costs to those who are harmed.”
While that specific amendment was not added to the Republican-backed reconciliation bill passed over the summer, President Trump recently issued an executive order creating a task force dedicated to challenging state AI regulations and restricting broadband funding for states with “overly burdensome” AI laws.
The executive order is expected to face legal challenges, but proponents of AI regulation worry it offers technology companies a clearer path toward evading accountability for harm caused by their systems.
“Prohibiting states from putting in place laws that can help protect against dangers associated with AI would be a major mistake. Congress shouldn’t be sacrificing the interests of the public as a whole in order to benefit big tech,” Kaul said.
Michelle Kuppersmith, executive director of Campaign for Accountability, warns that without state-level protections, AI will likely spread misinformation about more than just crisis pregnancy centers.
“If some AI models continue to prefer information quantity over quality when answering ‘hot button’ medical questions, the vulnerabilities spotlighted in our report likely extend far beyond the topic of abortion,” Kuppersmith said. “Given that we are now seeing once trustworthy entities like HHS prioritizing ideology over science, AI purveyors must be mindful to ensure their training methods are not leading searchers actively toward medical harm.”
For Wisconsin residents, particularly those in rural areas where abortion providers are scarce, the ability to access accurate healthcare information online is increasingly critical—making the push for appropriate AI safeguards an urgent matter of public health.
10 Comments
It’s disturbing to see AI systems directing vulnerable people to anti-abortion organizations instead of legitimate medical resources. This undermines informed decision-making and access to essential healthcare services. Stricter controls are urgently needed.
While AI can be a powerful tool, this situation demonstrates the risks of deploying it without proper safeguards. Lawmakers must work with experts to develop clear guidelines that ensure these systems provide accurate, unbiased information on reproductive healthcare.
Absolutely. AI should be leveraged to empower and inform people, not deceive or manipulate them on critical healthcare issues.
This is deeply concerning. AI chatbots spreading misinformation about reproductive healthcare access is extremely dangerous and unethical. Lawmakers must act quickly to implement robust regulations and oversight to prevent this from happening.
Wisconsin’s disproportionately high number of crisis pregnancy centers compared to actual abortion providers is deeply troubling. AI chatbots compounding this issue by funneling people to these facilities is an unacceptable violation of reproductive rights.
Absolutely. This situation underscores the need for comprehensive reproductive healthcare access and strong regulations to prevent the exploitation of technology.
Crisis pregnancy centers have a long history of spreading misinformation and targeting vulnerable individuals. It’s very concerning that AI chatbots are now amplifying and legitimizing their harmful agenda. This must be addressed urgently.
Agreed. Allowing AI to direct people to these biased, unregulated facilities is a serious breach of public trust and ethics.
While technology can be a powerful tool, this incident shows the critical importance of responsible AI development and deployment, especially on sensitive topics. Rigorous testing, oversight, and accountability measures are essential to protect public wellbeing.
This highlights the critical need for AI transparency and accountability. Chatbots should not be providing biased or misleading information on sensitive topics like abortion. Rigorous testing and content moderation are essential to protect public wellbeing.