AI-Powered “Social Scraping” Scams Surge as Criminals Exploit Personal Data
The rise of artificial intelligence has created a new threat for social media users: criminals are increasingly employing sophisticated “social scraping” techniques to harvest personal information from platforms like Facebook, Instagram, and X. An ABC7 I-Team investigation has revealed how scammers are leveraging AI to efficiently collect and weaponize information shared on social media profiles.
According to digital behavior analysts at cybersecurity firm BioCatch, criminals are now using artificial intelligence to gather information at unprecedented speed and scale. Jonathan Frost of BioCatch explained why the technology appeals to criminals: “It doesn’t get lazy. It doesn’t get tired. It will work 24/7.”
This technological advancement allows scammers to rapidly collect, process, and format information into convincing deceptive schemes. “Criminals are using the same sorts of technologies that legitimate businesses are to improve the quality of their deceptions,” Frost noted during his interview with the I-Team.
The scope of the problem appears to be growing rapidly. BioCatch’s 2025 report indicates a 65% increase in global scam attempts, many of which rely on information gleaned from victims’ social media accounts. Common personal details harvested include seemingly innocent information such as a mother’s maiden name, pet names, and favorite restaurants—all data points that can be used to impersonate someone convincingly.
Once armed with this information, scammers execute elaborate schemes such as fake kidnapping scenarios or fabricated car accidents designed to trick family members into sending money urgently. The effectiveness of these scams relies heavily on creating a sense of panic.
“Fundamentally, fraudsters are always about urgency. They always want to create a situation in which you’re going to fail to apply the normal good sense that you would,” Frost explained, highlighting the psychological tactics employed by these criminals.
The integration of AI technology has made these schemes increasingly sophisticated. Earlier in 2024, the ABC7 I-Team demonstrated how artificial intelligence could compile voice samples from social media posts to create convincing voice clones. In a controlled experiment, tech experts from Transaction Network Services used AI to clone a reporter’s voice from news broadcasts and tested the synthetic audio on unsuspecting customers at a local café, revealing the potential for voice-based impersonation scams.
These developments represent a significant evolution in the cybercrime landscape. While social engineering and phishing have existed for years, the addition of AI tools has dramatically increased both the scale and effectiveness of such operations. Criminals can now deploy automated systems to scan thousands of profiles simultaneously, creating detailed dossiers on potential victims with minimal human intervention.
The financial services industry has taken notice of this trend. Many banks and payment platforms are enhancing their fraud detection systems specifically to identify transactions that may result from social scraping scams. Red flags typically include unusual transfer requests to unfamiliar accounts, especially when accompanied by urgent circumstances.
Cybersecurity experts recommend several protective measures. First, users should carefully review their privacy settings on all social media platforms and limit the personal information they share publicly. Security questions for financial accounts should avoid using information that might be discoverable through social media posts.
“I think any form of unexpected approach, whether through a social media messaging platform, an unexpected connection request, or an unexpected phone call should be treated with significant doubt,” advised Frost.
For those who receive suspicious calls claiming a loved one is in danger, experts recommend remaining calm rather than acting immediately. The best practice is to end the call and directly contact the supposedly affected family member through established means of communication. Families are also encouraged to establish security protocols, such as predetermined “safe words” that can quickly verify a caller’s identity during emergencies.
As AI technology continues to evolve, security experts predict that social scraping techniques will likely become even more sophisticated, making digital literacy and cautious online behavior increasingly essential skills for social media users.


17 Comments
The article highlights a very worrying trend – the use of AI to automate the collection of personal data from social media for criminal purposes. This poses a significant risk to users. Robust cybersecurity measures, enhanced data privacy regulations, and public awareness campaigns are crucial to addressing this evolving threat.
Absolutely. The speed and scale of these AI-driven social scraping operations make them extremely dangerous. Comprehensive action is needed across multiple fronts to protect social media users from these deceptive schemes.
This news about AI-powered social media scams is deeply alarming. The ability of criminals to rapidly harvest personal data and craft convincing deceptions is a serious threat that demands urgent attention. Stronger data protection, improved platform policies, and public education will all be essential to combating this growing problem.
Disturbing to see how sophisticated these AI-driven social media scams have become. The ability to rapidly collect and leverage personal information is incredibly powerful in the hands of criminals. Robust data security and user education will be critical to combating this emerging threat.
Absolutely. The scale and speed of these operations is alarming. Social media platforms, cybersecurity firms, and government agencies will need to work together to develop effective countermeasures.
Criminals leveraging AI to automate social media scraping for personal data is a deeply concerning development. The speed and scale of these operations make them extremely dangerous. Robust data protection, improved social media platform policies, and public awareness campaigns are critical to addressing this threat.
This is a concerning development. Criminals exploiting AI to automate social media scraping for personal data is a serious threat that needs to be addressed. Increased cybersecurity measures and public awareness campaigns will be crucial to combat these evolving scams.
You’re right, the speed and scale of these AI-powered scraping operations make it much harder to protect against. Stronger data privacy laws and social media platform policies are needed to curb this growing problem.
The rise of AI-enhanced social engineering scams is deeply troubling. Criminals leveraging artificial intelligence to rapidly gather personal data and craft convincing deceptions poses a major risk to social media users. This requires urgent attention from tech companies, regulators, and law enforcement.
Agreed, the article highlights how quickly this threat is escalating. Proactive action is needed before these scams become even more widespread and devastating.
The article paints a troubling picture of how criminals are exploiting AI to streamline the collection of personal data from social media. This automated, large-scale social scraping poses a serious risk to users. Urgent action is needed from tech companies, regulators, and law enforcement to combat these evolving cyber threats.
I agree, the threat is escalating rapidly and requires a comprehensive response. Protecting user privacy and security needs to be the top priority in addressing this issue.
I’m concerned by the findings that criminals are exploiting AI to streamline the process of harvesting personal data from social media. This type of automated, large-scale social scraping is extremely dangerous and needs to be stopped. Protecting user privacy should be a top priority.
This news about criminals leveraging AI to streamline social media scraping for personal data is deeply concerning. The ability to rapidly collect and exploit this information poses a serious threat, especially for vulnerable users. Combating this issue will require a multilateral approach focused on enhancing data privacy, platform security, and public education.
The article highlights a very troubling trend – the use of AI to streamline social media scraping and perpetrate deceptive schemes. This is a stark reminder of the potential misuse of emerging technologies. Combating these evolving cyber threats will require a multi-faceted approach across industry, government, and civil society.
You’re absolutely right. This issue requires a coordinated response to enhance cybersecurity, strengthen data privacy regulations, and educate the public on social media safety. The stakes are high and the threat is only going to grow without decisive action.