Italian Premier Confronts Deepfake Image, Warns of AI Dangers
Italian Premier Giorgia Meloni took to social media Tuesday to publicly address a digitally manipulated image of herself that has been circulating online, highlighting growing concerns about the misuse of artificial intelligence technology in politics.
The deepfake photo, which Meloni shared on her Facebook page, depicted the premier posing in bed while wearing lingerie. The artificially created image had been shared by a social media user named Roberto, who suggested Meloni should be “ashamed” of herself—apparently believing or pretending to believe the fabricated image was authentic.
“Deepfakes are a dangerous tool because they can deceive, manipulate and target anyone,” Meloni warned in her post, addressing the broader implications of such technology. “I can defend myself. Many others cannot.”
The incident comes at a time of increasing global concern about the potential for deepfake technology to spread misinformation, particularly targeting public figures and politicians. The European Union has been at the forefront of regulatory efforts to address AI risks, including deepfakes, through its comprehensive AI Act passed earlier this year.
Meloni, who became Italy’s first female prime minister in October 2022, has maintained a strong media presence while leading her right-wing government. Despite the offensive nature of the manipulated image, she approached the situation with a touch of humor, acknowledging that the photo alteration “actually made me look a lot better.” However, she immediately pivoted to the serious implications, adding, “But the fact remains that, in order to attack and fabricate lies, people will now use absolutely anything.”
Many of Meloni’s followers urged her to report the incident to law enforcement authorities, though it remains unclear whether she plans to pursue legal action against those responsible for creating or distributing the image. Italy, like many European countries, has laws that could potentially be applied to deepfake cases, including those regarding defamation and improper use of personal data.
Digital rights experts have noted that politically motivated deepfakes represent a particularly troubling trend that has accelerated with advances in AI technology. The accessibility of deepfake-generating tools has lowered barriers to creating convincing fake content, raising concerns about potential impacts on democratic processes and public discourse.
This isn’t the first time Meloni’s likeness has generated public attention. In February, a minor church-state controversy emerged when a cherub in a Roman church was observed bearing a striking resemblance to the premier. On that occasion, Meloni responded with good humor, writing on social media: “No, I definitely don’t look like an angel,” accompanied by a laughing emoji.
The latest incident highlights how female politicians often face gender-specific attacks that their male counterparts do not. Studies have shown that a significant percentage of deepfake content targets women, frequently in sexualized contexts designed to humiliate or discredit them.
Media literacy experts emphasize that Meloni’s approach—publicly identifying the image as fake while urging others to verify content before sharing it—represents an important strategy in combating misinformation. As AI-generated content becomes increasingly sophisticated, the ability to critically evaluate visual information becomes essential for all social media users.
The premier’s warning about verifying images before sharing them underscores a growing challenge for social media platforms, which continue to struggle with effectively moderating AI-generated content that violates their policies. Despite efforts to implement detection systems, convincing deepfakes often circulate widely before being identified as fraudulent.
As European leaders continue to navigate the rapidly evolving landscape of artificial intelligence, incidents like these demonstrate the personal and political dimensions of technology that can manipulate reality with increasing sophistication.