
Tech Leaders Collaborate on Addressing Synthetic Media Threats to 2020 U.S. Election

In preparation for the 2020 U.S. presidential election, the Carnegie Endowment for International Peace has brought together over 100 experts from three dozen organizations to address growing concerns about synthetic and manipulated media. These private meetings, which included representatives from major technology companies and other stakeholders, aimed to develop strategies to combat the potential disruption that digitally falsified content could cause during the election.

The collaboration focused on establishing common definitions of inappropriate election-related synthetic media, creating effective response playbooks, and equipping platforms with ethical guidelines for content moderation. To inform these discussions, Carnegie commissioned four research papers exploring the legal, ethical, and practical dimensions of addressing synthetic and manipulated media.

“The rise of what we might call ‘digitized impersonations’ is becoming increasingly concerning to lawmakers and scholars who recognize the risks they pose to both individuals and society,” notes Thomas E. Kadri, resident fellow at Yale Law School, in one of the commissioned papers.

The initiative distinguishes between two types of falsified media. “Synthetic media,” commonly known as deepfakes, refers to digital falsifications created using AI techniques, while “manipulated media” encompasses other digital alterations, sometimes called “cheapfakes.” Not all such content is harmful—these technologies have legitimate uses in education, art, and political commentary—but their potential for election interference has raised significant concerns.

Robert Chesney and Danielle Citron, whose work is cited extensively in the Carnegie papers, have catalogued numerous scenarios where synthetic media could disrupt elections. These include fake videos showing officials taking bribes, making racist statements, or meeting with criminals. A particularly damaging scenario could involve a “fake audio clip ‘revealing’ criminal behavior by a candidate on the eve of an election,” with little time for debunking before voters go to the polls.

The legal implications of synthetic and manipulated media are complex. An outright ban would violate First Amendment protections, as “falsity alone” does not remove expression from constitutional protection. Various legal frameworks might apply in specific cases, including intellectual property law, right of publicity claims, defamation torts, and criminal statutes against impersonation, but each has significant limitations.

Section 230 of the Communications Decency Act poses a particular challenge, as it largely immunizes platforms from liability for user-generated content. This gives platforms significant discretion in how they respond to fabricated media.

The research suggests that platforms should treat synthetic and manipulated media intended to influence the 2020 election as “ethically problematic by default,” according to David Danks and Jack Parker of Carnegie Mellon University. Their analysis proposes that the burden of proof should rest with those defending such content, particularly when it could harm individuals or undermine public trust.

When considering platform responses, Megan Metzger of Stanford University’s Global Digital Policy Incubator reviewed existing research on misinformation interventions. She found that removal of content is most effective when implemented quickly across all platforms, while “downranking” can reduce exposure by approximately 80 percent according to data from Facebook.

Labeling content as false or manipulated shows promise, though its effectiveness varies significantly based on implementation. “Pre-exposure warnings are typically more effective than post-exposure warnings,” Metzger notes, “especially when the warning educates people about the impacts of receiving misinformation.” The most effective warnings are intrusive, requiring users to opt in before viewing potentially manipulated content.

The ethical framework proposed by Danks and Parker suggests platforms have a responsibility to consider the interests of multiple stakeholders: content producers, platforms themselves, media consumers, directly impacted individuals, and the broader political and social communities. They argue that platforms can ethically remove content that “presents false information about someone that will likely lead to significant harm to the target without the target’s consent.”

As the 2020 election approaches, this collaborative work between technology platforms and researchers provides a foundation for addressing what could become a significant threat to electoral integrity. The initiative represents a proactive effort to develop both technical and ethical guidelines before manipulated media can substantially disrupt the democratic process.


