Meta’s Oversight Board Calls for Overhaul of ‘Incoherent’ Manipulated Media Policy
Meta’s Oversight Board has called on the social media giant to revise its policy on manipulated media after upholding a decision to leave up an altered video of President Joe Biden on Facebook. The board described the current approach as “incoherent” and “confusing to users,” raising particular concern in a year with an unprecedented number of elections worldwide.
The investigation began last October after the board examined a video of Biden that had been edited to show him repeatedly touching his granddaughter’s chest. In reality, the footage showed Biden and his granddaughter exchanging “I Voted” stickers during the 2022 midterm elections, but the manipulation created a misleading impression of inappropriate behavior.
After reviewing the content, the board concluded that while the video was manipulated, it did not violate Meta’s current policy. The policy prohibits only AI-generated deepfakes that make a person appear to say words they never said; other forms of video manipulation fall outside it. Because the Biden video was edited without AI and did not put words in the President’s mouth, it technically fell outside the scope of Meta’s existing rules.
“The Board finds that Meta’s Manipulated Media policy is lacking in persuasive justification, is incoherent and confusing to users, and fails to clearly specify the harms it is seeking to prevent,” the report stated. “In short, the policy should be reconsidered.”
The board, which operates with funding from Meta but maintains independent governance, made several recommendations for policy improvements. It urged Meta to “quickly” reconsider its approach, especially considering 2024’s “record number of elections” worldwide. The board called for the creation of a “single unified policy” on manipulated content, suggesting that the policy should be expanded to include all audio and visual content regardless of how it was created or altered.
Rather than simply removing content that falls into gray areas, the board recommended implementing a labeling system that would indicate when media has been “significantly altered and may mislead.” This approach would provide users with context while still allowing the content to remain accessible.
The recommendations come as social media platforms face increasing scrutiny over their role in spreading misinformation, particularly around elections. With advanced editing tools and AI becoming more accessible, distinguishing between genuine and manipulated content presents a growing challenge for platforms, users, and regulators alike.
Meta spokesperson Corey Chambliss told Forbes that the company is reviewing the board’s decision and will publicly respond to the recommendations within 60 days. Neither the White House nor the Biden campaign immediately responded to requests for comment on the matter.
The case highlights the complex challenges social media companies face in moderating content. As technology evolves, policies that once seemed adequate may quickly become outdated or insufficient. The distinction between AI-generated deepfakes and traditionally edited misleading content may be technically important, but the potential harm to viewers who are misled remains similar regardless of the method used to create the deception.
Industry experts note that as the 2024 presidential election approaches, the prevalence of manipulated media is likely to increase, putting additional pressure on platforms to develop clear, consistent policies that users can understand and that moderators can enforce effectively.
The Oversight Board’s recommendations represent an attempt to push Meta toward a more comprehensive approach to manipulated media—one that focuses less on the technical means of creation and more on the potential for content to mislead users in harmful ways.