The Oversight Board says Meta needs new rules for AI-generated content

The Oversight Board is once again urging Meta to overhaul its rules around AI-generated content. This time, the board says Meta should create a separate rule for AI content that’s independent of its misinformation policy, invest in more reliable detection tools and make better use of digital watermarks, among other changes. 

The group’s recommendations stem from an AI-generated video shared last year that claimed to show damaged buildings in the Israeli city of Haifa during the Israel-Iran conflict in 2025. The clip, which racked up more than 700,000 views, was posted by an account that claimed to be a news outlet but was actually run by someone in the Philippines.

After the video was reported to Meta, the company declined to remove it or add a “high risk” AI label that would have clearly indicated the content had been created or manipulated with AI. The board overturned Meta’s decision not to add the “high risk” label and says the case shines a light on several areas where the company’s current AI rules are falling short.

“Meta must do more to address the proliferation of deceptive AI-generated content on its platforms, including by inauthentic or abusive networks of accounts and pages, particularly on matters of public interest, so that users can distinguish between what is real and fake,” the board wrote in its decision. Meta eventually disabled three accounts linked to the page after the board flagged “obvious signals of deception.”

One of the board’s top recommendations is that Meta create a dedicated rule for AI-generated content that’s separate from its misinformation policy. The rule, according to the board, should include specifics about how and when users are required to label AI content as well as information about how Meta penalizes those who break the rule. 

The board was also highly critical of how Meta uses its current “AI Info” labels, noting that the way they are applied is “neither robust nor comprehensive enough to contend with the scale and velocity of AI-generated content,” especially in times of conflict or crisis. “A system overly dependent on self-disclosure of AI usage and escalated review (which occurs infrequently) to properly label this output cannot meet the challenges posed in the current environment,” the board added.

Meta, the board said, also needs to invest in more sophisticated detection technology that can reliably label AI media, including audio and video. The group added that it was “concerned” about reports that the company is “inconsistently implementing” digital watermarks on AI content created by its own AI tools. 

Meta didn’t immediately respond to a request for comment on the Oversight Board’s decision. The company has 60 days to formally respond to the board’s recommendations. 

The decision isn’t the first time the board has been critical of Meta’s handling of AI content. The group has described the company’s manipulated media rules as “incoherent” on two other occasions, and has criticized it for relying on third parties, including fact-checking organizations, to flag problematic content. Meta’s reliance on fact checkers and other “trusted partners” was again raised in this case, with the board saying that it had heard from these groups that Meta “is less responsive to outreach and concerns, in part due to a significant reduction in capacities for Meta’s internal teams.” Meta, the board writes, “should be capable of conducting such assessments of harm itself, rather than rely solely on partners reaching out to them during an armed conflict.”

While the Oversight Board’s decision relates to a post from last year, the issue of AI-generated content during armed conflicts has taken on a new urgency during the latest conflict in the Middle East. Since the start of the US and Israel’s strikes on Iran earlier this month, there has been a sharp rise in viral AI-generated misinformation across social media. The board, which has previously hinted that it would like to work with generative AI companies, included a suggestion that would seem to apply to more than just Meta. 

“The industry needs coherence in helping users distinguish deceptive AI-generated content and platforms should address abusive accounts and pages sharing such output,” it wrote.

This article originally appeared on Engadget at https://www.engadget.com/social-media/the-oversight-board-says-meta-needs-new-rules-for-ai-generated-content-100000268.html?src=rss