Meta’s Oversight Board takes on the Israel-Hamas War
The board promises to move quickly for a change — but is it focused on the right problem?
Today let’s talk about the Oversight Board’s move to weigh in on the ongoing controversy about how Meta is moderating content related to the war between Israel and Hamas. The board is making an effort to show that it can prove useful during an unfolding crisis. But given the speed and scale of the conflict — and Meta’s cool reception to many of the board’s recent ideas — it’s unclear whether the result will go much beyond restoring a couple of posts to Facebook and Instagram.
On Thursday the Oversight Board, a Meta-funded but independent body that is empowered to make binding decisions about whether posts on the company’s apps should come down or stay up, announced it would take two cases stemming from the Israel-Hamas war. For the first time, the board said it would conduct its review on an expedited basis — meaning that its decision could come in as little as 48 hours, and no later than 30 days. (Regular decisions typically take the board about three months.)
The first case selected today concerns an Instagram post:
[It] includes a video showing what appears to be the aftermath of a strike on a yard outside Al-Shifa Hospital in Gaza City. The content, which was posted on Instagram in early November, shows people, including children, injured or dead, lying on the ground and/or crying. A caption in Arabic and English below the video states that the hospital has been targeted by the “usurping occupation,” a reference to the Israeli army, and tags human rights and news organizations. Meta initially removed the post for violating its rules on violent and graphic content.
The second is a Facebook post:
[It] shows a woman begging her kidnappers not to kill her as she is taken hostage and driven away on a motorbike. The woman is seen sitting on the back of the motorbike, reaching out and pleading for her life. The video then shows a man, who appears to be another hostage, being marched away by captors. In a caption, the user who posted the content describes the kidnappers as Hamas militants and urges people to watch the video to gain a “deeper understanding” of the horror that Israel woke up to on October 7, 2023. The user posted the content around a week after the October 7 attacks. Under its Dangerous Organizations and Individuals policy, Meta has designated Hamas as a Tier 1 dangerous organization and designated the October 7 attacks as a terrorist attack.
Meta initially said the post violated two rules: one prohibiting violence and incitement, and one prohibiting content that depicts certain terrorist incidents at the moment of the attack when victims are visible. (The latter rule was a temporary revision to Meta’s community guidelines, a detail that will become important in a minute.)
The removal of the first post, then, speaks to anxieties that Meta is acting too aggressively in restricting speech by Palestinians who are speaking out about the terrible impact of the war on civilians. The removal of the second post reflects the opposite anxiety: that Meta is acting too aggressively to silence Israelis who are speaking out about the atrocious October 7 attacks and their aftermath.
In any case, neither removal would stand. Meta restored the first post as soon as the board told the company it was selecting the case for review, returning it to Instagram behind a warning screen.
The second post is more complicated. After October 7, Meta had banned users from showing hostages being taken in the conflict. But users kept posting such videos anyway, often to raise awareness of the hostages’ plight, and on Tuesday — presumably after the board told Meta it was going to hear this case, too — the company revised its policy once again to allow these videos so long as the posts condemn the attacks.
In both cases, then, simply by taking them up, the board prompted Meta to reverse itself. This isn’t uncommon: over the past three years, the board’s announcement of a case has often coincided with Meta reversing its original decision before the case could even be heard. I often complain that the board hears too few cases and acts too slowly in general, but this is one way in which it does spur quick action. If you believe that Meta erred in its initial decisions in the cases above, you should be glad that the board intervened and got the posts restored to Facebook and Instagram within a few weeks.
At the same time, Meta’s quick action in response to the board could have the odd effect of making the board’s expedited review moot. The Oversight Board typically only takes cases where it has strong reason to believe that Meta has made a mistake. Assuming that was the situation here, Meta has already resolved the issue. It doesn’t matter whether the decision comes in 48 hours, 30 days, or 30 years — the posts have already been restored.
The board could go further, as it often does, and make policy recommendations to Meta. These are non-binding, though in practice Meta often implements them in whole or in part. (In 2021, the company partially or fully implemented 55 board recommendations and rejected 32.)
A board spokesman wouldn’t tell me whether members plan to issue policy recommendations in these cases. But we know the board is concerned about over-enforcement of moderation guidelines against Palestinians generally. The board previously requested that Meta conduct a human rights assessment of its impact during an escalation in the conflict between Israel and Palestine in May 2021; the resulting report found that “Meta’s actions … appear to have had an adverse human rights impact … on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination, and therefore on the ability of Palestinians to share information and insights about their experiences as they occurred.” Much of this seems to be the result of reliance on automated enforcement systems.
Assuming it does want to see some policy changes, though, the question is where the board could find leverage to meaningfully improve the system. Meta’s current policy on violent and graphic content is already fairly nuanced. While it bans gore in most contexts, the policy also carves out an exemption for the exact sort of post in the first case above:
In the context of discussions about important issues such as human rights abuses, armed conflicts or acts of terrorism, we allow graphic content (with some limitations) to help people to condemn and raise awareness about these situations.
If that’s the case, what happened? I’m speculating, but it seems likely that one of the company’s contracted content moderators — or automated systems — made a mistake. It’s a deeply unsatisfying answer, particularly given the high-stakes nature of the error. But it’s also to be expected. The board reports that the number of appeals it has received since October 7 has tripled — a sign of the surge in posts related to the conflict across Meta’s platforms.
But content moderation at Meta has not scaled with the surge. The company’s layoff-heavy “year of efficiency” included cuts to moderation teams, though the company has played down the impact of the job losses. Still, it is no surprise that the company is making moderation mistakes here. The chaotic nature of war, the stressful and even traumatic character of content moderators’ work, and the sheer flood of content are more than enough to explain why the Oversight Board was able to find two cases in which someone employed by Meta made the wrong call.
The human rights assessment suggests that there is something deeper here, of course: that the over-enforcement of rules against Palestinian speech is in fact a policy problem. A primary reason is that Hamas is designated as a terrorist organization, and Meta and other platforms are hyper-sensitive to accusations that they host or promote terrorist propaganda. Determining which posts in a war zone are coming directly from Hamas, and which are coming from average Palestinians, is difficult, nuanced work. Moderators being asked to make judgment calls dozens or even hundreds of times a day are bound to make mistakes — and Hamas’ status as a terrorist organization all but ensures that over-enforcement of rules against Palestinians will continue.
There is no number of cases the board could take, or speed with which it could adjudicate them, that will alter that basic dynamic. And while there are no doubt refinements to Meta’s policies that it can suggest, the far bigger issue here may be that the company lacks the ability to consistently apply its rules at scale.
That’s not a reason to give up on the whole project of the Oversight Board, which remains a valuable check on some of Meta’s worst impulses. And we should be heartened that, three years into its work, its membership has at last summoned the energy to attempt to work quickly.
But as fast as the board can work, content from the conflict in Israel is being posted even faster. As it begins to deliberate, the board should be careful not to mistake an enforcement issue for a policy problem.
On the podcast this week: Kevin and I put Google’s new Gemini Pro model through its paces. Then, we consider the prospects of Tesla’s Cybertruck. And finally, a look at some of the week’s most important AI news.
Apple | Spotify | Stitcher | Amazon | Google | YouTube
Governing
Governments are spying on Apple and Google users through push notifications, US Senator Ron Wyden warns in a letter to the Justice Department. (Raphael Satter / Reuters)
The New Mexico attorney general’s office is suing Meta, alleging that Facebook and Instagram recommend sexual content to minors and promote minors’ accounts to child predators. (Katherine Blunt and Jeff Horwitz / The Wall Street Journal)
As countries around the world debate how to regulate AI, the technology is already outpacing regulators and policies. (Adam Satariano and Cecilia Kang / The New York Times)
The FTC is still trying to stop the already closed Microsoft-Activision deal, arguing that the judge who allowed it held the agency to too high of a standard. (Diane Bartz / Reuters)
The cyberattack on the US Department of Health and Human Services in March 2020, attributed to a state actor, was more serious than the department let on — and turned out to be the largest-ever DDoS attack on the US government. (Jordan Robertson and Riley Griffin / Bloomberg)
The United Kingdom accused a unit of Russia’s Federal Security Service of being behind a campaign to hack and leak information about the country’s government officials. (Alexander Martin / The Record)
European regulators are reportedly set to deem iMessage not an “important gateway” for business users, exempting it from regulation under the Digital Markets Act. So green text bubbles it is. (Samuel Stolton / Bloomberg)
AI could revolutionize mass spying and surveillance, security technologist Bruce Schneier warns, and corporations and governments could weaponize that. (Schneier on Security)
Industry
TikTok is reportedly planning a slew of initiatives to combat hate, including a social media campaign called “Swipe Out Hate” featuring creators, following complaints of antisemitism and other hate speech on the platform. (Kaya Yurieff / The Information)
ByteDance is reportedly offering to buy back up to $5 billion worth of shares from investors as its IPO plans stall. (Zhou Xin / South China Morning Post)
A whole bunch of Meta updates today:
Messenger is getting end-to-end encryption by default for direct messages and calls. A big deal and a good thing. (Jay Peters / The Verge)
Tags — like hashtags, but with support for multiple words and special characters — arrived globally on Threads. (Jay Peters / The Verge)
Meta’s celebrity-based AI characters are now live in the US for people to chat with on WhatsApp, Messenger, and Instagram, with support for Bing search. Some characters will also “remember” conversations. (Sarah Perez / TechCrunch)
Imagine with Meta, the company’s standalone AI image generator powered by its Emu model, is now available. (Kyle Wiggers / TechCrunch)
Meta AI will now let group chat users create AI images using prompts that can build on top of each other with a new feature called “reimagine.” (Sarah Perez / TechCrunch)
WhatsApp voice messages can now be set to disappear after being listened to once. (WhatsApp)
Many Instagram videos posted before late 2014 have lost their audio, a problem that seemingly started this year. Meta said it’s working on a fix. (Jay Peters / The Verge)
Elon Musk ranted that Bob Iger should be fired after Disney pulled its advertising from X. (Oliver Darcy / CNN)
xAI’s snarky “anti-woke” chatbot, Grok, is now rolling out to X Premium Plus subscribers. (Maxwell Zeff / Gizmodo)
An AI-powered “Help me write” feature is coming to Chrome; it will draw on both user prompts and the context of the site users are writing on. (Kyle Bradshaw / 9to5Google)
Early reactions to Google’s Gemini Pro haven’t been great — the model gets basic facts wrong, makes mistakes in translation, and, in response to questions about controversial news, tells users to Google it. (Kyle Wiggers / TechCrunch)
Amazon seems to be struggling to catch up with generative AI, and inaccuracies in its Q chatbot aren’t helping. (Corey Quinn / Last Week in AWS)
OpenAI’s Sam Altman was named TIME’s CEO of the Year, one year after the release of ChatGPT and following a tumultuous Thanksgiving for Altman. (Naina Bajekal and Billy Perrigo / TIME)
Helen Toner, one of four OpenAI board members who fired Altman, said the move wasn’t about safety — it was about a lack of trust in ensuring that AI systems are built responsibly. (Meghan Bobrowsky and Deepa Seetharaman / The Wall Street Journal)
Twitch is planning to shut down its operations in South Korea, with CEO Dan Clancy saying the country is too expensive to operate in, despite efforts to reduce costs. (Jordan Fragen / VentureBeat)
The new Mammoth 2 app promises to make Mastodon much simpler for users, including features like Smart Lists, similar to the old lists on Twitter. (David Pierce / The Verge)
Those good posts
For more good posts every day, follow Casey’s Instagram stories.
Talk to us
Send us tips, comments, questions, and policy recommendations: casey@platformer.news and zoe@platformer.news.