Earlier today, the Oversight Board announced its findings from an appeals process over the earlier removal of a Colombian political cartoon depicting police brutality. In a statement published online this morning, the board explained why Meta should remove the artwork from Facebook's AI-assisted Media Matching Service banks, a system that automatically scans uploads against previously flagged images and removes those that violate the platform's content policies. The board also argued that the current system is deeply flawed.

"Meta was wrong to add this cartoon to its Media Matching Service bank, which led to a mass and disproportionate removal of the image from the platform, including the content posted by the user in this case," writes the Oversight Board, before cautioning that "Despite 215 users appealing these removals, and 98 percent of those appeals being successful, Meta still did not remove the cartoon from this bank until the case reached the Board."

The Oversight Board goes on to explain that Facebook's existing automated removal systems can amplify, and in this case did amplify, incorrect decisions made by human reviewers. This is especially problematic given the ripple effects of such choices. "The stakes of mistaken additions to such banks are especially high when, as in this case, the content consists of political speech criticizing state actors," it warns.

In its recommendations, the board asked Meta to publish the error rates of its Media Matching Service banks, broken down by content policy, for greater transparency and accountability. Unfortunately, this is where the social media giant's "Supreme Court" differs from the one in Washington, DC: although the board operates as an independent body, Meta is under no legal obligation to adopt its suggestions. Still, with the Oversight Board's opinions on public record, Mark Zuckerberg and other Meta executives may at least face added pressure to keep reforming the company's moderation strategies.
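Meta has not published the internals of its Media Matching Service, but image banks of this kind are commonly built on perceptual hashing: each banked image is reduced to a compact fingerprint, and any new upload whose fingerprint lands close enough to a banked one is removed automatically. The sketch below is a minimal Python illustration of that idea, assuming a simple average-hash scheme; the `MediaMatchingBank` class, the policy label, and the match threshold are hypothetical stand-ins, not Meta's actual implementation.

```python
# Minimal sketch of a media-matching bank, assuming a perceptual
# average-hash (aHash) approach. All names and thresholds are
# illustrative; Meta has not disclosed its real matching algorithm.
from PIL import Image

HASH_SIZE = 8        # 8x8 grid -> 64-bit perceptual hash
MATCH_THRESHOLD = 5  # max Hamming distance treated as a match (assumed)


def average_hash(img: Image.Image) -> int:
    """Downscale to grayscale 8x8, then set one bit per pixel above the mean."""
    small = img.convert("L").resize((HASH_SIZE, HASH_SIZE))
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (px > mean)
    return bits


def hamming(a: int, b: int) -> int:
    """Count differing bits between two hashes."""
    return bin(a ^ b).count("1")


class MediaMatchingBank:
    """Hashes banked under one content policy; uploads matching any banked
    hash are removed automatically, with no fresh human review."""

    def __init__(self, policy: str):
        self.policy = policy
        self._hashes: set[int] = set()

    def ban(self, img: Image.Image) -> None:
        self._hashes.add(average_hash(img))

    def violates(self, img: Image.Image) -> bool:
        h = average_hash(img)
        return any(hamming(h, banked) <= MATCH_THRESHOLD
                   for banked in self._hashes)


if __name__ == "__main__":
    # Synthetic demo: a banked image and a lightly edited repost of it.
    original = Image.new("L", (64, 64))
    original.putdata([x + y for y in range(64) for x in range(64)])
    repost = original.point(lambda p: min(p + 4, 255))  # slightly brightened

    bank = MediaMatchingBank("coordinating-harm")
    bank.ban(original)            # one (possibly mistaken) human decision...
    print(bank.violates(repost))  # True: every near-duplicate now comes down
```

If this assumption holds, it also explains the amplification the board describes: matching is deliberately fuzzy so that crops and re-encodes are caught, which means a single mistaken addition silently takes down every near-duplicate until someone removes the fingerprint from the bank.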