Meta exempted top advertisers from standard content moderation process
Meta has exempted some of its top advertisers from the usual content moderation process, protecting its multibillion-dollar business amid internal concerns that the company’s systems have mistakenly penalized top brands.
According to 2023 internal documents seen by the Financial Times, the Facebook and Instagram owner introduced a series of “guardrails” to “protect high spenders.”
The previously undisclosed memos state that Meta will “suppress detections” based on how much money an advertiser has spent on the platform, and that some top advertisers will instead be reviewed by humans.
One of the documents suggested that a group called “P95 spenders,” those who spend more than $1,500 a day, was “exempt from ad restrictions” but would still “eventually be subject to manual human review.”
The memos precede this week’s announcement by CEO Mark Zuckerberg that Meta is ending its third-party fact-checking program and scaling back its automated content moderation as it prepares for Donald Trump’s presidency.
The 2023 documents show that Meta found its automated systems had incorrectly flagged some of the highest-spending accounts for violating company rules.
The company told the FT that higher-spending accounts were disproportionately subject to false flags for potential breaches. It did not respond to questions about whether any of the measures in the documents were temporary or ongoing.
Meta spokesman Ryan Daniels said the FT’s report was “simply inaccurate” and “based on a reading of documents that clearly state that this effort was intended to address something we’ve been very public about.”
Advertising accounts for the majority of Meta’s annual revenue, estimated at about $135 billion in 2023.
The tech giant typically moderates ads using a combination of artificial intelligence and human reviewers, in an effort to remove material that violates its standards, such as scams or harmful content.
In a document titled “Preventing High Spender Mistakes,” Meta says it has seven guardrails that protect business accounts generating more than $1,200 in revenue over a 56-day period, as well as individual users who spend more than $960 on advertising over the same period.
It wrote that the guardrails help the company “determine whether a detection should proceed” and are designed to “suppress detections . . . based on characteristics such as the level of advertising expenditure.”
It cited as an example a business that is “in the top 5 percent of revenue.”
Meta told the FT that it uses higher spend as a guardrail because it often means a company’s ads will have greater reach, and therefore the consequences could be more severe if a company or its ads were removed by mistake.
The company also acknowledged that it prevented some high-spending accounts from being disabled by its automated systems, instead sending them for human review, when it was concerned about the accuracy of its systems.
However, it said all businesses are still subject to the same advertising standards and no advertiser is exempt from its rules.
In the “Preventing High Spender Mistakes” memo, the company rated different categories of guardrails as “low,” “medium,” or “high” in terms of their “defensibility.”
Meta staff rated the practice of having spend-related guardrails as having “low” defensibility.
Other levers, such as using knowledge of a business’s credibility to help decide whether a policy violation detection should be applied automatically, were labeled “high” defensibility.
Meta said the term “defensibility” refers to how difficult the guardrails would be to explain to stakeholders if they were misinterpreted.
The 2023 documents do not name the high spenders covered by the company’s guardrails, but the spending thresholds suggest that thousands of advertisers could have been exempted from the normal moderation process.
Estimates from market intelligence firm Sensor Tower show that the top 10 US spenders on Facebook and Instagram include Amazon, Procter & Gamble, Temu, Shein, Walmart, NBCUniversal and Google.
Meta has posted record revenues in recent quarters and its shares are trading at all-time highs as the company recovers from a pandemic slump in the global ad market.
But Zuckerberg has warned of threats to his business, from the rise of AI to ByteDance-owned rival TikTok, which has grown in popularity among younger users.
A person familiar with the documents said the company “prioritises profit over user integrity and health,” adding that internal concerns were raised about bypassing the standard moderation process.
Zuckerberg said Tuesday that the complexity of Meta’s content moderation system had “caused too many errors and too much censorship.”
His comments came after Trump last year accused Meta of censoring conservative speech and suggested that if the company meddled in the 2024 election, Zuckerberg would “spend the rest of his life in prison.”
Internal documents also show that Meta discussed applying other perks to some of the highest-spending advertisers.
In one memo, Meta staff suggested “offering more aggressive protection” against over-moderation to what it called “platinum and gold spenders,” who together generate more than half of the company’s ad revenue.
“Enforcing false positives against high-value advertisers costs Meta revenue [and] destroys our trust,” the memo reads.
It proposed exempting these advertisers from certain enforcement actions, except in “very rare cases.”
The memo shows that staff concluded platinum and gold advertisers were “not an appropriate segment” for the broad exemption, as about 73 percent of enforcement actions against them were justified, according to the company’s tests.
Internal documents also show that Meta has identified multiple AI-generated accounts in the big spender categories.
Meta has previously come under scrutiny for making exceptions for important users. In 2021, documents leaked by Facebook whistleblower Frances Haugen showed the company had an internal system called “cross-check” designed to review content from politicians, celebrities and journalists to make sure posts weren’t mistakenly removed.
According to Haugen’s documents, this was sometimes used to shield some users from enforcement even when they violated Facebook’s rules, a practice known as “whitelisting.”
Meta’s Oversight Board, an independent “Supreme Court”-style body funded by the company to rule on its most difficult moderation decisions, found that the cross-check system had allowed harmful content to remain online and called for an overhaul, which Meta said it has since undertaken.