
Why Do Facebook’s Algorithms Keep Abetting Racism? 

Mark Zuckerberg (Getty)

Call it algorithmic ignorance. Or maybe algorithmic idiocy. On Thursday, ProPublica uncovered that Facebook’s ad-targeting system, which groups users together based on profile data, offered to sell ads aimed at a demographic of Facebook users who self-reported as “Jew Haters.”

“Jew Haters” started trending on Twitter when the piece went viral, and by Friday Facebook announced it had removed “Jew Haters” and other similarly offensive categories from its advertising service, offering a predictably anodyne apology and explanation.


As Facebook explains, the categories were algorithmically generated from what users themselves entered in the Employer and Education fields of their profiles. Enough people had listed their occupation as hateful bile like “Jew Hater,” their employer as “Jew Killing Weekly Magazine,” or their field of study as “Threesome Rape” that Facebook’s algorithm, toothless by design, compiled them into targetable ad categories.
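To make the failure mode concrete, here is a minimal, purely illustrative sketch; nothing below is Facebook’s actual code, and the profiles data, BLOCKED_TERMS set, and build_ad_categories function are all hypothetical. It shows how naively aggregating self-reported fields into sellable audiences produces exactly this result, and how even a crude word filter would have screened the categories out.

from collections import Counter

# Hypothetical self-reported profile fields (illustrative only, not real data).
profiles = [
    {"employer": "Acme Corp", "field_of_study": "Economics"},
    {"employer": "Jew Killing Weekly Magazine", "field_of_study": "History"},
    {"employer": "Jew Killing Weekly Magazine", "field_of_study": "Threesome Rape"},
    {"employer": "Acme Corp", "field_of_study": "Economics"},
]

# A crude denylist; any serious system would use a real hate-speech classifier.
BLOCKED_TERMS = {"kill", "killing", "rape", "hater", "haters", "hate"}

def is_acceptable(category: str) -> bool:
    """Reject category names containing obviously hateful or violent terms."""
    return not any(word in BLOCKED_TERMS for word in category.lower().split())

def build_ad_categories(profiles, min_users=2, filtered=True):
    """Aggregate self-reported fields into targetable ad categories.

    With filtered=False this mirrors the 'toothless by design' behavior:
    whatever enough users type in becomes a sellable audience, no matter
    how hateful the text is.
    """
    counts = Counter(value for profile in profiles for value in profile.values())
    categories = [name for name, n in counts.items() if n >= min_users]
    return [c for c in categories if is_acceptable(c)] if filtered else categories

print(build_ad_categories(profiles, filtered=False))  # hateful entry becomes a segment
print(build_ad_categories(profiles, filtered=True))   # crude word filter screens it out

The point of the sketch is only that the filtering step is trivial to add; leaving it out is a design choice, not a technical impossibility.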

Facebook’s response leans repeatedly on the points that users themselves self-reported the data and that Facebook removed the categories as soon as it became aware of them. But claiming ignorance of its own algorithms lets Facebook sidestep the more obvious questions: What does it tell us about Facebook that Nazis can proudly self-identify on its platform? Why can’t Facebook’s algorithms determine that words like “rape,” “bitch,” or “kill” aren’t valid occupational terms? Facebook says its AI can detect hate speech from users, so why, apparently, did Facebook choose not to point that AI at its ad tool?

Despite a user base of two billion people, Facebook as a company has very few human faces. There’s COO Sheryl Sandberg, CEO Mark Zuckerberg, and a handful of others. So when a company of this size, and one this reliant on automation, makes a mistake as huge as embedding anti-Semitism in its revenue scheme, there’s no one to blame. Even the apology is uncredited, with no human contact listed, save for the nameless press@fb.com boilerplate.


Zuckerberg and his cohorts made algorithmic decision-making the heart of Facebook’s ad-targeting revenue scheme, then enshrouded those systems in a black box. And as Facebook’s user base has grown, so have its blind spots.

Last year, lawyers filed a class-action suit against Facebook over concerns that its ad-targeting scheme violated the Civil Rights Act. In addition to self-reported ad targeting, Facebook also compiled data to place users into categories they may not even have been aware of. In October, ProPublica revealed that, based on data like friend groups, location, and likes, Facebook put users into categories analogous to race, called “ethnic affinities.”

Advertisers could then either target or exclude users based on their affinity, a grave concern in a country that outlaws denying people housing and employment based on their race. Facebook ended its “ethnic affinity” targeting after the backlash. Unlike in the “Jew Hater” debacle, where Facebook said it didn’t know what its algorithms were doing, here Facebook claimed it couldn’t foresee their disproportionate impact. Call that algorithmic idiocy.


Why do Facebook’s algorithms keep abetting racism? The specific answer is hidden inside Facebook’s black box, but the broader answer may be: it’s profitable. Each Facebook user is a potential source of revenue for the company. And the more they use the site, the more ads they engage with, the more shareable content they produce, and the more user insights they generate for Facebook.

When users reveal themselves as racist, anti-Semitic, and so on, what obligation does Facebook have to remove them and thereby frustrate its own revenue structure? Does removing or censoring users violate their First Amendment rights?

In both the original ProPublica report and the follow-up from Slate, researchers have called for a public database of Facebook’s ad-targeting categories and a broader de-automation push across the company. At this point, Facebook can no longer deny the sore need for an ethical and moral compass somewhere within its advertising business; the company’s algorithms and its racist and anti-Semitic controversies are linked. It’s time for a sweeping paradigm shift toward accountability, out in the open, and not another tepid half step from Facebook within the comfort of its black box.