A departing Facebook employee said ‘omniscient robots owned by Mark Zuckerberg’ won’t save the company from its moderation problems

Mark Zuckerberg at the 56th Munich Security Conference in February 2020. Sven Hoppe/picture alliance via Getty Images
  • An employee said Facebook wants to use automated systems to solve “human problems.”
  • That reality is dystopian, the employee wrote in a farewell “badge post” in 2020.
  • Facebook relies heavily on AI to moderate content and to decide what posts to put in front of users.

Upon leaving Facebook in late 2020, an employee warned their former colleagues that the company’s reliance on AI would not save it from its most pressing problems.

In the farewell note, called a “badge post” internally at Facebook, the employee said the company wants to oversee all human discourse using “perfect, fair, omniscient robots owned by Mark Zuckerberg.”

They went on to say that reality is “clearly a dystopia, but it is also so deeply ingrained we hardly notice it anymore.” The more Facebook tries to solve human problems with engineering solutions, they added, the nearer that future grows.

“We need more radical humanists to interrogate these assumptions, explain why life can’t be sanitized, and disrupt further attempts to centralize power,” the employee wrote.

This screenshot shows a photo of a computer screen with the employee’s ‘badge post’ displayed. Frances Haugen

Photos of the post were included in disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by whistleblower Frances Haugen’s legal counsel. A consortium of news organizations, including The Atlantic and The New York Times, obtained the redacted versions received by Congress, now known as the “Facebook Papers.” Insider has also since obtained copies.

Facebook representatives have said the documents do not paint the entire picture of the company’s business development investments, internal research, and efforts to mitigate harm.

Facebook uses a mix of human reviewers and automated systems to sift through content on its social platform that could be harmful. It also uses computers to help decide what kinds of content to show users. One downside of that approach, the Facebook Papers have shown, is that computer-driven systems have promoted angry, divisive, sensationalistic posts containing misinformation.

Facebook’s reliance on AI was a central focus of Haugen’s testimony before a congressional committee earlier this month.

“I strongly encourage reforms that push us towards human-scale social media and not computer-driven social media,” she told lawmakers. “Those amplification harms are caused by computers choosing what’s important to us, not friends and family.”