Tech companies have learned they have a responsibility to weed out extremism

In the aftermath of last year’s presidential election, tech execs were unanimous in denying that their various platforms had anything to do with the spread of falsehoods, fake news, and outright lies that allegedly contributed to President Trump’s unexpected victory.

“Personally, I think the idea that fake news on Facebook, it’s a very small amount of the content, influenced the election in any way is a pretty crazy idea,” Facebook CEO Mark Zuckerberg said less than a week after the election.

How wrong he was.

Even after it was apparent that the spread of fake news did play a role in the outcome of the election, the tech world wasn’t sure it could or should stop it. One tech executive told me last December that so much content is uploaded to various tech platforms every day that properly policing all of it would be impossible. Plus, there are all those free speech issues the tech companies would have to navigate. There was just no way to fix the problem, the tech industry said.

Keep in mind these statements were coming out of Silicon Valley, where, supposedly, the greatest technological and entrepreneurial minds of our time love to brag about their intelligence and how great they are at solving problems. Yet they failed to see the responsibility that comes with managing the various services that have become the dominant form of news consumption for many people.

Now, nine months later, you’re hearing a strikingly different tone from the tech world. Apparently, they draw the line at Nazis.

Following the horrific events in Charlottesville last weekend, a slew of tech companies both large and small made moves to eliminate the spread of hate, anti-Semitism, white supremacy, Nazism, and all sorts of other nasty things from their various platforms.

Let’s recap:

  • Apple and PayPal stopped processing payments for sites that sell white supremacist merchandise.
  • GoDaddy and Google canceled the domain registration for the extremist site Daily Stormer. The moves followed Daily Stormer’s decision to publish a horrible story about Heather Heyer, the woman killed during the Charlottesville protests.
  • Cloudflare dropped Daily Stormer as a customer, ending the protection it provided to the site against distributed denial-of-service (DDoS) attacks.
  • Facebook removed links to hateful articles about Heather Heyer in the News Feed.
  • The chat app Discord shut down some of the servers that white supremacists used to organise the protests in Charlottesville.
  • Airbnb reiterated it wouldn’t allow white supremacists to use its app to organise lodging for protests.
  • Spotify removed “hate bands” from its music library.

On top of all that, there was a chorus of tech leaders, including Apple CEO Tim Cook and Microsoft CEO Satya Nadella, who denounced the events in Charlottesville and called out Trump for his misleading “many sides” response.

What a difference a few months of a heated culture war and the killing of an innocent woman can make in the tech world’s perspective.

I think tech companies have finally realised a certain responsibility comes with owning the platforms and services that deliver news, information, and entertainment to billions every day. Just as newspaper editors have to carefully vet content for truth, platforms will have to find ways to moderate the information posted on their sites, but at a massive scale.

It’s a welcome change of heart. Better late than never.

But it’s not ideal, either. Some have rightfully raised concerns that the kind of policing we’ve seen over the last week could lead to a slippery slope where content and users are booted off certain platforms on a whim, without some kind of process. The Electronic Frontier Foundation (EFF) published a great recap of those arguments this week, saying the moves tech companies used to silence white supremacists could “soon be used to silence others.”

While tech companies have a right to control their platforms, doing so could lead us to a place where their responses are influenced by headlines or public outcry rather than being the result of a deliberate process. Even Cloudflare CEO Matthew Prince admitted his decision to stop working with Daily Stormer was emotionally charged; he dropped the site as a customer because he was in a “bad mood.”

That’s fine when you’re leaving Nazis in the dust, but it could set a harmful precedent if it encourages other tech CEOs to drop customers whose political views they disagree with. Even if their hearts are in the right place, jumping on the bandwagon isn’t always the right answer. (OkCupid learned that Thursday when it kicked a white supremacist off its dating service.)

Tech companies are going to have to strike a delicate balance, and I don’t think they will figure it out right away. Finding that balance is going to take months of experiments, screwups, and horror stories before the tech companies put the right plans in place. EFF’s recommendation that tech firms “have a process, don’t act on headlines” feels like a good start.

In the meantime, it’s going to be messy. For the first time in human history, the entire world is connected in a way that lets people bypass the traditional — and typically carefully curated — sources of news and information. On top of that, there are now apps that make it easy for bad actors to organise.

And tech companies will likely be facing a constant game of cat-and-mouse. Boot the fake news purveyors and supremacists from one service, and they will find another or even build their own.

But the good news is that bad actors like those are finally getting kicked off the services that matter.
