Future Tense

Why Everyone Should Be Concerned About Parler Being Booted From the Internet

A smartphone shows the Parler app in Apple's App Store.
Hollie Adams/Getty Images

The real coordinated inauthentic behavior on social media made itself abundantly clear in the aftermath of the assault on Congress. The culprit isn’t a troll farm or a Russian influence operation. This time, the coordinated inauthentic behavior is coming from California.

Late last week, Google and Apple both suspended Parler—the social media platform of choice for the alt-right—from their app stores and demanded a “moderation improvement plan” from the company. Amazon, as of midnight, also suspended Parler from its web hosting services, citing “inadequate content-moderation practices.” Okta, a San Francisco identity management software company, quickly terminated Parler’s access after learning the platform had been using a free trial of its product.

Parler’s suspension should concern us all. I despise white supremacist content and its proliferation online, and the tweeted examples of comments posted on Parler are alarming and deeply unsettling to read. But as Kate Ruane, an attorney for the American Civil Liberties Union, said in a statement, it “should concern everyone when companies like Facebook and Twitter wield the unchecked power to remove people from platforms that have become indispensable for the speech of billions—especially when political realities make those decisions easier.”* There should be ways to hold platforms accountable for hosting hate speech and incitement. Justice, however, is not achieved by endorsing the self-interested actions of other companies.

This moment constitutes a paradigm shift in how the internet is governed. Yes, we’ve seen web hosting services pull their support in the past: Cloudflare, for example, decided to deny service to 8chan, the message board with ties to the horrific Christchurch, New Zealand, attacks and the El Paso, Texas, shooting.* This latest move, however, shows how the largest platforms that provide online “public squares” and infrastructure are taking simultaneous action to centralize their power on the internet, not only by setting bright lines for speech but also, as ACLU lawyer Ben Wizner said, by taking away “the keys to the Internet.”

With the profoundly disturbing power imbalances that exist in who controls our online speech, who gets to demand a moderation improvement plan from Silicon Valley? Who demanded a road map for moderation improvements from Facebook after the company itself admitted that its own app was used to incite violence, and a genocide, in Myanmar? Last I checked, Google and Apple never pulled the Facebook app from their stores, even though violence has most certainly been incited on Facebook time and time again. Tech platforms never rushed to block access to YouTube even after it was found to have helped radicalize the Christchurch shooter. Come to think of it: Why wasn’t Twitter removed from the Google Play Store or the App Store for letting Trump amplify these radical sentiments for years, until we reached this breaking point?

Yes, advertisers have prompted changes in how Big Tech thinks about content rules—YouTube, for example, revamped its extremism policies after ads from large corporations ran in front of unsavory extremist material, prompting an advertiser exodus. But do we really want the carrot and stick of money and advertiser leverage to be what pushes social networks to think about what is right and fair in content moderation, already a nearly impossible standard, and about the resulting impact on our speech online?

What’s most frightening about the demands from Google and Apple is that we simply don’t know what’s next. They have said that Parler must have content moderation plans in place. Next time, what if Google and Apple respond to a platform that has content policies and a content moderation scheme in place—just not ones that they like? Can they push for more changes, or even go so far as to require that other platforms’ policies mimic Google’s, Facebook’s, and Twitter’s rules? It’s only a matter of time before Big Tech is simply drawing the limits of permissible speech for other platforms, and if a platform resists, all Big Tech needs to do is pull it from the app stores or deny it the infrastructure it needs to exist online.

Those on the left should also be deeply concerned about the alarming decisions that just took place. Lest we forget, in the public outcry and debate over the WikiLeaks scandal, Apple pulled a WikiLeaks app from its App Store, and Amazon cut the WikiLeaks website off from Amazon Web Services. In other words, a topic that also divided so many along ideological fault lines spurred companies to act quickly to protect their own self-interest, based on the dominant public sentiment. In fact, the removal of WikiLeaks from these platforms followed a sequence of political pressure and delegitimization. At the height of the controversy, then–Vice President Joseph Biden himself referred to Julian Assange, the founder of WikiLeaks, as a “high-tech terrorist.” As Harvard Law professor Yochai Benkler wrote, “commercial owners of the critical infrastructures of the networked environment can deny service to controversial speakers, and some appear to be willing to do so at a mere whiff of public controversy.”

Some may think that all of this is proof that we need more government involvement in content moderation. But that might not necessarily be the best way to proceed either. Looking only at the United States, the uniquely American model of free speech would make any governmental attempt to regulate online speech incredibly difficult, if not outright unconstitutional. Other countries have stepped into the content moderation regulation space to varying degrees, though these attempts have often drawn significant criticism from human rights organizations.

What we need is an actual content moderation improvement plan for all social media platforms, a plan that we, as users, can use to hold platforms accountable and to stop the largest, most powerful social networks from setting the terms of speech for everyone else. This may sound like an impossible, lofty goal, but most of us probably use, on a daily basis, a platform that is one of the largest experiments in democratized moderation: Wikipedia. Of course, Wikipedia is not perfect. But its global community of editors has the opportunity to debate fiercely and decide whether the information that makes it onto a Wikipedia page is truthful, accurate, and verifiable. This decentralized, democratized approach works, and others have called for similar approaches to be applied to some moderation decisions in Big Tech as well. Perhaps then we could actually hold Facebook, YouTube, Twitter, and even Parler accountable for all the ways their products have been used to incite violence online and in the real world.

Correction, Jan. 11, 2021: This piece originally misspelled Cloudflare and Kate Ruane’s last name.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.