Give Everyone a Vote on Kicking Politicians Off Social Media

In classical Greece, freeborn citizens wanting to shop, speak, or argue went to a single shared space in their city, the agora, or public square.

The most famous agora was in Athens, a city often brought up as the foundational model for democratic societies. But how the Athenians actually ran their agora is a surprisingly salutary model for today’s social media—as well as a warning.

Once a year, the voters of Athens (exclusively, of course, locally born men) were asked whom they wanted kicked out of the city. They gathered in the agora, and anyone who received sufficient votes—more than 6,000, out of a voting population of perhaps 30,000—was expelled. No trial was held, and no defense was mounted. The accused had to stay out of the city for 10 years—or be killed.

This process was called ostracism. Granted, votes were sometimes cast for quite petty reasons. Plutarch’s Lives records the textbook example of Aristides, known as “the Just,” whom one man wanted to ostracize for no other reason than that he was tired of hearing Aristides called “the Just.” Since the man was illiterate, Aristides wrote his own name on the potsherd for him.

It was an imperfect process then—but also a well-established system for getting rid of potential tyrants and of rich and powerful men who had become truly obnoxious. The Athenian exercise grappled with what we now call Popper’s paradox of tolerance: the idea that no society can remain tolerant if it extends tolerance without limit. Athenians understood the principle well enough to have a form of moderation—albeit a brutal and sketchy one—baked right into their policy. And they understood implementation well enough to apply it equally to all who could participate, regardless of how powerful or revered they might have been.

It’s a pity that the agora didn’t last long because there is much we could have learned from its evolution. At some point, regardless of what type of government is in place, inviting a million people into a single space to shout over each other just isn’t practical. Even with democracy—and the Gutenberg press and the telegraph and the telephone and the newspaper—it was impossible to replicate what the Athenians had pulled off.

The early days of the internet merely created another few thousand scattered communities, spread across forums, listservs, chat rooms, and the like. The agora, where a completely random citizen could legitimately threaten the existence of Aristides the Just, seemed to be a thing of the past.

But times change. Facebook (2.8 billion monthly users) and Twitter (330 million) emerged, as did YouTube, Instagram, and the rest of the platforms that today shape the internet and the political discourse within it. And in doing so they seem to have brought back the agora, not as a truly public space but as interconnected, privately owned, privately controlled walled gardens so large that we often forget the walls exist. And on Jan. 7, 2021, the owners of the largest of these walled gardens announced the indefinite suspension of Donald Trump—while he was still president of the United States.

As a result, Republicans who once claimed to support leaner, more minarchist government have now begun to parrot the opposing view: that corporations should not have the right to determine speech within the spaces they host and that these tech companies must be regulated by the government. Certainly it seems a bit ridiculous: Presidents, politicians, and pundits have historically not used, or needed to use, Twitter or Facebook to exercise their power. But they do today. A new agora has given space to demagogues—and power to its unelected gatekeepers.

The walled gardens of Facebook, Twitter, and Google are a little more fragmented than the Athenian original, with algorithmic bias creating little pocket networks of groupthink, but each of these platforms hosts a population larger than any ancient Greek city. They even have the trappings of public assembly rules: strongly worded statements about what will and will not be allowed, report buttons, and armies of content moderators ready to pounce and, in some cases, remove people.

Once inside, users face more rules. Twitter, for example, does not want you engaging in hate speech, targeted harassment, or threats of violence. Ditto with Facebook, which supposedly hates misinformation and has extraordinarily detailed, tiered rules on what constitutes hate speech and what doesn’t. The implication is that these rules are impartial and apply to all.

Unfortunately, in reality, that’s never the case. The powerful get away with bigotry, incitement to violence, and harassment that would get a nobody kicked off. That is what made the banning of Trump special: It was a rare moment of the rules being enforced against a notable member of government.

On a normal weekday, social media companies work as government flunkies. Twitter India, for example, is right now banning hundreds of accounts speaking out against India’s government—including a brief block of the media powerhouse Caravan, noted for its anti-casteist views. Facebook stands accused of allowing India’s Hindu nationalist government to spread hate unchecked and of shielding anti-Muslim posts by top officials of the ruling Bharatiya Janata Party. The United Nations explicitly named Facebook as responsible for the spread of hate speech in Myanmar. In Sri Lanka, musicians-turned-propaganda artists peddle both transphobia and anti-Muslim race-baiting while pro-establishment Buddhist monks threaten race war on video. China’s “Wolf Warrior” Twitter diplomacy is allowed free rein. Misinformation from politicians and their agents has forced us to create fact-checkers and alternate routes of communication to avoid would-be Big Brothers.

Yet no earth-shaking ostracisms ever happen. As long as such actors seem to have links within political establishments, they are left unchecked. Only when they stray far out of political favor do these rules magically begin to exist again, as when Gavin McInnes, the founder of the Proud Boys, was kicked off Twitter, Facebook, and Instagram in 2018. The banning of Trump is no different: Action was taken only after it was clear that a new administration would come into power.

The reason is simple: The actual internal policy of these platforms is a Singaporean-style “Don’t rock the boat” mantra, but one they don’t have the guts to own. It is neither a free speech absolutist position nor a carefully ordered garden. It is a system jury-rigged by companies keen on keeping the political latitude that lets them make money, with no intellectual consistency, occasionally dotted with performative action. Well-spoken policy managers very politely turn the Arab Spring, the #MeToo movement, and queer activism into supposed ethical dilemmas, setting up the idea that to enforce their own rules would be to silence critical voices. Finally, they have the classic deflector: We really care about what you have to say. Unspoken is the true ending: but not enough to do something about it.

Facebook, Twitter, Google et al. have a choice to make. On the one hand, they can choose to be true agoras, with occasional ostracism of even the most powerful. On the other, they can commit to spending more resources on implementing their own rules equally for all.

The processes are certainly simple enough to implement. “Who among your social network would you like to see evicted?” can be asked with tools either direct (surveys) or indirect (report buttons stacking items onto a queue). Set the threshold high enough, and it can become a true political tool, not just a vehicle for personal vendettas. Yet it will lead to abuse. Women will be disproportionately targeted because of the rage fuel of misogyny. Leaders who trouble people’s unearned comfort will be kicked off alongside hatemongers and demagogues. But at least these digital spaces will then be as truly open as some people think they should be. There will be more Gamergates, more state-sponsored doxxing, more trash-talking; Aristides the Just will be cast out by envious illiterates; and while it is highly unlikely that terrorist attacks would be coordinated over statuses, we may expect more Christchurch-like horrors. This is one option: absolute freedom.
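
To make the mechanics concrete, here is a minimal sketch in Python of what such an eviction vote could look like. The class name, the 20 percent threshold, and the data structures are illustrative assumptions rather than any platform’s actual machinery; the point is only that stacking reports onto a per-target tally and gating eviction behind a high threshold is trivially easy to build.

```python
from collections import defaultdict

class EvictionVote:
    """Sketch of a community-wide eviction ballot (hypothetical design).

    Direct votes (surveys) and indirect votes (reports) stack onto a
    per-target tally; eviction triggers only when the tally crosses a
    high threshold relative to community size, echoing Athens' roughly
    6,000 votes out of perhaps 30,000 eligible citizens.
    """

    def __init__(self, community_size: int, threshold_ratio: float = 0.2):
        self.threshold = int(community_size * threshold_ratio)
        self.votes = defaultdict(set)  # target_id -> set of voter_ids

    def report(self, voter_id: str, target_id: str) -> None:
        """Record one vote; each voter counts at most once per target."""
        if voter_id != target_id:
            self.votes[target_id].add(voter_id)

    def should_evict(self, target_id: str) -> bool:
        """True only when votes against the target exceed the threshold."""
        return len(self.votes[target_id]) > self.threshold


# A 30,000-member community with a 20 percent threshold (6,000 votes).
ballot = EvictionVote(community_size=30_000)
for i in range(6_001):
    ballot.report(f"citizen_{i}", "aristides_the_just")
print(ballot.should_evict("aristides_the_just"))  # True
```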

The other choice involves work. Three years ago, in a policy paper, I argued that the nature and structure of languages make it extraordinarily difficult even for the Facebooks and Googles of the world to fully comply with their own content policies and pull off this second option. A more productive model, I argued, would be for them to share language data with native fact-checkers and linguists for each language hosted on their platforms and thus maintain an ever-evolving understanding of the global dynamics of hate speech and misinformation.

Since then, they have demonstrated that they do have some capacity—especially with the fact-checking and labeling of posts as containing contested claims. It began with COVID-19 misinformation and rapidly spread to U.S. election news. Accompanying it were measures to reduce the algorithmic reach of posts containing certain types of content. And while not perfect, and not a very participatory model, these actions illustrate the actual power these companies have, not just with their money but with the data and computational resources they possess.
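
Mechanically, reducing algorithmic reach amounts to down-weighting flagged posts in feed ranking. The sketch below is a minimal illustration under assumed names and an assumed penalty factor, not a description of any platform’s real ranking system.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    base_score: float        # engagement-driven ranking score
    contested: bool = False  # set when fact-checkers flag the claim

# Hypothetical penalty: flagged posts keep circulating but reach fewer feeds.
CONTESTED_REACH_PENALTY = 0.2

def ranking_score(post: Post) -> float:
    """Down-rank posts that fact-checkers have marked as contested."""
    return post.base_score * (CONTESTED_REACH_PENALTY if post.contested else 1.0)

feed = [
    Post("vaccine_rumor", base_score=9.5, contested=True),
    Post("local_news", base_score=6.0),
]
for post in sorted(feed, key=ranking_score, reverse=True):
    print(post.post_id, round(ranking_score(post), 2))
# local_news 6.0
# vaccine_rumor 1.9
```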

Both options require having the backbone to commit to what it says on the tin and to take an actual position. The present intellectual cowardice does real damage to society. And even assuming that those who run these platforms are interested only in self-gain, it costs them their user bases: Free speech absolutists leave in search of spaces fit for them, while the unfairly oppressed and those who see no justice done leave or build their own communities. Parler might be a poorly coded joke, but the growth of Signal and Telegram is an indicator that people on all sides of this debate have seen the faux neutrality for what it is and are on their way out.

And what if platforms refuse to make a choice, you ask?

Across the world, government figures are using both innovations in disinformation and public panic to control the digital sphere. Civil society organizations have begged Facebook to actually implement its community policies.

By wavering between carrying out their own policies and pandering to whoever seems to be most in power, platforms have sided not with their own policies or even national laws but with wannabe dictators, petty media moguls, and pseudoscientific race-baiting celebrities across the world. In doing so, they have turned both their professed missions and their lovingly crafted, public relations-friendly policies into complete and utter shams. The agora may have been reconstructed, but none of its lessons have been learned.

By Staff Writer