Combatting Mis- and Disinformation while Maintaining Free Speech

“Data Center” by Bob Mical is licensed under CC BY-NC 2.0

The dream of social media was one of “democratization”. Finally, the grasp of “elite” newspaper editors and TV anchors on what information people received would be weakened. Average citizens would get to speak freely. Dictators, unable to control social media platforms, would fall as the truth spread. Protests organized and broadcast largely on social media, such as those of the Arab Spring, appeared to bear this vision out. Ten years later, it has become clear that it is not truth that spreads on social media but lies, and the consequences for democracy and good governance are grave, including in established democracies like the United States.

A set of responses to the spread of false information is beginning to take shape among social media companies: content warnings on dis/misinformation, banning of accounts that spread it, and removal of posts deemed offensive. While it is a good thing that social media companies are finally acting to counter some of the harm they have caused (largely via sharing algorithms — more on that later), these responses are both insufficient and disturbing.

They are insufficient because dis/misinformation continues to spread and because its spreaders and audience can simply switch platforms to avoid restrictions. They are disturbing because concerns (voiced mostly on the right and the far left) about granting Silicon Valley billionaires control over what people write, say, read, and see, and over what counts as acceptable speech, are entirely legitimate. The firms those billionaires control play a vast role in society and discourse. The current approach amounts to undemocratic, private-sector near-monopolies granting themselves immense power over speech. Is there an alternative?

Warnings that social media posts contain disputed information are a good start and should remain. They may help users filter information and encourage them to take a closer look at posts before sharing them. But one need only look at the history of warnings on cigarette packaging to know they are not enough. Social media users are, in a sense, addicted to sensationalism, conspiracy theories, and extremist ideologies. What really kills an addiction, whether to smoking or to disinformation, is inconvenience, not knowledge.

Making disinformation inconvenient mainly means no longer making it so darn convenient. Social media bosses long ago realized that posts that evoke strong emotions spread more quickly and make their platforms more addictive. Unfortunately, outrage is one of the best emotions for guaranteeing an audience. What’s more, truth spreads at just one-sixth the speed of lies on Twitter (and the disparity likely holds on other platforms as well). More likes, shares, and retweets mean more eyeballs on screens, and thus on advertising. Breaking this feedback loop is the key, but competition between social media platforms for engaging content provides a strong disincentive to change how it works. Industry-wide rules could level the playing field and make it easier for firms to explore different revenue models.
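To make the loop concrete, consider a deliberately simplified sketch of engagement-based ranking. The Post fields and reaction weights below are invented for illustration; no platform’s actual formula is this simple or publicly known.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    flagged_as_disinfo: bool = False  # set by AI and/or human moderators

def engagement_score(post: Post) -> float:
    """Toy engagement score: every reaction, angry or not, pushes a post up."""
    return post.likes + 2.0 * post.shares + 1.5 * post.comments

def rank_feed(posts: list[Post]) -> list[Post]:
    # Pure engagement ranking: outrage bait accumulates reactions fastest,
    # so this ordering systematically surfaces it first.
    return sorted(posts, key=engagement_score, reverse=True)
```

Nothing in this objective distinguishes truth from lies; whatever provokes the strongest reaction rises to the top, and the advertising revenue follows.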

What would such rules look like? We might adopt a version of the “Fairness Doctrine” that once guided broadcast media in the United States. It required broadcasters to devote airtime to contrasting viewpoints on controversial issues and to ensure balanced reporting. Such a doctrine, modified for and applied to large social media platforms, would not violate a right to free speech. For one thing, a right to speak is not a right to an audience. For another, such a doctrine provides guidelines, not a list of taboo topics. The focus would be less on policing individual posts for offensive content. Instead, social media platforms would tweak their algorithms’ default settings to ensure users were seeing varying views in their feeds, not just the views they already agreed with. Divisive content, and above all dis/misinformation, once identified via AI and/or human moderators, would be seen only if sent directly to another person (and even then with content warnings).
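As a thought experiment, here is how such a guideline might reshape the ranking sketch above. This builds on the toy Post type and engagement_score from the earlier snippet; the viewpoint classifier is entirely hypothetical, and real moderation is far harder than a boolean flag suggests.

```python
from typing import Callable

def rerank_with_guidelines(posts: list[Post],
                           viewpoint: Callable[[Post], str]) -> list[Post]:
    """Fairness-doctrine-style re-ranking (illustrative only).

    Flagged dis/misinformation is excluded from public feeds entirely;
    under the proposal it would surface only in direct messages, and even
    then behind a content warning. The remaining posts are interleaved
    across viewpoints so users see more than the views they already hold.
    """
    visible = [p for p in posts if not p.flagged_as_disinfo]

    # Bucket by viewpoint label (assumed to come from a classifier and/or
    # human moderation), keeping each bucket sorted by engagement.
    buckets: dict[str, list[Post]] = {}
    for post in sorted(visible, key=engagement_score, reverse=True):
        buckets.setdefault(viewpoint(post), []).append(post)

    # Round-robin across viewpoints: one post per perspective per pass,
    # so no single camp can monopolize the top of the feed.
    feed: list[Post] = []
    while any(buckets.values()):
        for label in list(buckets):
            if buckets[label]:
                feed.append(buckets[label].pop(0))
    return feed
```

Round-robin interleaving is only one crude way to operationalize “balance”; the point is that the guideline constrains the ranking objective itself, not the topics users may discuss.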

There is no reason truth cannot spread faster and farther than fiction online if the right algorithms are implemented under the right legal guidelines. Competition between platforms would continue, but not on the basis of sensationalism. And competition between platforms is a good thing: it ensures consumer choice. In that spirit, these rules should apply only once a platform has passed a certain user threshold (and thus gained a certain hold on attention spans). The existence of smaller platforms itself adds balance to the mix, and if the above rules were applied to them, the compliance burden would act as a barrier to entry that would only entrench the likes of Facebook and Twitter.

What are the odds of such sensible rules becoming law? The outlook isn’t great, but it may be improving in light of recent events and criticisms of social media bias. We may have a short window during which to intervene to protect truth, free speech, and competition online. We must seize it. 
