Social media platforms are not only creating echo chambers, propagating falsehoods, and facilitating the circulation of extremist ideas. Previous media innovations, dating back at least to the printing press, did that, too, but none of them shook the very foundations of human communication and social interaction the way today's platforms do.

Not only are billions of people around the world glued to their mobile phones, but the information they consume has changed dramatically, and not for the better. On dominant social-media platforms like Facebook, researchers have documented that falsehoods spread faster and more widely than comparable accurate content. Though users are not demanding misinformation, the algorithms that determine what people see tend to favor sensational, inaccurate, and misleading content, because that is what generates "engagement" and thus advertising revenue.

As the internet activist Eli Pariser noted in 2011, Facebook also creates filter bubbles, whereby individuals are more likely to be presented with content that reinforces their own ideological leanings and confirms their own biases. And more recent research has demonstrated that this process has a major influence on the type of information users see.

Even leaving aside Facebook's algorithmic choices, the broader social-media ecosystem allows people to find subcommunities that align with their interests. This is not necessarily a bad thing. If you are the only person in your community with an interest in ornithology, you no longer have to be alone, because you can now connect with ornithology enthusiasts from around the world. But, of course, the same applies to the lone extremist who can now use the same platforms to access or propagate hate speech and conspiracy theories.

No one disputes that social-media platforms have been a major conduit for hate speech, disinformation, and propaganda. Reddit and YouTube are breeding grounds for right-wing extremism. The Oath Keepers relied on Facebook in particular to organize their role in the January 6, 2021, attack on the United States Capitol. Former US President Donald Trump's anti-Muslim tweets were found to have fueled violence against minorities in the US.

True, some find such observations alarmist, noting that large players like Facebook and YouTube (which is owned by Google/Alphabet) do much more to police hate speech and misinformation than their smaller rivals do, especially now that better moderation practices have been developed. Moreover, other researchers have challenged the finding that falsehoods spread faster on Facebook and Twitter, at least when compared to other media.

Still others argue that even if the current social-media environment is treacherous, the problem is transitory. After all, novel communication tools have always been misused. Martin Luther used the printing press to promote not just Protestantism but also virulent anti-Semitism. Radio proved to be a powerful tool in the hands of demagogues like Father Charles Coughlin in the US and the Nazis in Germany. Both print and broadcast outlets remain full of misinformation to this day, but society has adjusted to these media and managed to contain their negative effects.

This argument implies that a combination of stronger regulation and new technologies can overcome the challenges posed by social media. For example, platforms could provide better information about the provenance of articles, or they could be discouraged from algorithmically boosting items that might be incendiary or contain misinformation.

But such measures fail to address the depth of the problem. Social media are not only creating echo chambers, propagating falsehoods, and facilitating the circulation of extremist ideas. They may also be shaking the very foundations of human communication and social cohesion, by substituting artificial social networks for real ones.

We are distinguished from other animals mostly by our advanced ability to learn from our community, and to accumulate expertise by observing others. Our most profound ideas and cherished notions come not in isolation or from reading books, but by being embedded in a social milieu and interacting through argumentation, education, performance, and so forth. Trusted sources play an indispensable role in this process, which is why leaders and those with bully pulpits can have such outsize effects. Earlier media innovations capitalized on this, yet none of them modified the very nature of human networks the way that social media have.

What happens when platforms such as Facebook, Twitter, or Reddit start manipulating what we perceive as our social network? The worrying truth is that nobody knows. And though we could eventually adapt to this change and find ways to neutralize its most pernicious effects, that isn't an outcome that we should count on, given the direction the industry has been heading.

Social media's most corrosive effects are starting to look exactly like what the cultural critic Neil Postman anticipated almost four decades ago in his landmark book Amusing Ourselves to Death. "Americans no longer talk to each other, they entertain each other," he observed. "They do not exchange ideas, they exchange images. They do not argue with propositions; they argue with good looks, celebrities, and commercials."

Comparing George Orwell's 1984 to Aldous Huxley's Brave New World, Postman then added that, "What Orwell feared were those who would ban books. What Huxley feared was that there would be no reason to ban a book, for there would be no one who wanted to read one. Orwell feared those who would deprive us of information. Huxley feared those who would give us so much that we would be reduced to passivity and egoism. Orwell feared that the truth would be concealed from us. Huxley feared that the truth would be drowned in a sea of irrelevance."

Whereas Postman was worried more about a Huxleyan future than an Orwellian one, social media have been ushering in both at the same time. While governments acquire the means both to manipulate our perceptions of reality and to reduce us to passivity and egoism, our virtual "friends" are increasingly policing our thoughts. One now must continuously signal one's virtue and call out people who deviate from prevailing orthodoxy. But "virtue" is whatever one's artificial online social circle says it is; and in many cases, it is based entirely on lies.

Hannah Arendt, another prescient twentieth-century thinker, warned about where this can lead. "If everybody always lies to you, the consequence is not that you believe the lies, but rather that nobody believes anything any longer." At that point, social and political life become impossible.

From Project Syndicate
