How Social Media Helps Us Fall for Bad Ideas
I used to imagine the marketplace of ideas as an arena where everyone got to throw in their ideas, and the best ideas would win out purely on their own merits.
Of course, like most things involving human beings, the reality is more complex.
Access to the marketplace of ideas is not equal. People who are wealthy or powerful or famous have an easier time being heard. Even middle-class people have an easier time getting their ideas out there than the poor. Recording a podcast, putting together a blog post, self-publishing a book … these are all activities that take resources and time.
Ideas don’t compete in the marketplace of ideas solely on their merits. We too readily accept ideas that come from within our social groups. Did you see a video showing that Biden played an NWA rap song at a campaign event? Fake but widely shared. Have you seen a picture of Trump’s parents wearing KKK uniforms? Fake but widely shared.
We too easily reject ideas that come from outside our social groups. When we are presented with evidence that conflicts with ideas we already hold, we often choose to ignore the evidence rather than reevaluate our beliefs.
Adding strong emotions to the mix skews our ability to evaluate ideas even more. Traumatic events in our lives can create cognitive openings where we become vulnerable to radicalized ideologies.
When we are sad and fearful, we are more willing to accept conspiracy theories that offer us answers.
Easily disproved conspiracy theories like QAnon are even a threat to established religions. When Pastor Clark Frailey asked other Oklahoma Baptist pastors if they were losing congregants to QAnon, the answer was yes. “Frailey had shut down in-person services in March to help prevent the spread of the virus. Without these gatherings, some of his churchgoers had turned instead to Facebook, podcasts, and viral memes for guidance.”
Online hate speech has offline consequences. Did you know that Facebook was used to fuel genocide in Myanmar in 2016 and 2017?
When we feel isolated, our desire to belong can lead us to accept dangerous ideologies.
Like Corinna Olsen who, feeling unmoored after her brother drowned, discovered racist online communities. Hate can bond people: “She wasn’t always sure that she believed what she said when she echoed her new friends’ views, but what mattered was that they wanted to keep talking to her …”
Our desire for belonging, our resistance to ideas outside our social groups, these tendencies are not new. These are very human flaws that have been around ever since people gathered.
What has changed the landscape in the past decade or so is social media.
Facebook, Twitter, Instagram, Pinterest, YouTube … all of these services know they are competing for your time and attention. The more time you spend on their platform, the more ads they can sell. So a lot of very smart people spend a lot of time thinking about how they can keep users engaged on their platform.
Most of these services don’t show you a chronological timeline of everything your friends have posted (unless you dig a bit). They try to show you the most engaging and interesting posts. And since there are too many posts to review manually, algorithms decide which posts are the most engaging.
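As a rough illustration of the idea, not any platform's actual code, here is a minimal sketch of engagement-based ranking. All the weights and field names are hypothetical; the point is simply that sorting by predicted engagement, rather than by time, surfaces whatever provokes the strongest reactions.

```python
# Hypothetical sketch of engagement-based feed ranking -- not any
# platform's real algorithm. The weights below are invented for
# illustration: reactions that signal strong emotion count more.

def engagement_score(post):
    """Score a post by weighted reaction counts."""
    return (post["likes"] * 1
            + post["comments"] * 4
            + post["shares"] * 8
            + post["angry_reactions"] * 5)

def rank_feed(posts):
    """Return posts sorted by engagement score, not by time posted."""
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "vacation-photos", "likes": 120, "comments": 3,
     "shares": 1, "angry_reactions": 0},
    {"id": "divisive-meme", "likes": 40, "comments": 90,
     "shares": 55, "angry_reactions": 30},
]
print([p["id"] for p in rank_feed(posts)])  # divisive content ranks first
```

Notice that a well-liked but quiet post loses to a post that generates arguments and shares. Nothing in the scoring function asks whether the content is true.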
An engaging post provokes a strong reaction: hate, anger, love, fear. In March 2019, NewsWhip found that Facebook’s algorithm to promote meaningful engagement had instead “pushed up articles on divisive topics like abortion, religion, and guns …”
Let’s say you’ve watched a video on YouTube from a right-leaning commentator. Then YouTube might recommend another commentator with similar, but slightly more extreme beliefs. And before you know it, you’ve fallen down a rabbit hole into homophobia and white supremacy.
Or maybe you clicked on a “news” article in Facebook about how coronavirus deaths have been inflated, and a few clicks later you’re reading articles claiming drinking methanol will cure coronavirus.
Another ominous technical trend that plays into social media radicalization is advancements in audio and video editing.
We all know you can manipulate an image. But new technology has made it easier to create convincing fake videos. In 2018, Jordan Peele demonstrated how he could put words in Barack Obama’s mouth. In 2019, someone was able to morph Bill Hader’s face into Tom Cruise and Seth Rogen, all in one two-minute clip. These deep fakes are only going to get better as the tools improve.
Add to this environment one more force preying on social media: hostile governments.
Russian intelligence services have created websites to spread false stories about coronavirus and “chaos in blue cities”. Chinese government disinformation campaigns have used fake social media accounts to spread false stories that “health-care workers in Europe left sick people to die and that President Donald J. Trump planned to lock down the entire United States …” Saudi Arabia used a network of fake accounts to spread messages denying involvement in the killing of journalist Jamal Khashoggi.
These governments (and others) are following a common playbook: maintain networks of social media accounts and websites that pretend to be news, then use those social media accounts to amplify false or misleading stories.
That anti-mask meme you just saw on Facebook might have been created by a government agent hostile to the United States.
All of this put together means in 2020 we have a perfect storm. Deep fakes make it easier for propaganda to appear real. Motivated, persistent government actors push fake or misleading stories as part of coordinated campaigns. And engagement algorithms take all of that information and end up pushing sometimes vulnerable humans closer to ideas that kill people.
Unfortunately, there’s no technological silver bullet that will solve disinformation on social media. Even for companies that are trying.
We need to consume social media with an open mind but not an empty head.
We need to critically examine not just the picture or video or meme or article, but the source of that information. Is that source trustworthy? Can you find confirmation of that story on another source that is?
An idea that challenges our preconceptions is not necessarily wrong. An idea that confirms what we believe isn’t necessarily right. We need to be aware of how our emotional state might lead us to accept ideas we wouldn’t otherwise consider.
Algorithms will not save us. We have to save ourselves.