Twitter Preemptively Debunking Misinformation [Content Made Simple]
Issue #195: The three stages of the social internet, news on Facebook's Oversight Board, and more.
TWITTER LAUNCHES PRE-BUNKS AHEAD OF U.S. PRESIDENTIAL ELECTION
This new feature is an attempt to curtail misinformation, especially as it relates to the election, but will likely be met with some opposition.
Twitter said Monday it would begin placing messages at the top of users’ feeds to pre-emptively debunk false information about voting by mail and election results, an escalation of the tech company’s battle against misinformation.
Twitter is calling the messages a “pre-bunk,” an approach it says it has never taken before, because the debunking is not a reaction to an existing tweet containing misinformation.
“Election experts confirm that voting by mail is safe and secure, even with an increase in mail-in ballots,” one of the messages scheduled to go live Monday reads. “Even so, you might encounter unconfirmed claims that voting by mail leads to election fraud ahead of the 2020 US elections.” The message has a button to lead users to more information.
A second message, also in the prime location at the top of users’ feeds, is scheduled to go live Wednesday and address misinformation about the timing of election results, the company said in a statement.
I applaud any effort by social media companies to thwart misinformation. I am convinced that the spread of misinformation on social media is one of the more dire threats posed by the medium. At the same time, social media platforms have fumbled this responsibility many times throughout their collective history, and users are right to be skeptical about possible bias from these platforms—they are run by humans with opinions, after all! All of that said, I am glad for the effort and I hope it helps users.
ON THE POD
The U.S. presidential election is next week, and social media companies like Twitter and Facebook have begun to deploy features and tools to combat the onslaught of misinformation that has already begun and will continue through November. We talk about it.
HITTING THE LINKS
While we’re talking about content moderation on social media platforms, this is notable news.
The Oversight Board is a global body that will make independent decisions on whether specific content should be allowed or removed from Facebook and Instagram. Board Members are drawn from around the world with backgrounds in free expression, digital rights, online safety and other related fields.
"The Board is eager to get to work," said Catalina Botero-Marino, Co-Chair of the Oversight Board. "We won't be able to hear every appeal, but want our decisions to have the widest possible value, and will be prioritizing cases that have the potential to impact many users around the world, are of critical importance to public discourse, and raise questions about Facebook's policies."
I’ve noticed this progression online for some time, and I think it’s worth our attention.
Stage 3 ought to concern us. I say this even as I find myself drifting toward a Stage 3 mentality. In Stage 2 there is intense conflict, but people are at least talking with each other and engaging those with whom they disagree. They live in different worlds, engineered by the algorithms that deliver them personalized content, but they are working it out. In Stage 3, by contrast, we simply retreat into our own worlds and stop engaging with people who live in different ideological spaces. On the surface this seems nice: it produces less conflict and releases some of the tension that characterizes social media today. But by avoiding conflict in the near term, retreating into our own little ideological worlds may be setting us up for a greater conflict in the long term.
Every social media platform creates an environment conducive to ideological echo chambers and polarization. Facebook seems to do this more than others. This article explains how.
When Mark Zuckerberg appears at a Senate committee hearing on Wednesday, he will no doubt be asked about Facebook’s content moderation policies. He may face important questions about how Facebook decides when content is not merely misleading, but outright false and harmful. An equally important question lawmakers may not ask — but should — is how Facebook’s algorithms shape the news and information that users consume.
Zuckerberg has recently denied that Facebook is a right-wing echo chamber. Instead, he claimed that the content that gets the most visibility on Facebook is “the same things that people talk about in the mainstream.” But Zuckerberg also admits that “it’s true that partisan content” attracts more likes, comments and shares. Those are precisely the signals that lead Facebook algorithms to push polarizing partisan content into people’s Facebook feeds.
THE FUNNY PART
If you like this, you should subscribe to my free newsletter of funny content I find online. It’s called The Funnies. It delivers on Saturday mornings.
You can subscribe to The Funnies here. (It is and will always be free.)