Has Social Media Made Progress on Misinformation Since 2016?

Tim Cross 11 November, 2020 

The 2016 US presidential election shone a light on the damaging ways social media can be exploited to disrupt the democratic process. The Cambridge Analytica scandal exposed vulnerabilities in Facebook’s protection of user data, and how that data could be exploited for hyper-targeted political advertising. And Special Counsel Robert Mueller’s investigation into Russian interference demonstrated how social media platforms were used to spread misinformation and swing the election in Donald Trump’s favour.

In the run-up to this year’s election, we’ve seen social platforms take a much more active role in censoring content and preventing the spread of misinformation than in 2016. Platforms like Facebook, Twitter and YouTube have been actively flagging misleading content and banning users who spread disinformation. And several social platforms have updated their policies on political advertising, making it clearer who has paid for political ads and blocking new political ads from running in the week before the election.

Discussions around social media have become heavily politicised. Many Republicans have accused companies like Twitter of censoring conservative voices and opinions, while many Democrats have said platforms like Facebook are fostering hate speech through their inaction. But have the platforms made meaningful progress in combatting misinformation and foreign interference since 2016? VAN spoke with experts to find out.

Nell Greenberg, Campaign Director at Avaaz
The bottom line is Facebook’s overall efforts fell far short of what would have been required to protect the integrity of the US election.

This was an election dominated by disinformation, much of which was made viral on Facebook. According to polling conducted by SurveyUSA and commissioned by Avaaz, 65 percent of registered voters reported seeing political disinformation in their Facebook feeds. Lies about political candidates, false claims of voter fraud, rumours of early victory and election-stealing, as well as content that glorified violence and ‘coups’ defined the conversation for millions of American voters.

Facebook did take some essential steps to stem disinformation, which were improvements from 2016, including partnering with fact-checkers, labelling some misinformation, banning new political ads just ahead of the election, and removing some of the violence-inducing conspiracy networks from its platforms. However, on the whole the company’s actions were too slow, too small and too secretive.

One huge concern is that the actions the company did take were a result of massive scrutiny from civil society, researchers and the media amidst one of the highest-profile elections in the world. What about elections in every other country where Facebook plays a massive role in shaping the political ecosystem? The platform must quickly escalate its efforts globally, not just where it’s politically pushed to do so. Platforms need to urgently and transparently issue corrections to all those exposed to misinformation, and to downrank systematic misinformers. Plus, they need to implement permanent solutions to this misinformation crisis, not apply temporary band-aids. We must also urgently pass regulation that ensures tech giants, such as Facebook, put measures in place to defeat disinformation and protect democracy.

Dipayan Ghosh, Director of the Platform Accountability Project at the Harvard Kennedy School
The disinformation problem has manifested itself in different form and function this year as compared to 2016 — and regrettably, it is difficult to defend the notion that the tech industry has gotten better at addressing disinformation operations. Instead, while social media firms have apparently resolved certain problems that plagued us four years ago, a slate of new ones has emerged and had an apparent media impact in recent days.

A failure to make much-needed assertions about political falsehoods, the growth of groups designed to “stop the steal,” and the spread of Spanish-language disinformation have all plagued the democratic discourse during this election cycle. This demands a public reckoning and resulting action from the platforms.

Chris Meserole, Deputy Director of the Brookings Artificial Intelligence and Emerging Technology Initiative
Social media have meaningfully improved since the last presidential election. This time around, Twitter, Facebook, and YouTube were all far more aggressive in identifying and labeling misleading information about the election and candidates than they were in the past.

Yet there is still far more they can do. The fact that it took so long to de-platform QAnon on Facebook, for instance, or that YouTube still struggled with rampant misinformation on livestreams during the lead-up to the election reveals just how pervasive the problem is—and how much more proactive the major platforms still need to be.


About the Author:

Tim Cross is Assistant Editor at VideoWeek.