The European Commission today proposed an EU-wide ‘Code of Practice on Disinformation’ to tackle fake news, suggesting that social media sites and other news aggregators may face regulation if they fail to raise their standards.
The new code of practice would be established by online platforms themselves rather than by the EU, but the Commission outlined specific aims for what this code should achieve, and made it clear that failure to make substantial progress towards these goals may lead to greater intervention from the Union.
The first of five aims outlined is to ensure greater transparency around sponsored content, and in particular advertising, as well as to restrict targeting options for political advertising and to reduce revenues for “purveyors of disinformation”.
Facebook, which was specifically named in the Commission’s announcement as a cause for concern following the Cambridge Analytica scandal, has already taken steps along these lines. The company announced earlier this month that it will begin clearly labelling political ads and providing information on which individuals or groups paid for them. However, the Commission’s announcement did not comment on these actions, so it’s unclear whether it would judge them to constitute substantial progress.
The EC would also like to see changes made to algorithms for surfacing news on online platforms. It called for greater clarity about the functioning of algorithms and the enablement of third party verification, as well as for online platforms to make it easier for users to discover and access different news sources representing alternative viewpoints.
An accompanying study on fake news and disinformation released today by the European Commission’s Joint Research Centre argued that the shift from direct access to newspapers to indirect, algorithm-driven distribution of news has led to a reduction in quality. The report said this shift may contribute to a news market failure, significant wording given that correcting market failures is part of the EU’s remit.
Alongside these changes, the new code would also require measures to close fake accounts and tackle bots, and to enable fact checkers, researchers and public authorities to continuously monitor online disinformation.
The Commission proposed that such a code should be developed by July, and should be seen as a first step. Mariya Gabriel, Commissioner for Digital Economy and Society, suggested that a failure to make sufficient progress may warrant further action from the EU.
“We are calling on all actors, in particular platforms and social networks who have a clear responsibility, to act on the basis of an action plan aiming at a common European approach so that citizens are empowered and effectively protected against disinformation,” she said. “We will closely monitor the progress made and may propose further actions by December, including measures of regulatory nature, should the results prove unsatisfactory.”
As well as this new code, a number of other measures were proposed, including promotion of voluntary online identification systems to improve traceability and identification of suppliers of information, and further support of quality journalism across EU member states.
The Commission says it will now look to convene a forum for relevant stakeholders including online platforms, the advertising industry and major advertisers to secure a commitment to coordinate and scale up efforts to tackle disinformation.