Major publisher trade groups and organisations from across the world have signed a joint letter calling for the development of a legal framework to protect publisher content which powers AI applications, and to maintain trust in news content as generative AI plays a growing role in content creation.
Signatories of the letter include the News Media Alliance, which represents more than 2,000 news businesses in America, and the European Publishers Council, which counts Axel Springer, News UK, The Guardian, DMG Media, Condé Nast, and G+J among its members. The Associated Press, Agence France-Presse, Gannett, and a number of other publishers and trade groups have also signed.
The publishers involved are specifically advocating for the following:
Transparency as to the makeup of all training sets used to create AI models.
Consent of intellectual property rights holders to the use and copying of their content in training data and outputs.
Enabling media companies to collectively negotiate with AI model operators and developers regarding the terms of the operators’ access to and use of their intellectual property.
Requiring generative AI models and users to clearly, specifically, and consistently identify their outputs and interactions as including AI-generated content.
Requiring generative AI model providers to take steps to eliminate bias in and misinformation from their services.
The signatories say they are supportive of both industry action and government regulation to achieve these aims, though they clearly want regulation to play a significant role, specifically calling for legal frameworks to guide the use of AI in news.
Without action, the publishers and trade organisations which penned the letter say, generative AI risks undermining their core business models by distributing, essentially for free, content which they pour time and money into producing. “In addition to violating copyright law, the resulting impact is to meaningfully reduce media diversity and undermine the financial viability of companies to invest in media coverage,” says the letter.
The letter also emphasises the risk that AI-generated news stories, which for the time being at least are still prone to error, will reduce public trust in news reporting. This may happen either through the unintentional spread of misinformation via unchecked AI-generated news content, or through the intentional spread of disinformation. The ability for bad actors to quickly generate a flood of conflicting information on an ongoing news story “can distort facts and leave the public with no basis to discern what is true and what is made up,” says the letter.
A coordinated response
Publishers and broadcasters have been quick to recognise both the opportunities and the threats which are presented by the rapid development of generative AI. As such, many – including several of those represented by this letter – have already struck individual deals with AI companies, or begun conversations over remuneration for AI’s use of publisher content.
But as is often the case for the embattled news publishing industry, businesses have to tread a fine line between looking out for themselves and coordinating with the wider industry. Publishers are keen to strike individual deals with AI businesses to power content creation, or to find other ways to improve efficiency in their business models, to gain an edge over their competitors. But without coordinated action, the news industry risks ceding too much control to tech companies, which could be very harmful to the wider industry in the long run.