WAN-IFRA participated in the working group convened by RSF to draw up an ethical charter to guide the use of AI in journalism, which is published today. The process was initiated in response to the disruptive threat that AI posed to journalism and the news and information arena.
After participating constructively in the discussions, WAN-IFRA has elected not to endorse the final version of the Charter due to specific concerns from publishers.
Meanwhile, WAN-IFRA has reconfirmed its support for the Global Principles for Artificial Intelligence, a standard agreed upon by news media publishers from across the globe that sets out effective safeguards for the interests of content creators, publishers, and consumers alike.
The Paris Charter is a ten-point framework to guide the use of AI in journalism by media outlets and media professionals across the globe. Broadly, these points are:
Journalism ethics guide the way media outlets and journalists use technology.
Media outlets prioritize human agency.
AI systems used in journalism undergo prior, independent evaluation.
Media outlets are always accountable for the content they publish.
Media outlets maintain transparency in their use of AI systems.
Media outlets ensure content origin and traceability.
Journalism draws a clear line between authentic and synthetic content.
AI-driven content personalization and recommendation upholds diversity and the integrity of information.
Journalists, media outlets and journalism support groups engage in the governance of AI.
Journalism upholds its ethical and economic foundation in engagements with AI organizations.
“These points, arrived at through consultation with representative organisations, are generally welcomed as an initial framework for news organisations and journalists to consider how they engage with and use this rapidly evolving technology,” said WAN-IFRA CEO Vincent Peyrègne.
WAN-IFRA’s principal concern is point 3 – the call for AI systems used in journalism to undergo prior, independent evaluation.
“Our intention in joining the working group was to ensure the broadest possible adoption and the best possible chances of success, in particular by counting on the support of news media companies, which is essential if this Charter is to become a benchmark for the profession, journalists and publishers included. With this in mind, it is regrettable that we are unable to endorse this version, particularly with regard to the unrealistic prospects of section 3,” commented Peyrègne.
The section states that “The AI systems used by the media and journalists should undergo an independent, comprehensive, and thorough evaluation involving journalism support groups. This evaluation must robustly demonstrate adherence to the core values of journalistic ethics.”
“The decision about what AI systems are adopted in a news media company needs to stay with the company itself and should not be dependent on an external evaluation,” said Peyrègne. Companies will draw up their own guidelines for the use of AI, based on their own criteria established in coordination with their teams – including editorial. Their choices will also be based on commercial factors. We believe publishers, who hold the ultimate legal responsibility for what they publish, need to create their own safeguards and editorial standards. It is not realistic, nor desirable, that these decisions be outsourced to an undetermined external body.
Martha Ramos, President of the World Editors Forum, who represented WAN-IFRA in the working group, said that a prior evaluation of AI tools seemed unrealistic given the pace at which AI is evolving, and that it risked interfering with the internal policies of editorial companies.