This article, authored by Lyndsey Jones, was first published by Women in News.
AI: how freaked out should we be?
AI could replace 300 million jobs.
Nearly 50 news websites are AI-generated.
Godfather of AI quits Google and warns of misinformation dangers.
All of these headlines – and more – have been published in the past few weeks. Disruption is undoubtedly here, and generative artificial intelligence (GAI) models, such as OpenAI’s ChatGPT, have unleashed much noise about their impact on the media industry – and how they may shape its future.
Accuracy and trust are at the core of these concerns, as is the impact on jobs.
The potential to transform most, if not all, roles is clear. Everyone from reporters and news editors to senior executives and back-office staff in HR, marketing and accounting is likely to be affected.
Many news groups already use some form of automation, such as AI-generated listicles or animated videos clearly labelled as automated content. Kuwait News went further recently with an AI-generated news presenter.
Undoubtedly, GAI tools will make newsrooms more efficient, streamlining workflows to shave hours off the working day and freeing up time for labour-intensive investigations and original content.
Journalists can and will also use GAI to brainstorm, storyboard and shape their thinking. They will need to be vigilant and do their own research before relying on these tools, to limit the risk of inaccurate reporting: the tools can not only make mistakes – known as “hallucinations” – but also carry biases, having largely been developed by a predominantly white, male, US-based workforce.
These practices will raise questions for news organisations about how to formulate company-wide policies and be transparent about them – quite apart from the broader issue of regulating AI globally.
News groups will need to consider the following:
transparency – so users can understand how AI is being used and how it works
ethics – AI should be deployed in ways that uphold ethical standards, such as accountability, fairness and privacy
data quality – review and monitor the quality of your data, which can be biased and, if left unchecked, can skew results
verification methods – fact-checking, source verification and monitoring need to be strengthened and ingrained in newsrooms
staff skill sets – training in AI skills should be paramount so editorial departments can understand and consider the impact on journalism
collaboration – data scientists and journalists should work together to ensure the best possible use of AI.
These issues are important because of the trust challenge that news organisations face. Trust in the news is on the decline – only 42% of people globally say they trust mainstream media, according to a Reuters Institute for the Study of Journalism survey.
Trust cuts across almost every theme in media innovation. It is about believing that something is high quality, honest, reliable and accurate.
One of the problems for newsrooms is that it is becoming easier to create deepfake photos that may even relate to real news events, such as the viral fake images of Donald Trump being arrested.
Reputable brands can all too easily publish these types of fakes: the Irish Times recently had to apologise for a hoax AI-generated story on women’s use of fake tan. Fakes can also go viral, such as the AI-generated photo of an explosion at the Pentagon that prompted a brief dip in the US stock market.
Fake images will not necessarily mean people are tricked more easily, but audiences could become more sceptical of all information, pushing journalism into a vulnerable position.
For news groups, being a trusted source of information is also part and parcel of the value proposition.
So what should media groups do as the danger of misinformation, disinformation and influence operations grows? They should set up safeguards such as:
establish guidelines on how to use generative AI and publish them on your website for transparency – see Wired or the Financial Times for examples
clearly label AI-generated content and rigorously source your material to build trust with users
strengthen fact-checking and monitoring desks, including checks on the origin of images
always keep a journalist in the AI newsroom workflow; the editor should remain ultimately responsible for all published content.
Finally, those news groups that survive the impact of GAI are likely to share similar traits: they will be prepared, flexible and able to adapt rapidly to change. They are also likely to have high levels of trust and responsibility.
These traits will help them navigate an AI world likely to see a further explosion of content creation in an already overcrowded market. Such groups should be able to work out their value proposition quickly and continue to differentiate themselves in a marketplace where users will be weighing whether they care enough to pay for content created by a person.
Lyndsey Jones, a transformation consultant and coach, is the author of Going Digital and a former executive editor of the FT. She works closely with WAN-IFRA Women in News’ Digital ABC programme.