Since the explosive rise of generative AI tools such as ChatGPT and Midjourney, more and more news publishers have started to wonder about the possible impact that the new generation of AI could have on their companies and on the news business as a whole.
To take stock of the situation, we caught up with Ezra Eeman, Change Director at the Belgian media company Mediahuis, who in his newsletter Wayfinder has closely followed the emergence of generative AI and its adoption in the news industry.
We spoke to Eeman about how news publishers can get started with AI, how management roles will change with more AI adoption, and why the chat-based interfaces that AI tools use might threaten the current journalism business model. The following interview has been edited for clarity and length.
WAN-IFRA: When did you realise that AI would have a huge impact on the news industry?
Ezra Eeman: We had already been working with AI at Mediahuis, but it was the announcement of Midjourney version 4 and ChatGPT becoming publicly available [in November 2022] that changed things. Suddenly anyone could use these tools in an accessible and user-friendly way, and it became obvious that this is not just for data scientists and people who work on the technical back ends.
There’s a range of AI tools for various purposes, and it can all feel a bit overwhelming. Do you have a recommendation for how a curious news publisher can get started with AI?
I would start by thinking about the usual journalism workflow, from planning and sourcing to publishing and beyond. If you want to try AI, go where the opportunity is, where it can take away work that’s currently taking time and energy that you want to invest somewhere else. Like transcriptions, like adding metadata, or adapting content from one format to another. How many different social media formats do you have to create? How many different headlines do you need to generate? These are some of the tasks where AI can be a good assistant, and you can still have the final edit to decide what gets published.
At Mediahuis, in the discussions with our newsrooms, there’s relative openness on anything that’s in the beginning stages of the publishing process, like helping journalists transcribe or produce things. Or at the end, with adapting the content. But not so much for the creation phase, like inserting generated text and articles on our platforms. That’s where newsrooms feel a certain unease, which is easy to understand, of course.
See also: ChatGPT prompts: a real, non-AI generated handbook for journalists
What about the broader, strategic level? What should news media leaders and managers keep in mind regarding AI?
I think for a news media leader, it’s the value chain they should look at. Where in the value chain will AI be a disruptor? It will certainly be a disruptor in distribution, and in search, and in production as well.
If anybody can create anything, which is essentially the promise of “text-to-anything” tools, it also means that everyone becomes your competitor. There is already a maker economy that can create a lot of stuff, and now it has even more powerful tools to make even more content. That means the volume of content will explode.
As a news publisher, you might drown in that huge volume that’s being generated. So you have to ask yourself, why would people still come to your destination? Why is your news still distinctive? How do you differentiate yourself from everything that can be produced by AI?
Secondly, the whole discovery interface might change. The search-based internet, which is a huge driver of traffic to destinations like websites and apps, might completely change if you don’t have to search anything anymore. If you see the answers in a chat interface, why would you need to click on a link?
The link-based internet is really what powers a lot of publishers and the whole online ad system, and currently publishers invest in their own destinations in order to capture the value (audience data & monetisation). But if search as a traffic source is disrupted, it will become much harder to get that traffic to their own destinations. Especially as ChatGPT allows for plugins that can pull content into the chat interface, rather than you having to go somewhere else.
So if that’s where news content is, you wouldn’t really need to exist as an independent publisher anymore?
Yes, you would become a news provider in ChatGPT, if you play that game. Like when we previously became news providers in the social media environment, and then at some point we all turned away from Facebook and wanted to have our own apps, our own platforms.
Now we have a new interface where we have to make the same choice. Do we want to be in those new environments like ChatGPT and Bing, or any other upcoming search or conversation engine? Or do we still try to bring people to our destinations? And how do we do that if search doesn’t work anymore?
That’s the kind of thing that I would have on my radar as a media executive: how do we position ourselves in that new value chain? Do we partner with OpenAI to build a plug-in? Or do we fight them with regulation and rules? Do we make deals with them about the amount of content that they can cite, or how it can be cited? That’s the kind of strategic thinking that executives need to look at in this new space.
See also: AI companies crave credibility but it doesn’t come for free
So on one hand, AI tools can help journalists with their work, but on the other they might threaten the business model of journalism?
I think they create a new business model. And the question is, who are you in that new business model? Are you just a provider of content? Or do you have other powers as well?
It’s a different game whether you control the whole value chain end-to-end or are just one provider within it. The more you are in control, the better you can monetise. If you’re just a provider, that is a weak position to be in.
Do you expect some newsroom roles to change with more AI adoption? If so, what could that look like?
I think there will be a need for a management layer that understands the impact and implications of AI and can quickly decide how AI will be used on a day-to-day basis. I think generally in the news industry, there will be guidelines that make sure humans stay responsible. Meaning, you cannot blame the machine for bad content, but humans stay editorially responsible.
That means if you use AI, you should be able to say where it went wrong if it makes mistakes. You should be able to explain it and be transparent. That requires a level of understanding, and if you don’t understand it, you cannot explain it.
On a newsroom level, you will need people who have at least a basic understanding of the concepts of AI and can work with it. But I’m less worried about that level because I think the usability of these tools is quite good.
And behind the scenes you will need more people who are able to layer proprietary data on top of large language models. For example, how do we unlock our archives with a large language model? How do we use a chat-based discovery on our own websites and apps?
Look at Bloomberg: they just released BloombergGPT, which is built on top of their financial data. That’s a very strong way of using AI in your own environment. You use data nobody else has to create new value that only you can offer.
For smaller publishers, that seems like a hard model to follow.
Yes, if you want to do something similar to a worldwide player like Bloomberg. But even for a local newsroom, it is not hard to build a custom GPT. There are plenty of tools that let you infuse your own data, and you need a webmaster with a bit of tech-savviness who knows how to connect certain data. If you have a database of your articles and you can connect it to a GPT, you can make it searchable and conversational.
I don’t consider myself a super technical person, I can code a bit but wouldn’t be able to make an app, for example. But nowadays, with the right kinds of apps and tools, I can make my own GPT model. I have an API key from OpenAI, and I can use Zapier to connect it with certain databases.
It’s like intelligent Lego and using the right bricks. I don’t feel like I need to be a programmer, I just need to understand how to work with Lego.
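The “connect a database of articles to a GPT” idea Eeman describes can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration: the toy archive, the keyword-overlap retrieval, and the prompt template are all invented for this example. A real setup would send the assembled prompt to a chat model via an API (for instance with an OpenAI key, as Eeman mentions), or wire the pieces together with a tool like Zapier.

```python
# Minimal sketch: make a small article archive "conversational" by
# retrieving relevant articles and assembling a grounded prompt.
# The archive and the scoring are toy examples, not a production setup.

ARCHIVE = [
    {"headline": "City council approves new bike lanes",
     "body": "The council voted 7-2 to add bike lanes on Main Street."},
    {"headline": "Local bakery wins national award",
     "body": "Smith's Bakery took first prize for its sourdough."},
]

def retrieve(question: str, archive: list[dict], top_k: int = 1) -> list[dict]:
    """Rank articles by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    def score(article: dict) -> int:
        text = (article["headline"] + " " + article["body"]).lower()
        return sum(1 for w in q_words if w in text)
    return sorted(archive, key=score, reverse=True)[:top_k]

def build_prompt(question: str, archive: list[dict]) -> str:
    """Assemble a prompt that asks the model to answer from the archive only."""
    context = "\n".join(
        f"- {a['headline']}: {a['body']}" for a in retrieve(question, archive)
    )
    return (f"Answer using only these articles:\n{context}\n\n"
            f"Question: {question}")

prompt = build_prompt("What did the council decide about bike lanes?", ARCHIVE)
print(prompt)
# In a real setup, `prompt` would now be sent to a chat model via an API call.
```

In practice a newsroom would swap the keyword overlap for embedding-based search, but the shape of the “Lego bricks” stays the same: retrieve, assemble, ask.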
What’s your feeling on how newsrooms view AI tools? Is there resistance to using them?
No, mostly curiosity. You can take away a lot of repetitive work with AI, and most newsrooms are very curious about that potential. Especially if it can give more room to journalism and less to digital grunt work. So I think there’s openness and even an invitation to have more AI in the newsroom.
As for the editorial part, like the idea of AI choosing what kinds of stories we write and how we write them, there’s a critical stance, a clear “No, that’s not where we want to go.” But I also think it’s a very healthy reflex to set boundaries.
Some newsrooms have set up guidelines or policies on the use of AI tools. What’s your view on that?
I think it’s crucial. At Mediahuis, with the new wave of AI, the first thing we said was, “We need to have guidelines in place.” Because the landscape is evolving so quickly, and we cannot track everybody [in the company] who wants to try an AI tool. We’d rather have clear guidelines and a framework that, if in doubt, anyone can just check.
These guidelines include things like, there’s always a human in the loop. Whatever you put out, the editor-in-chief is still responsible. And if you make mistakes, you cannot blame the machine. We also have rules about not using images where we’re not sure about the original source, so that we don’t violate IP.
Amid all the recent news, do you think there is an aspect of AI that hasn’t received enough attention?
I think there’s not enough strategic thinking about the impact AI will have on the news business model, how it might radically change how we make and distribute news. There needs to be more strategic, almost “Business School 101” level thinking. What happens to the news business model once you start infusing AI?
I also wonder what happens if a “rogue state” AI actor appears. At the moment there are the big players, and we’re tracking and monitoring them carefully. But more and more AI models are being forked by others, and I fear we’re not keeping an eye on what some hackers might do with this.
These language models can also code, so if I want to make a very nasty virus that attacks news websites, I could just ask that language model to code me new viruses. I could also give it the task to change the code a little bit every time the virus gets detected. There’s a part of the internet that always tries these kinds of experiments, and I expect those challenges will come up.
What’s the biggest danger related to AI in the newsroom?
That we lose control, and then we also lose trust. Without trust, the news business is out of business. If what you bring from a brand perspective is not trusted more than other content that floats out there, you have nothing you can bank on, and your business model is out of the window.
I think the biggest risk is that we rush into AI too fast and without thinking. That doesn’t mean we shouldn’t use it, but we have to do it in a very careful and transparent way. Because if people start to distrust us because we use AI badly, I think that damage will be irreversible.
… and the biggest opportunity?
There are so many opportunities. Just today I heard about a new start-up, Blockade Labs, that makes immersive environments based on text prompts. You write, “Planet that looks like Mars,” and it generates a 3D environment. It’s mainly for gaming, and you can then use it in Unity, which is a game engine, so you can put a character in there that can walk around.
That could be an example of a news game, or giving people an immersive experience. I would work with NASA and say, “Let’s take all the footage that you have from Mars and make a 3D space where people can walk around.” There’s also plenty of historical moments where you might want to walk around, and we could do that with archive footage.
It might even make the Metaverse a kind of reality! It’s old news by now, but I think the Metaverse might come back, though maybe we’ll call it the AI-verse, because you will be able to prompt the Metaverse environment.
Has AI changed the way you work yourself?
I’m a big fan of the Steve Jobs quote, “Computers are like a bicycle for our minds.” You can ride faster with them. I use AI that way as well.
I really like ping-ponging ideas with ChatGPT, and I do a lot of Midjourneys as well. I do a random prompt every day, just to see what I get, which is sometimes complete gibberish. I like that dialogue that takes my input and structures it in a different way. It’s like a co-pilot, or a sparring partner, where you are still the owner of the idea and determine where you want to take it.
Sometimes you want to bounce a ball against the wall, but this time the wall gives you more than just a bounce back. It might bounce back 10 different balls in different colours.
The post ‘There’s not enough strategic thinking about the impact AI will have on business models’ appeared first on WAN-IFRA.