Deutsche Welle turns 70 this year. Since the beginning, we have followed a clear mission to provide trusted, quality, independent journalism to our audiences worldwide.
The ways we have done that have changed over the decades. We have constantly evolved. From radio to television to the internet to social media, we have adapted the way we gather information and produce our journalism to help us reach you.
However, our high journalistic standards have always remained the same.
Now we are all standing in the middle of the next big change. Generative AI is a disruptive force. Many industries will experience, or are already experiencing, significant changes, and journalism is no exception.
Transparency is one of the most important pillars of our standards, so I would like to share with you our thinking on generative AI in our journalism.
Journalists in control
DW is firmly committed to journalism that is produced by people. We do not see generative AI as a means to replace the work done by our journalists and editors. That means if you are reading, watching or listening to a piece of DW's journalism, you can be sure DW journalists are responsible for it.
Our journalists guarantee our quality. Our commitment to independence, diligence, transparency, respect, diversity of opinion and diversity applies no matter what.
At the same time, we are keen to investigate how AI tools can support our journalism.
DW already uses AI-based tools to aid our journalists in their work. For example, these tools help us to analyze large data sets for stories and to adapt our pieces from one language to another. We can use AI to improve the search engine performance of our stories or support social media colleagues in quickly identifying hate speech. AI-generated subtitles on videos make them accessible to a much greater number of people. We will look for ways to build on these advances.
However, our journalists will continue to control all applications and thoroughly review anything before publication. We will always be open and transparent about the ways in which AI is used.
AI chatbots, such as ChatGPT, cannot be relied upon as accurate sources of information. They can provide inspiration but not reliable information. They make mistakes and are not transparent about the sources of their information. We will verify any information we know comes from a chatbot in the same way we verify information from other sources. We will not use AI chatbots directly as sources.
We will also expand our fact-checking capacity. Generative AI makes it easier to produce and spread disinformation around the world. It is our job as journalists to expose this disinformation.
AI-generated images
We also see no value in publishing photorealistic images generated by AI. This means we will not generate this type of image to use in our work. If we need to report on AI-generated images that others have produced, we will make it explicitly clear that the images are not real.
However, we may use generative AI to create or improve illustrations or data visualizations.
Challenges and opportunities
We know there are dangers to generative AI. We know that it may well help spread disinformation on a much larger scale than before. We know that the datasets these systems are built on reflect biases that exist in society. We will continuously train our journalists to recognize these biases. We will also take data security and privacy extremely seriously when using AI products and services.
The sensible use of AI can also be an opportunity. It can support our journalists in standard tasks and give them more time to get away from their desks to explore and tell the stories that matter, talk to people on the ground and get different perspectives. These are the stories that our users value and that machines can never deliver.
Original, exclusive and at the heart of the issues people care about — with, of course, the same quality that DW has provided for the past seven decades.