Global Voices and AI in the newsroom

As artificial intelligence programs become increasingly accessible online, we have decided to weigh in and clarify where Global Voices stands on incorporating AI-generated material into our work. Our views are ever-evolving, and we, like many newsrooms and civil society spaces, are still working out how to use this technology fairly and ethically.

We recognize that this is a particularly polarizing and controversial topic. We remain committed to keeping our human-centered newsroom just that: human. At the same time, we believe that some AI tools might make our work more efficient and help us engage with our readers in meaningful ways. Here are some of our thoughts on employing AI technology in the newsroom:

Writing: 

We do not accept stories written by AI at Global Voices, and our Regional Editors need to explain this to their teams. However, certain AI tools, such as ChatGPT, Bard, or DeepAI, can complement the writing and research process and may help speed it up. For instance, AI can be used to:

  • Brainstorm headlines and alternative phrasings.
  • Check whether a piece of writing contains plagiarism or AI-generated text.
  • Find reliable resources for fact-checking.

Translation:

It is okay to use AI for translation, including for short quotes, provided a human proofreads the final version: AI often fails to recognize words with multiple meanings and still makes significant mistakes. We will not accept text copied and pasted from Google Translate or DeepL without additional human editing and proofreading.

Images:

While we are currently okay with accepting some AI images, two basic issues complicate our decision in this area: first, how these images are generated, and second, whether we can rely on them being original rather than accidentally copied from somewhere else by the AI. Among the most popular AI image generators are DALL-E 2 and Midjourney. Currently, neither of these tools cites where it gets its original source art, which could prove complicated if they are inadvertently taking material from artists.

While we as a newsroom are still assessing the practice of using AI-generated images, we tentatively believe it is okay to use AI-made images that remain abstract or artistic (geometric figures, collages, clearly fictionalized dystopian imagery), as long as we mention the source in the caption.

We should not use realistic AI images that might be mistaken for ‘real images of real people or events.’

Videos:

Unless we are writing about an altered or manipulated video, we should not include any videos that have been manipulated by AI. This is essential to maintain our credibility and trustworthiness. If we do include a manipulated video — in cases where it is relevant and essential for a story — this must be clearly noted, and the manipulations must be described and pointed out in the text.

At Global Voices, we plan to continue monitoring how this technology changes and evolves, and we will update our stance and procedures for dealing with AI accordingly. For more resources on the impact, ethics, and open questions of AI in the newsroom, see this ongoing discussion series from Reporters Without Borders (RSF) and these guidelines released by The Associated Press.
