JSF Socials Recap: The Impact of AI on Press Officers and Journalists

Every two months, JSF organizes informal gatherings called “JSF Socials,” during which two experts are invited to talk about topics selected by the JSF community. The main aims are to spark discussion and to build community among both Japanese and international science communicators working at Japanese universities and other institutions.

The JSF Social on September 21, 2023 discussed the recent rise of artificial intelligence (AI). Tools like ChatGPT have sparked global discussion on AI’s role in our daily lives. AI can be a great assistant to science communicators by taking away a significant amount of preparatory work. However, AI can also be a culprit in generating and spreading misinformation. We need to remain critical of AI’s strengths and limitations, and eventually, we will require AI tools to combat AI.

JSF invited two experts, Sabine Louët and Daisuke Furuta, to offer their perspectives.

Sabine Louët has over 25 years of experience in journalism and science communication, and has recently launched a platform called ScioWire that delivers AI-produced summaries of the latest research papers. Daisuke Furuta is a journalist and editor-in-chief at the Japan Fact-Check Center. Together, they discussed how AI can be made more accessible, but also what AI contributes to so-called “information disorder.”

Sabine started her talk by stating that, before you can effectively communicate a scientific message, the content behind the message needs to be summarized. The problem in today’s society is that people lack the time to fully read every scientific article released in their field. That’s where Sabine’s AI tool ScioWire comes in. Fed specific keywords and filters, ScioWire finds the most recent open-access papers and automatically creates contextualized summaries answering questions such as: Who did the work? Where and when did they publish? What did they find, and how? What comes next? In addition, the tool spells out acronyms and provides lay definitions of technical terms. Unlike large language models such as ChatGPT, ScioWire relies on natural language processing that extracts key points directly from the text of a study, rather than generating new wording.
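The extractive approach Sabine contrasts with generative models can be illustrated with a minimal sketch. This is not ScioWire’s actual method, just a classic frequency-based extractive summarizer: score each sentence by how often its words occur in the whole text, then keep the top-scoring sentences in their original order.

```python
import re
from collections import Counter

def extractive_summary(text: str, n_sentences: int = 2) -> str:
    """Return the n_sentences highest-scoring sentences, in original order.

    A sentence's score is the summed document-wide frequency of its words,
    so sentences built from the text's most common words rank highest.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    # Score each sentence by the summed frequency of its words.
    scores = [
        (i, sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())))
        for i, s in enumerate(sentences)
    ]
    # Pick the top-scoring sentences, then restore document order.
    top = sorted(sorted(scores, key=lambda x: -x[1])[:n_sentences])
    return " ".join(sentences[i] for i, _ in top)
```

Because every sentence in the output is copied verbatim from the source, an extractive summarizer cannot hallucinate claims the way a generative model can, which is the reliability argument made above.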

Sabine’s key message, however, became clear during the discussion section of her talk, where questions focused on the tool's properties: can it target a particular audience? Can it handle multidisciplinary studies? Is it limited to certain fields? Sabine’s answer was that ScioWire can significantly reduce the amount of content people have to process to keep up with their field. But it is only a summarizer, and while other AI tools can be used alongside ScioWire, AI cannot do the whole job for you. Sabine emphasized that we have to understand these tools' limitations to make proper use of them and spread accurate information rather than misinformation.

This final statement led neatly into Daisuke's talk. Since October 2022, Daisuke has been editor-in-chief at the Japan Fact-Check Center, which publishes more than 10 fact checks and short videos every month on Yahoo, Facebook, Twitter, Instagram, and TikTok. They tackle current topics like the COVID-19 vaccine, the war in Ukraine, and most recently, the release of water from the Fukushima nuclear power station.

His work is becoming increasingly important, because disinformation is spreading in Japan. The most famous domestic example concerns a 2022 typhoon: images appeared to show a flooded Shizuoka, but they had been created by AI. News organizations quickly recognized the images as fake, but that is not always the case: Japanese media reported on a supposed bombing near the Pentagon in the USA that never happened. Fortunately, AI is not yet very good at fine detail. Unrealistic elements in images, combined with fact-checking by mainstream media, still let us distinguish what’s real from what’s fake. But the field is developing, and we need a solution before quantity becomes a problem: AI can work 24/7, whereas human fact-checkers cannot.

The discussion also highlighted the human element of the problem: what if the audience doesn’t trust mainstream media to do the fact-checking, and how can you discuss controversial topics with someone who is convinced that misinformation is true? These questions have no easy answers, but they matter when considering the impact of AI misinformation. Studies have found that over 20% of people in Japan believe in conspiracy theories, such as the theory that a very small group of people controls the entire world. Daisuke explained that scientific and media literacy in Japan is low compared to other countries, meaning that fewer people know how to recognize misinformation.

Misinformation is not a recent phenomenon. A well-known example followed the 1923 Great Kanto Earthquake, when a rumor that Koreans were rioting in the aftermath and worsening the casualties led to the massacre of Koreans in Japan. This shows why we should care: left unchecked for long enough, fake news can harden into accepted fact, with detrimental effects. Daisuke argued that the sheer scale of the AI problem can only be solved with AI. This requires cooperation from science communicators, policy makers, and the public alike: everyone needs good scientific and media literacy. That leaves an important role for JSF as a forum for collaboration!

Taken together, both talks showed that AI has huge potential for scaling up the work of science communicators by cutting down on preparation time. But we must make sure that the right points are considered and the right research is included. With this and the growing amount of disinformation in mind, Sabine and Daisuke both concluded that, ultimately, we need AI tools to counter AI disinformation. Only then can we benefit from AI's potential to improve our lives.