AI in Fact-Checking: Empowering Journalists for Accuracy and Efficiency


The rise of artificial intelligence (AI) in journalism has sparked skepticism, but some journalists argue that responsible and collaborative use of the technology can empower fact-checkers to enhance their efficiency and accuracy.

The key, according to Nikita Roy, a Knight Fellow at the International Center for Journalists, is to limit generative AI's role in journalism. “Fact-checkers should use the tools for ‘language tasks’ like drafting headlines or translating stories, not ‘knowledge tasks’ like answering Google-style questions that rely on the training data of the AI model,” Roy explained during a panel at GlobalFact 11, an annual fact-checking summit hosted by Poynter’s International Fact-Checking Network.

Roy emphasized the importance of journalists understanding AI's impact on information ecosystems. “If we use it responsibly and ethically, it has the potential to streamline workflows and enhance productivity. Every single minute misinformation is spread online, and we delay in getting our fact checks out; that’s another second that the information landscape is being polluted,” she added.

Transforming the Information Landscape

Generative AI presents a paradox: it makes it easier for bad actors to spread misinformation, but it also allows fact-checkers to build tools without specialized skills. Rubén Míguez Pérez, chief technology officer at Newtral, highlighted this shift. “Two years ago, building a claim detection system would require the help of a data scientist. But today, anyone can build one by providing a prompt to ChatGPT,” Míguez Pérez said at GlobalFact 11.
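To illustrate the shift Míguez Pérez describes, a prompt-based claim detector can be as simple as a template plus a parser for the model's reply. This is a minimal sketch, not Newtral's actual system: the prompt wording and the `parse_claims` helper are illustrative, and a real tool would send `build_prompt(text)` to a chat-completion API.

```python
# A minimal sketch of prompt-based claim detection. The prompt asks the
# model to list check-worthy claims one per line with a "CLAIM:" prefix,
# which makes the reply trivial to parse.

PROMPT_TEMPLATE = (
    "You are a fact-checking assistant. List every check-worthy factual "
    "claim in the text below, one per line, prefixed with 'CLAIM:'. "
    "Ignore opinions and predictions.\n\nText: {text}"
)

def build_prompt(text: str) -> str:
    """Fill the claim-detection prompt with the article text."""
    return PROMPT_TEMPLATE.format(text=text)

def parse_claims(model_reply: str) -> list[str]:
    """Extract claims from the model's line-based reply."""
    return [
        line.removeprefix("CLAIM:").strip()
        for line in model_reply.splitlines()
        if line.startswith("CLAIM:")
    ]

# Example with a canned model reply (a real system would call an API here):
reply = "CLAIM: Unemployment fell 3% last year.\nNote: the rest is opinion."
print(parse_claims(reply))  # → ['Unemployment fell 3% last year.']
```

The point of the structured prefix is that the fragile part — judging which sentences are check-worthy — moves into the prompt, while the surrounding code stays a few lines of string handling that any newsroom can maintain.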

This democratization has led to the development of tools that support early research. A 2023 survey of International Fact-Checking Network signatories revealed that over half of the respondents use generative AI to identify claims to fact-check, as noted by Andrew Dudfield, head of AI at Full Fact. However, Dudfield cautioned that AI cannot replace human judgment in fact-checking. Reviewing past fact-checks, he found that most required consulting experts and cross-referencing sources—tasks that generative AI, reliant on existing knowledge, cannot handle effectively.

Other applications of AI in journalism include summarizing documents, extracting information from multimedia, and creating accessible content like alt text for images. Roy noted that these tools can save time and broaden the reach of fact-checked information.

The Risks of AI Hallucinations

One of the most significant concerns about generative AI is the risk of “hallucinations”—when the technology generates inaccurate or nonsensical information. To address this, Rakesh Dubbudu, founder and CEO of Factly Media & Research, recommended curating datasets for AI tools. His team, for example, built a large language model using a database of Indian government press releases, reducing inaccuracies and copyright issues. “A lot of news agencies today have hundreds of thousands of articles. Converting those knowledge sources into chatbots is a great starting point because that is your own proprietary data,” Dubbudu explained at the summit.
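The grounding approach Dubbudu describes — answering only from your own curated corpus rather than the model's training data — can be sketched with a toy retriever. Production systems use embeddings and a language model; here a keyword-overlap lookup stands in for retrieval, and the example press releases are hypothetical, not Factly's data.

```python
# Toy sketch of retrieval over a proprietary corpus: given a question,
# return the curated document that best matches it, so any downstream
# answer is grounded in the organization's own data.

def tokenize(text: str) -> set[str]:
    """Lowercase a text and split it into a set of words."""
    return set(text.lower().split())

def retrieve(query: str, corpus: list[str]) -> str:
    """Return the document sharing the most words with the query."""
    return max(corpus, key=lambda doc: len(tokenize(query) & tokenize(doc)))

press_releases = [  # hypothetical stand-ins for proprietary source documents
    "Ministry of Finance: fiscal deficit target set at 5.1 percent for 2024.",
    "Ministry of Health: 12 new medical colleges approved this year.",
]

print(retrieve("what is the fiscal deficit target", press_releases))
# → the Ministry of Finance press release
```

Because the retriever can only surface documents that exist in the curated corpus, hallucination risk shifts from "the model invented a fact" to "the corpus lacked a document," which is far easier for a fact-checking team to audit.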

The consequences of neglecting these precautions are significant. Cristina Tardáguila, founder of Lupa, warned that poorly managed AI tools could lead to misuse by bad actors, as seen with a fact-checking chatbot likely run by a programmer from Russia, a country where journalism is heavily restricted.

How Fact-Checkers Are Leveraging AI

Many fact-checking organizations are experimenting with AI tools to improve the speed and accuracy of their work. For example, Faktisk Verifiserbar, a Norwegian fact-checking cooperative, uses AI-based platforms like GeoSpy to geolocate images and videos. This tool matches unique features from photos against geographic data, helping verify content from conflict zones, such as during the ongoing Gaza conflict.

They also use tools like Tank Classifier to identify military vehicles and language detection software to analyze audio and video content. According to Henrik Brattli Vold, a senior adviser at Faktisk Verifiserbar, AI tools such as ChatGPT enable faster prototyping of interactive maps for visualizing investigations. Despite its usefulness, ChatGPT occasionally produces errors, requiring manual verification of every data point.

In Georgia, MythDetector, a fact-checking platform, is experimenting with AI to flag harmful content. They use a matching mechanism where labeled false information is cross-referenced to identify similar content. However, Editor-in-Chief Tamar Kintsurashvili notes that AI tools struggle with the Georgian language, leading to mismatches and inefficiencies.
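The matching mechanism MythDetector describes — cross-referencing new posts against previously labeled false information — can be sketched with a word-overlap (Jaccard) similarity check. This is an illustrative stand-in, not MythDetector's implementation; the 0.5 threshold is an arbitrary example, and it is exactly this kind of surface matching that breaks down in low-resource languages like Georgian.

```python
# Sketch of flagging content by similarity to already-labeled false claims.

def jaccard(a: str, b: str) -> float:
    """Word-set overlap between two texts, from 0.0 to 1.0."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def flag_matches(post: str, labeled_false: list[str],
                 threshold: float = 0.5) -> list[str]:
    """Return previously labeled claims that closely match the new post."""
    return [c for c in labeled_false if jaccard(post, c) >= threshold]

known_false = ["vaccines contain tracking microchips"]
print(flag_matches("vaccines contain secret tracking microchips", known_false))
# → ['vaccines contain tracking microchips']
```

A word-overlap score treats every token as equally meaningful, so in a morphologically rich language where the same word takes many inflected forms, genuine matches score low and unrelated posts can score high — the mismatches Kintsurashvili reports.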

Challenges in Non-Western Markets

Fact-checkers in countries like Ghana face significant hurdles in implementing AI solutions. GhanaFact, a verification platform, is cautious about applying AI due to its inability to grasp local nuances. Rabiu Alhassan, GhanaFact's Managing Editor, points out that most AI tools are trained using Western data, which undermines their accuracy in African contexts. A study by Rest of World confirmed this, revealing biases in AI models and poor performance in languages like Swahili and Bengali.

Another challenge is the linguistic diversity within countries. In Ghana, where over 100 local languages are spoken, English-trained AI tools often fall short. Alhassan emphasizes the need for tools tailored to the unique linguistic and cultural contexts of the Global South.

Similarly, in Georgia, the lack of support for minority languages like Armenian and Azerbaijani complicates fact-checking efforts. AI companies have yet to prioritize these smaller language models, despite their growing significance in combating misinformation during elections.

The Role of Big Tech in Combating Disinformation

Image: Some experts believe that AI-generated images might create a new crisis for fact-checkers.

As misinformation surges in election years, the role of Big Tech platforms becomes critical. However, companies like Meta and OpenAI have been criticized for neglecting smaller markets. Meta’s planned shutdown of CrowdTangle, a tool used to track content on social platforms, has drawn backlash from fact-checkers like MythDetector, who rely on it to monitor digital narratives.

The situation is particularly dire in Africa, where content moderation remains insufficient. A report by Jacobin highlights how underpaid moderators in the region struggle to keep up with the growing tide of AI-generated misinformation. This lack of investment in non-Western markets underscores the disparities in Big Tech's global approach to disinformation.

The Path Forward: Local Solutions and Global Collaboration

To address these challenges, a multifaceted approach is required. Fact-checking organizations in smaller markets are beginning to develop localized solutions. GhanaFact, for instance, is creating guidelines to incorporate AI responsibly while considering the region's unique needs. Similarly, collaborations between universities and newsrooms, like those in Norway, demonstrate the potential of combining academic expertise with journalistic integrity.

Global collaboration is also essential. As researchers like Dr. Scott Timcke and Chinmayi Arun have emphasized, AI tools must be developed with greater sensitivity to the cultural and linguistic contexts of the Global South. Investment in training models for minority languages and underrepresented regions could bridge the gap between technological capabilities and real-world needs.

A Call for Collaboration

Tardáguila urged journalists to collaborate in building reliable AI tools, citing the success of the Coronavirus Facts Alliance. “When dealing with this new, yet to grow, devil of AI, we need to be together,” she said. “We have to have a real group, a tiny community, a representative group that is leading the conversation with tech companies.”

This sentiment echoes warnings from a Nieman Lab article about the risks of relying too heavily on AI for verification. While AI systems have shown promise in detecting manipulated media and fake news, their inability to grasp context and nuance can lead to false negatives or positives. Journalists must therefore remain vigilant, ensuring that AI complements rather than replaces their expertise.

AI’s Double-Edged Impact

Generative AI has undeniably reshaped the information ecosystem, offering both unprecedented risks and opportunities. While it can amplify disinformation campaigns, it also provides fact-checkers with powerful tools to debunk falsehoods. The key lies in addressing the disparities in AI’s accessibility and effectiveness across different regions.

As the world becomes increasingly interconnected, the fight against disinformation requires a concerted effort from fact-checkers, technology companies, and policymakers. By prioritizing inclusivity and investing in underrepresented markets, AI can become a force for truth rather than a tool for deception.
