32 Dullah Omar Lane, Durban

Mitigating AI-driven disinformation during an electoral year

Disinformation refers to false or misleading information that is spread deliberately to deceive and cause harm. In an election context, it typically involves the intentional spread of false information to undermine political adversaries, manipulate the voting process, or alter perceptions of the political landscape. It is just one of several strategies employed to manipulate electoral outcomes. In recent years, concern has grown about the capacity of disinformation to influence both the conduct and the outcomes of elections, and its growing influence is one of several factors straining democracies globally.

In South Africa, for example, there is now widespread acknowledgment that disinformation can play a significant role in shaping the results of an election. Electoral Commission Chairperson Mosotho Moepya has said, “The dissemination of disinformation has huge potential to undermine the fairness and credibility of elections.” This led the Electoral Commission and Media Monitoring Africa (MMA) to join hands with major social media platforms to fight the spread of disinformation during and beyond the 1 November 2021 municipal elections. With the national election approaching later this year, intensified political campaigning is likely to bring a notable surge in disinformation, misinformation, and fake news. This trend is exacerbated by the advancing capabilities of artificial intelligence (AI) tools, which make it increasingly difficult to distinguish truth from falsehood. According to Carina van Wyk, head of education and training at Africa Check, “AI-generated disinformation is becoming more sophisticated and is increasingly difficult to spot”. Safeguarding people from AI-driven disinformation therefore requires a collaborative effort involving governments, tech companies, civil society, and the public, combining technological solutions, regulatory frameworks, educational initiatives, and international cooperation to address the challenges posed by disinformation during electoral periods. These strategies are discussed below.

Strategies that can be employed to combat AI-driven disinformation

This section discusses some of the strategies that can be employed to combat AI-driven disinformation.

  1. Collaboration and Information Sharing: Tech companies, governments, and civil society should collaborate to share information and insights on emerging AI-driven disinformation tactics. Through this collaboration, tech companies can develop and deploy advanced detection algorithms that identify patterns and characteristics associated with AI-generated content. These algorithms should be regularly audited and evaluated for biases and unintended consequences. Governments can also work closely with major social media and tech platforms to develop and enforce policies against the dissemination of false information, and should encourage platforms to enhance AI-driven tools for detecting and removing disinformation. A collective effort is crucial for staying ahead of evolving threats.
  2. Fact-Checking: The government can invest in robust fact-checking mechanisms to quickly identify and counter false information, and can collaborate with independent fact-checking organizations to verify the accuracy of information circulating online. Human fact-checkers provide context, critical analysis, and nuanced understanding that AI algorithms may lack.
  3. Media Literacy Education and Educational Initiatives for Politicians: The government should promote media literacy education to enhance the public’s ability to critically evaluate information sources. This includes educating individuals on how to discern between credible and misleading content, especially in the context of AI-generated information. Furthermore, the government should provide training for political figures and their campaigns on using technology responsibly, equipping politicians with the knowledge to identify and counteract disinformation targeting them or their opponents.
  4. Regulatory Measures: The government should implement and enforce regulations that address the misuse of AI in generating and spreading disinformation. These regulations should cover both the development and deployment phases of AI technologies and should hold social media platforms accountable for the content shared on them. Regulators should also collaborate with technology companies to develop and adhere to guidelines that prevent the misuse of AI for disinformation purposes.
  5. User Reporting Mechanisms: The government should establish user-friendly reporting mechanisms for individuals to flag potential instances of disinformation, and should swiftly investigate and respond to user reports to curb the spread of false information. Citizens should be empowered to be proactive in identifying and reporting AI-generated disinformation. By providing accessible reporting channels and tools, the government can engage the public in the fight against false information.
  6. International Cooperation: The government can foster international cooperation to address the global nature of AI-driven disinformation, sharing best practices, intelligence, and resources with other nations. Such collaborative efforts help develop standardized approaches to combating this issue.
  7. Early Warning Systems and Continuous Monitoring and Adaptation: The government can develop AI-driven early warning systems that detect potential disinformation campaigns before they gain traction, using predictive analytics to anticipate and mitigate the impact of false information on public opinion. It can also establish mechanisms for continuous monitoring of AI-driven disinformation trends, staying vigilant and adapting strategies as disinformation tactics evolve.
  8. Algorithmic Diversity and Transparency in AI Use: The government should encourage social media platforms to diversify their recommendation algorithms to disrupt the echo-chamber effect that can accelerate the spread of disinformation; introducing variety in content recommendations exposes users to a broader range of perspectives. Similarly, it should advocate for transparency in the deployment of AI technologies, particularly in content creation and dissemination. Clear disclosure of AI-generated content helps users make informed judgments about the information they encounter.
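To make the early-warning idea in point 7 concrete, the sketch below is a simplified, hypothetical Python illustration (not any platform's actual system): it flags topics whose latest hourly mention count spikes far above their baseline, one crude signal a monitoring team might review for possible coordinated amplification. The topic names and counts are invented for illustration.

```python
from statistics import mean, stdev

def spike_score(history, latest):
    """Z-score of the latest hourly count against the trailing history.

    history: past hourly message counts for a topic (oldest first)
    latest:  the most recent hour's count
    A high score means the topic is suddenly being amplified far
    beyond its baseline and may warrant human review.
    """
    baseline = mean(history)
    spread = stdev(history) or 1.0  # avoid division by zero on flat history
    return (latest - baseline) / spread

def flag_topics(counts, threshold=3.0):
    """Return topics whose latest activity spikes above the threshold.

    counts: {topic: list of hourly counts, oldest first}
    """
    flagged = {}
    for topic, series in counts.items():
        *history, latest = series
        score = spike_score(history, latest)
        if score >= threshold:
            flagged[topic] = round(score, 2)
    return flagged

# Invented hourly mention counts for two election-related topics.
counts = {
    "ballot-fraud-claim": [4, 5, 3, 6, 4, 90],      # sudden amplification
    "polling-hours":      [20, 22, 19, 21, 20, 23], # normal chatter
}
print(flag_topics(counts))  # only the amplified topic is flagged
```

A real system would of course combine many such signals (account age, posting cadence, content similarity) and route flags to human reviewers rather than acting automatically.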

In conclusion

The disruptive impact of disinformation on elections is expected to persist in the years ahead. As South Africa prepares for its 2024 elections, it remains susceptible to this threat. False information can undermine the integrity of the electoral system, distort public opinion, and erode trust in the democratic process. Safeguarding that process requires a collective commitment to technological solutions, public education, regulatory measures, and international collaboration. Together, these strategies can help mitigate the impact of AI-driven disinformation during an electoral year.

Dr. Lizzy Oluwatoyin Ofusori is an academic and researcher. She writes in her personal capacity.