Battling Disinformation While Enabling Innovation
In 2024, over 30 countries will hold general or presidential elections — including the U.S., Russia, Indonesia, Mexico, and Taiwan, along with the European Union's parliamentary vote — impacting a collective population of approximately 4 billion people. As voters prepare to head to the polls, concerns are growing over advances in artificial intelligence that are making disinformation more widespread — and dangerous — than ever.
To learn about how AI may affect voter behavior, campaign strategies, and election outcomes, the FiscalNote Executive Institute hosted an invite-only conversation with policy leaders, risk experts, and strategic advisors on September 14, 2023. Featured speakers included:
- Beth Sanner, National Security Expert, Senior Advisor, and CNN Contributor
- Rick Fromberg, Senior Advisor, BerlinRosen
- Josh Haecker, Head of Product, Geopolitical and Market Intelligence, FiscalNote
Below are key takeaways from the discussion, “AI and the 2024 Elections: Battling Disinformation While Enabling Innovation.”
Political trust deficit
- The trust deficit was wide — and growing — long before generative AI arrived. Disinformation by humans, after all, has always been a problem.
- But AI threatens to make the trust deficit wider. Today, generative tools can produce realistic propaganda on their own — text, photos, audio, and video. Machines can also create misinformation at a scale that was previously impossible.
AI in elections
- U.S. adversaries will exploit AI. Russia and China, in particular, now possess better tools to advance their anti-U.S. agendas, including by supporting sympathetic political candidates and disseminating propaganda.
- More potential threats. The ubiquity of AI means the technology is no longer limited to deep-pocketed governments and corporations. Smaller states, as well as disgruntled individuals, can deploy AI with similar ease.
- A global threat. The U.S. is a big target for AI-powered disinformation, but it’s far from the only target. Free and open societies everywhere are especially vulnerable.
- Micro-targeting. In the U.S., the proportion of swing voters is near record highs — in the 2024 presidential election, for instance, a mere 50,000 voters in just five states could potentially decide the outcome. With the help of AI, political campaigns can now quickly, cheaply, and strategically target highly specific, small voting blocs — such as left-handed women with red hair, ages 35 to 39, who love espressos and Taylor Swift.
- Not a kitchen table issue. Concerns among political and business elites about the big risks posed by AI haven’t yet trickled down to the general population.
- Tech paradox. As improvements in AI and other technology make it easier to deceive, many people will become more skeptical and, in turn, harder to deceive.
- Apathy risk. AI-powered manipulation may ultimately do its greatest damage to democracy by depressing voter turnout and eroding people’s faith in representative government.
Responding to the threat
- The best defense is a good offense. Companies and governments should proactively explore how to deter the misuse of AI. For example, beginning in November, Google will require political advertisers on its platform to disclose any use of AI when creating ads.
- Blue checks for all. Consider encouraging people and organizations to use their unique digital identities widely to make impersonation harder on the internet.
- Fix the regulatory confusion. There is still no consensus on regulatory best practices — is the EU too aggressive in its tech regulation? Is the U.S. too lax?
- Ad-hoc social media governance is still an issue. More guardrails may be desirable but are unlikely to be implemented in a divided Congress. The upshot: social media continues to be largely self-regulated, which heightens risks.
- Improve digital literacy. More effort to educate students and voters about the potential benefits and risks of AI will be needed. Finland offers a potential model of what effective digital literacy education looks like.