On February 8, the FiscalNote Executive Institute hosted a virtual strategy session, “A World Warped by Misinformation: How Companies Can Mitigate Reputational Risk.” Featured speakers included Josh Haecker, CEO at Predata; Dr. Megha Kumar, Deputy Director of Analysis for Cybersecurity and Technology at Oxford Analytica; and Katie Harbath, Founder and CEO at Anchor Change.
Below are some key insights from the discussion:
Misinformation is a threat
- Dollars and cents. Misinformation can be very costly to the bottom line, especially when companies respond poorly to the messaging and lack a plan to address it.
- More than the bottom line. Misinformation can also inflict great damage beyond individual organizations, such as by undermining public health, democracy, and general trust in the bedrock institutions of society.
- A new story. Misinformation — and its “cousin,” disinformation — are as old as language itself. What’s different now is the ability of the internet, social media, and other technologies to sharply amplify misinformation. Rapid improvements in AI, such as ChatGPT and “deep fakes,” will further facilitate the spread of misinformation.
- Difficult to monitor. Today, the abundance of sources and platforms where people obtain information can make tracking misinformation incredibly challenging.
- Beware of “modest” misinformation. Because fewer people are liable to believe misinformation that seems like pure fantasy, the most harmful misinformation often contains substantial elements of truth.
- Not always an external problem. Misinformation can come from within your organization, too.
- Climate of distrust. In today's polarized environment, even factual statements are often perceived as partisan opinions.
- Confirmation bias. People’s varied perspectives — including divergent moral, ethical, and philosophical leanings — can make them vulnerable to misinformation in different ways.
- Head in the sand. Many companies have not thought hard enough about how to respond to misinformation, including the risk of impersonation. For example, do you control all the obvious social-media handles for your company, on Twitter and elsewhere? (Eli Lilly didn’t and paid a steep price after an impersonator tweeted fake news that sent the firm’s share price plunging.)
How companies can respond
- Follow the misinformation, part 1. Is the misinformation directly about your organization — or about issues that may affect your organization indirectly?
- Follow the misinformation, part 2. Understand the affected audience and how much their perspective will affect your company’s brand. What are your key stakeholders saying about the misinformation?
- Get cross-functional. Consider creating a “social issues committee” with company representatives from communications, government affairs, legal, operations, and other departments.
- Speak with one voice. Having a single company response to misinformation is vital.
- Seek out different news sources yourself. To avoid groupthink, it can be helpful to get information from varied perspectives.
- Offense or defense? Sometimes it’s better to directly rebut misinformation; at other times, it may be better to ignore it and avoid amplifying it.
- Communicate in plain English. Because the real world is complicated, providing facts in an accessible manner is often challenging — but is especially critical when facing misinformation.
- Prepare in advance. Role-play and strengthen your existing crisis-management and communications processes before misinformation strikes your firm.
- Don’t go it alone. Consider hiring respected, independent fact-checkers to help shoot down misinformation.
- Stay human. Tech can help counter misinformation, as well as create it. But human judgment remains critical when fighting misinformation. That’s one reason it’s important to hire and retain skilled people to put out fires.
Prepare for uncertainty
- Corporate uncertainty. If the next recession hits the tech industry hard, will firms decide to allocate fewer resources to combating misinformation on their platforms? In recent years, platforms have considerably tweaked what they show audiences in response to political, financial, and other pressures.
- Political uncertainty. 2024 could be a big year for political misinformation, with presidential and parliamentary elections expected in many large countries — including the U.S., India, Indonesia, Ukraine, Taiwan, Mexico, and the U.K.
- Legal uncertainty. In 2023, the U.S. Supreme Court will decide whether tech platforms can be sued for the content their algorithms recommend. And in 2024, the court will determine the constitutionality of various new state laws — such as in Florida and Texas — that prohibit platforms from removing political content.
- Misinformation as an excuse for control. As the threat of misinformation grows, so too does the risk that authoritarian governments will increasingly invoke that threat to justify limiting speech and otherwise restricting access to information.