Why Edelman started its Counter Disinformation Unit
And what’s next in this evolving PR discipline.
In the last week alone, mis- and disinformation have engulfed the United States. A wave of misleading or deliberately false information around hurricane aid is hampering relief work and clouding evacuation efforts. Layer that on top of targeted, state-sponsored disinformation from Iran, Russia and China related to the election, and it’s clear it’s harder than ever to tell truth from fiction.
That constant press of disinformation helped inspire Edelman to start a new Counter Disinformation Unit to help serve organizations striving to correct the record and protect themselves from false information.
Dave Fleet, Edelman’s head of Global Digital Crisis, said that over the past few years, they’ve seen more organizations approaching them for help with this rising problem.
“We’re bringing in dedicated resources,” Fleet said. “We have a number of different partnerships with different vendors, and we’re really seeing this as a response to the increased challenges in this area that our clients are facing.”
While Edelman has offered mis- and disinformation services since 2021, the big difference now is the dedicated resources being applied directly to this threat, including the newly appointed Simon Paterson as U.S. head of counter disinformation.
Here’s what Fleet attributes the rise of global disinformation to – and what organizations can expect to face in the future.
Why disinformation is booming
Fleet outlined several areas he believes are driving the explosion of disinformation.
First is what he calls the “weaponization of culture,” which includes political polarization and decreasing trust in experts, such as academics, scientists and others who were once looked to as sources of truth.
“This belief that I can just go do my own research, my Google search is just as valid as a credentialed expert, leads to a lot of misinformation,” Fleet said.
The overall geopolitical state of the world also contributes, with mis- and disinformation arising from foreign and domestic sources – and both targeted and organic origins.
And, of course, there’s AI. “That is increasingly impacting the work itself, but I also think it’s had an exponential impact on the visibility and the awareness of mis- and disinformation as a threat, and it’s led to it being at the top of corporate agendas,” Fleet said.
Some of those AI threats remain hypothetical, but others are here and real, Fleet said. Deepfakes can not only cause reputational harm by creating false narratives, they also impact businesses through cybersecurity risks, such as impersonating an IT leader to get a password reset.
“The core is, it’s getting more difficult to counter, both in terms of its quality, its quantity, [and] its accessibility,” Fleet said.
How organizations can protect themselves
With so many threats looming, ranging from grandparents sharing AI memes on Facebook to nation states, what’s an organization to do?
The most important factor is to prepare. Now.
“What are the core aspects of your business and your core narratives that you need to be ready to defend?” Fleet asked. “Because you’re not going to be able to fend off everything; no one has the resources to tackle every little rumor. But what are the core narratives to defend? Who is actually active against you? What are the tactics, techniques and procedures – we call them TTPs – that they’re using?”
Once you understand the risks, it’s time to plan – which can look very different from planning for a more traditional crisis.
“I’ve done simulations with incredibly well-drilled companies, and from a crisis perspective, their processes just fall apart when a disinformation angle is introduced and throws a wrench into things,” Fleet said.
As part of that planning, it’s important to train your employees to identify disinformation.
“It’s a pretty universal idea that you’re going to react negatively to being manipulated,” Fleet said. “And so teaching people the warning signs of manipulation and what to do if they see those warning signs: Running down the sources, going looking for a second source of information, actually looking at the credibility of the outlet, things like that — that’s really important.”
But manipulation can be more subtle than a deepfake image or a rumor on TikTok. Fleet said there have been instances of bots being used to make an event seem more significant than it is on social media, thus feeding the algorithm to show that content to more people and inflating its importance. Reports indicate that has been the case with anti-DE&I social media campaigns that were waged against John Deere, Tractor Supply Company and others.
“It’s a broader understanding of media literacy, but just as importantly, it’s understanding some of these signals, that something, some kind of manipulation, is afoot,” Fleet said.
That clarity is much easier to have before a crisis strikes.
“I think that the key for companies is not to wait for it to hit them,” Fleet said. “It’s for them to take the action now, to build that preparation and resiliency so that when it happens, they’re on the front foot.”
Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.