The Growing Overlap of Disinformation and Security

SAN FRANCISCO--There’s a lot of misinformation about disinformation, especially when it comes to its effects on security. And for enterprise security teams that haven’t yet devoted the time and resources to understand those effects and how to deal with them, the future is coming fast.

Disinformation as a tactic is as old as the hills, and many types of organizations employ it, trying to advance their own aims while intentionally misleading their adversaries. Militaries, intelligence agencies, and national governments all use disinformation, and while it’s often associated with political campaigns or elections, disinformation also has quite a bit of utility in cybercrime and nation-state cyber operations. It’s not uncommon for an APT group to adopt the tactics of another group, especially one from a hostile country, in order to confuse defenders and researchers. Some groups have even been seen using the tools, and in some cases the infrastructure, of other teams.

In some operations, adversaries are intent on disguising their true motives and goals, but in others they want those things to be plainly obvious. All of this can make things difficult for security teams trying to unravel what’s going on, and the use of disinformation in cyber campaigns is likely only going to increase as actors continue to become more sophisticated and strategic.

“For people who think disinformation and cybersecurity aren’t related, things are going to change very quickly for you. The issue of disinformation is increasingly becoming something that security teams are expected to address,” Melanie Ensign, security, privacy, and engineering communications lead at Uber, said in a discussion of disinformation at the Enigma conference here Wednesday.

In the context of information security, the idea of identifying vulnerabilities and weaknesses is typically applied to systems, applications, and networks. But there’s just as much value in assessing an organization’s non-technical weaknesses to find blind spots and biases that adversaries could exploit to influence behavior. Adversaries devote time and resources to researching targets and looking for opportunities, so knowing where those soft spots are and why an adversary might be targeting an organization can help security teams prepare for and defend against a disinformation campaign.

“Understanding the motivations of our adversaries is key to understanding the defense and identifying our own vulnerabilities and weaknesses,” Ensign said.

As with offensive cyber operations, a variety of actors run disinformation campaigns, and the most effective and competent among them are nation-states. They have not only the resources but also the patience to develop long-term campaigns to achieve their aims. At the top of that pyramid is Russia, which has a reputation as one of the most active and effective disinformation actors working today.

“Russia is the most sophisticated of the adversaries that I’ve tracked. They have an extraordinary commitment to the long game that we haven’t seen out of other state actors,” said Renee DiResta, technical research manager at the Stanford Internet Observatory, who has done extensive research on disinformation and influence campaigns by the Internet Research Agency and other Russian actors.

While organized campaigns make up much of the disinformation problem, Uber’s Ensign said portions of the security community contribute to it as well.

“What’s most important is recognizing how we as a security community are contributing to this problem as well. We’re quick to judge and we have knee-jerk reactions to things,” she said.