Toward a Framework for Misinformation Campaigns

VANCOUVER--Although it’s thousands of years old, the concept of misinformation probably didn’t intrude into most people’s daily lives until quite recently. The Internet has taken the concept, democratized it, and weaponized it, to great effect in both political and more pedestrian situations. But there’s a small group of researchers, data scientists, and security professionals working to build tools and collaborative communities to help organizations and individuals recognize and defeat misinformation campaigns.

“The Internet we have is being manipulated for both money and power, and that’s not fair. There’s been a lot of admiring of this as a problem and not so much in the way of building solutions,” Sara-Jayne Terp, a data scientist and consultant who has been studying the misinformation problem for several years, said during a talk at the CanSecWest security conference here Wednesday.

“I really have a problem with people creating artificial debate for money.”

Misinformation can mean a lot of different things, but in the context of the modern Internet, it generally describes campaigns that an organization or individual runs to influence another group by providing false or misleading information. Those campaigns can take the form of simple social media posts by loosely connected people, or much more sophisticated efforts by governments or political groups, coordinated across years of media, social media, and real-world operations. If they’re familiar with the concept at all, many people probably think of misinformation campaigns in relation to elections, specifically the 2016 United States presidential election and the influence that Russian-affiliated groups exerted in the months leading up to it.

But misinformation is used much more broadly and is a far older tactic than that. Its use has evolved and expanded tremendously in the last few years, thanks to the emergence of social media as a dominant form of communication for many people.

“So why is this all suddenly happening now? It’s not. People have been misinforming each other for as long as people have been communicating with each other. The difference is that the things that were once available to nation states as tools of influence suddenly became available to everyone,” Terp said.

“Now, Kim Kardashian can influence national policy because she has the same tools available to her as nation states do. There are people for hire who will misinform you.”

Those people have a powerful set of tools at their disposal and, thanks to the global reach of the Internet, can spread their message virtually anywhere in the world. They also communicate with each other and share tools and techniques. But the people and organizations trying to identify and defend against misinformation campaigns don’t have the same level of communication or a comparable toolset.

Terp, working with researchers at SOFWERX, a collaborative project of U.S. Special Operations Command and the Doolittle Institute, is trying to remedy that by developing a framework for misinformation similar to the myriad frameworks that exist for information security attack and defense. Using the MITRE ATT&CK framework as a model, Terp is building a framework that lays out the elements of a misinformation campaign, along with a common vocabulary and a catalog of tactics and techniques. The team plans to release a white paper on its work in June, and Terp said that while it may not be immediately obvious to security professionals, the misinformation problem has a lot of overlap with infosec, especially when it comes to the human part of the equation.
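
Terp’s framework had not been published at the time of her talk, so any concrete rendering is speculative, but the ATT&CK model she cites suggests its general shape: a set of named tactics, the stages of a campaign, each grouping the techniques observed at that stage. As a rough sketch only, with invented class names, IDs, and entries rather than anything from her actual taxonomy, such a structure might look like this in Python:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Technique:
    """A single observed method, analogous to an ATT&CK technique."""
    tech_id: str
    name: str
    description: str


@dataclass
class Tactic:
    """A campaign stage grouping related techniques, analogous to an ATT&CK tactic."""
    tactic_id: str
    name: str
    techniques: List[Technique] = field(default_factory=list)


# Hypothetical entries for illustration only; the IDs and names below are
# invented, not taken from the (then-unpublished) framework.
narrative_dev = Tactic(
    tactic_id="TA01",
    name="Develop Narratives",
    techniques=[
        Technique("T0101", "Create fake experts",
                  "Fabricate credentialed personas to lend authority to a story."),
        Technique("T0102", "Exploit existing grievances",
                  "Attach the campaign's message to a real, ongoing controversy."),
    ],
)

for t in narrative_dev.techniques:
    print(f"{narrative_dev.name} / {t.tech_id}: {t.name}")
```

Encoding campaigns in a shared, machine-readable structure of this kind is what makes the lingua franca Terp describes possible: two analysts can compare notes because they label the same behaviors the same way.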

“All cyberspace operations are based on influence. Down at the bottom, it’s all humans. If you look at the classic definition [of misinformation], infosec has always been in there. You just know it as social engineering,” she said.

“You can’t have meaningful conversations unless you’re speaking the same language. We want that lingua franca. We want people to be able to talk to each other because we know that the other side is doing this. We know that the people creating misinformation campaigns are watching the people who are tracking them. It’s an adaptive situation. It’s a call and response.”

As part of the effort to build a misinformation analysis framework, Terp and her collaborator, CDR Pablo Breuer of the U.S. Navy and U.S. Special Operations Command, have studied a variety of sources and campaigns, not just the successful ones. They’ve also looked at campaigns that failed, trying to deduce what went wrong.

“Failures are really useful because they tell you something about resilience and the actions you can take to become more resilient,” Terp said.

Given how quickly adversaries change and adapt, Terp sees quite a bit of urgency in her work.

“It won’t always be Russia. When they get what they want or they go away, we will have hundreds of people with these skills looking for things to do,” she said.

“It took years to build the infosec frameworks. We don’t have years.”