SAN FRANCISCO--Since the last presidential election, awareness of and concern about misinformation and propaganda campaigns have been rising steadily, especially as the next campaign draws closer. But information manipulation is not just a seasonal problem; it can affect a wide range of organizations, enterprises, and individuals.
Much of the discussion of misinformation, disinformation, and manipulation campaigns since the 2016 election has focused on how Russian groups and individuals used social media platforms to influence sentiment. Both Facebook and Twitter have come under scrutiny from government agencies and outside critics over how they handled foreign influence operations on their platforms and whether they were too slow to respond. Both companies have since changed the way they identify and remove inauthentic accounts and content, but that is certainly not the end of the story. Bad actors change their tactics and adapt as the situation warrants.
“I think what we see is it’s very clear that different threat actors work across different platforms,” Nathaniel Gleicher, head of cybersecurity policy at Facebook, said during a panel discussion on the weaponization of the Internet at the RSA Conference here.
“The way you make progress in security is you find a way to impose more friction on the bad actors without imposing it on users. That’s an incredibly big focus that we have right now.”
Facebook, even more than Twitter, has taken much of the heat for not being quick enough to identify and eliminate inauthentic content and misinformation campaigns on its platform. The criticism focuses on the company’s executives and security team being caught off-guard by the scope and sophistication of the misinformation and manipulation campaigns. But countering these kinds of campaigns isn’t always a simple process, especially when there are multiple avenues of influence and manipulation in play. In the case of the 2016 election, there may have been too much focus on one avenue in particular.
“We were looking in the wrong place. We were looking for people hacking Facebook client accounts, and not buying ads at scale that more than half of the American population saw,” said Peter Warren Singer, a strategist at New America.
It’s not just the platform operators that have taken a hit over foreign influence operations; United States intelligence agencies also have come under criticism for not seeing what was happening earlier. But Rob Joyce, a senior adviser at the National Security Agency, said the intelligence community was aware of what foreign organizations were up to but was not looking inward, at platforms such as Facebook and Twitter.
“I don’t think we missed it. The intelligence community has looked at Russian manipulation and influence operations since long before there was a cyberspace. There is an understanding of the tradecraft and techniques. We’ve watched them advance and move online, and it’s through that observation that we’ve been pretty successful,” Joyce said. “Now that we understand what’s going on, what do we do about it? That’s the challenge.
“There were efforts, but like all of us on platforms and in government and in civil society, we’re trying to shape and react when we’re in the middle of speech. And that’s a pretty difficult place for America to go.”
One of the main outcomes of all the discussions, hearings, and arguments about disinformation and influence operations is a call for more regulation of, and transparency into, social media platforms’ operations. That idea necessarily raises the issue of free speech and the attendant question of anonymity online. Both are sensitive subjects that force difficult conversations for government agencies as well as the platform providers.
“We think a lot about authenticity and not necessarily just anonymity. There are very good reasons why people don’t want to describe every single detail about themselves,” Facebook’s Gleicher said.
The NSA’s Joyce agreed, emphasizing the challenges that taking away anonymity can present.
“It’s actually dangerous in some countries. I can’t imagine the department of truth or ministry of truth setting up in our government,” he said. “I do believe people have a right to free speech, but a bot doesn’t. Where we can take away the inauthentic voices of bots, I think we should do that.”