We live in the disinformation age.
Separating fact from fiction is an increasingly difficult task, and often the facts are stranger than the fiction. As influence operations and disinformation campaigns become ever more prevalent on social media platforms, the federal government is planning to address the problem along several different paths.
One of the paths is the legislative one. Sen. Dianne Feinstein (D-Calif.) has introduced a bill that would force social media platform providers to develop policies that require users to disclose the use of bots on their accounts. The legislation is designed to give users of Facebook, Twitter, and similar platforms more transparency about who they’re interacting with online.
Under the language currently in the bill, the Federal Trade Commission would develop regulations to ensure that platform providers have a policy that “requires any user of the social media website that employs an automated software program or process intended to impersonate or replicate human activity online on the social media website to provide clear and conspicuous notice of the automated program in clear and plain language.” The bill also would require social media providers to develop “a process by which the social media provider will take reasonable preventative and corrective action to mitigate efforts by a user to use an automated software program or process intended to impersonate or replicate human activity online without disclosure.”
Bots have been a serious issue on many social media platforms--especially Twitter--for several years. Almost since the platform launched, scammers and criminals have used Twitter bots to push work-from-home schemes, pharmaceutical scams, and phony investment ploys. More recently, though, political organizations, intelligence agencies, and other groups have made broad use of bots to push specific stories and political viewpoints and to seed ideas among target user groups. Russian operators conducted a broad disinformation and influence campaign ahead of the 2016 election, and both Twitter and Facebook have made changes to their policies and platforms in response.
Earlier this month, Twitter said it had suspended more than 70 million accounts--many of them bots--in May and June alone.
“Technology companies bear primary responsibility for securing their products, platforms, and services from misuse. Many are now taking greater responsibility for self-policing, including by removing fake accounts. We encourage them to make it a priority to combat efforts to use their facilities for illegal schemes,” Deputy Attorney General Rod Rosenstein said in a speech at the Aspen Institute last week.
Feinstein said her legislation is a direct response to Russian influence operations and covert political influence campaigns.
“This bill is designed to help respond to Russia’s efforts to interfere in U.S. elections through the use of social media bots, which spread divisive propaganda,” Feinstein said. “This bill would require social media companies to disclose all bots that operate on their platforms and prohibit U.S. political campaigns from using fake social media bots for political advertising.”
The second path the government is pursuing involves giving organizations and individuals more information about how influence campaigns and disinformation operations work. In his speech, Rosenstein said bringing such operations out into the light can help defeat them.
“Even as we enhance our efforts to combat existing forms of malign influence, the danger continues to grow. Advancing technology may enable adversaries to create propaganda in new and unforeseen ways. Our government must continue to identify and counter them,” he said.
“Exposing schemes to the public is an important way to neutralize them. The American people have a right to know if foreign governments are targeting them with propaganda.”
Foreign governments are, of course, targeting Americans with propaganda, and have been for many decades, long before social media was a gleam in the Internet’s eye. The methods and mechanisms have evolved, but the goal of influencing political thought, public sentiment, and elections is unchanged. How much information the U.S. government plans to give organizations and people targeted in such campaigns isn’t clear, nor is how it might disseminate that data. Google has a system for notifying users whose accounts the company believes are being targeted by state-sponsored attackers, but that’s on a private platform, and Google doesn’t reveal how it detects such attacks.
Achieving the same goal without revealing the techniques used to gather the information or publicly identifying targets could be a challenge, Rosenstein acknowledged.
“In some cases, our ability to expose foreign influence operations may be limited by our obligation to protect intelligence sources and methods, and defend the integrity of investigations,” he said.
“Moreover, we should not publicly attribute activity to a source unless we possess high confidence that foreign agents are responsible. We also do not want to unduly amplify an adversary’s messages, or impose additional harm on victims.”