Include Deepfakes in Incident Response Plans

At 1:07 p.m. on April 23, 2013, the official Twitter account of the Associated Press posted a message that caused a brief panic on the New York Stock Exchange. "Breaking: Two Explosions in the White House and Barack Obama is injured," the message read. The Dow Jones Industrial Average dropped about 150 points in three minutes. But there had been no explosions, and the post was fake: the hacktivist group Syrian Electronic Army had taken control of the AP’s account.

A 71-character message on Twitter “erased $136 billion in equity market value” during those three minutes, Bloomberg News reported. The panic would likely have been far worse if the message had arrived as video rather than text.

Enterprises aren’t yet thinking about the impact deepfake videos could have on their business, said Brian Wrozek, director of information security at Optiv. Recent advances in artificial intelligence have made it possible to create images and videos that look extremely realistic and are difficult to identify as machine-generated.

“Video tends to have more impact than text,” said Wrozek.

Deepfakes used to tarnish a company’s reputation or paint its executives in a negative light will be harder to combat because they are designed to feel real and evoke an emotional reaction. When people see a doctored video of a company acting illegally, such as dumping industrial waste in rivers, or of executives acting unethically, such as using racial epithets, they may not notice the subtle signs that the content has been manipulated.

Campaigns involving deepfakes are a credit risk for targeted organizations because of “the potential for tarnished reputations and lost business,” Moody’s Investors Service said in a recent report. An enterprise with a damaged reputation may lose customers, have a harder time hiring talented employees, and struggle to raise capital from wary investors. The ratings agency warned of “more pernicious disinformation campaigns against companies.”

"Imagine a fake but realistic-looking video of a CEO making racist comments or bragging about corrupt acts," said Leroy Terrelonge, AVP-Cyber Risk Analyst at Moody's Investors Service.

Disinformation Costs Money

Disinformation campaigns have long existed in financial markets in the form of rumors, fake press releases, and misleading news articles. In 2016, for example, a fake press release claimed French construction giant Vinci had fired its chief financial officer after discovering accounting irregularities amounting to several billion euros. Vinci lost $5 billion in market value over roughly half an hour before recovering. In another case, a former employee of Internet Wire, a news release distribution service, sent out a phony release in 2000 claiming the CEO of data storage equipment-maker Emulex had resigned and that the company’s quarterly earnings would be restated from a profit to a loss. The hoax cost investors nearly $110 million and wiped out an estimated $2.2 billion of Emulex’s market value in minutes.

Fake videos and images would also be able to “create a high amount of volatility in securities prices, generating losses for investors who sell” during the course of the campaign, Moody’s said.

Previous text-based schemes roiled financial markets, but securities prices generally returned to previous levels within hours, once the company debunked the information and convinced investors the campaigns were fake. These campaigns didn’t linger long enough to damage the companies’ credit because the targeted companies had concrete steps they could take to convince the public and investors of the truth. They could counter modified text or audio clips by releasing full transcripts or recordings, or by showing supporting evidence such as email exchanges and financial statements.

Debunking will be much harder with AI-generated content, which is designed to convince sophisticated computer algorithms that it is legitimate; humans don’t stand a chance. In the past, people relied on the fact that it was possible to tell when photographs, audio, and video had been altered, but recent advances in technology mean those cues are harder to find. The near-realism means fakes “require more time to disprove,” are harder to fully disprove, and “could cause uncertainty and doubts to persist,” Moody’s warned. At some point, debunking may become “humanly impossible.”

A damaged reputation could be a “severe credit negative,” Moody’s Terrelonge said, as investors may be unwilling to provide capital. The Moody’s report cited a 2015 study that found an inverse relationship between a company's reputation and its corporate bond credit spread: companies with stronger reputations borrow at lower interest rates. They also faced less stringent debt covenants and were less likely to be targets of SEC investigations.

Types of Deepfake Schemes

The impact deepfakes could have on enterprises isn’t some far-off future scenario for Optiv’s Wrozek, who believes a deepfake video incident could happen within 18 to 36 months.

Ransomware attacks, for example, could evolve to include deepfakes, Wrozek warned. Attackers wouldn’t need to bother encrypting machines when they could simply send the executives or the board a ransom note threatening to release a highly damaging video if the amount is not paid. Wrozek said this scenario could occur within 18 months.

Deepfakes can also be used as part of hacktivist campaigns, with doctored content used to stoke resentment against companies for whatever they did (or didn’t do), Wrozek said. It would be harder for companies to deny the content because the perception would be that “of course the company is going to deny it,” and that the defense is “all about PR.”

The most challenging thing about deepfakes is that they are completely out of the organization’s control. The content is most likely not hosted on the organization’s platform or servers, but on someone else’s machines, which means the targeted organization depends on someone else to help resolve the situation.

Deepfakes are essentially a form of social engineering, Wrozek said, because people are being manipulated.

Layered Approach to Deepfakes

Organizations should not treat deepfakes as a problem to solve with technology alone; countering them requires a cultural shift, Wrozek said. Corporate security, incident response, legal, and public relations all have to be involved in combating disinformation. Individuals need to learn how to critically evaluate what they are looking at. The general public knows that images can be doctored with Photoshop and similar software; it now needs to be taught what AI can do. It is easier to spot fake content when people know what they are looking for.

Researchers need to develop new forensic techniques capable of distinguishing when something was AI-generated. One current effort focuses on building biometric models of public figures’ behavioral quirks, which a deepfake rendering would likely fail to capture. Another looks for digital artifacts left behind by the AI. Researchers have also already developed algorithms that can detect inconsistencies in lighting, distance to the camera, and even the position of the head, as the sketch below illustrates.
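
To make the idea of inconsistency detection concrete, here is a minimal, hypothetical Python sketch, not any researcher's actual method: it uses OpenCV to track the apparent size of the detected face from frame to frame and flags implausibly abrupt jumps in apparent distance to the camera. The input file name and the 50 percent threshold are assumptions for illustration only.

```python
# Toy illustration of one class of deepfake forensics: checking the
# frame-to-frame consistency of a simple visual cue (apparent face size,
# a rough proxy for distance to the camera). Not a production detector.
import cv2

# Haar cascade face detector that ships with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("suspect_clip.mp4")  # hypothetical input file
prev_area = None
suspicious_frames = []
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        # Track the largest detected face in the frame.
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        area = w * h
        # Flag frames where the face size jumps sharply between consecutive
        # detections (the 50 percent threshold is arbitrary for this demo).
        if prev_area and abs(area - prev_area) / prev_area > 0.5:
            suspicious_frames.append(frame_idx)
        prev_area = area
    frame_idx += 1

cap.release()
print("Frames with abrupt face-size changes:", suspicious_frames)
```

Real detection research layers far richer signals on top, such as lighting direction, blink rates, head-pose landmarks, and generative-model artifacts, often combined with machine learning, but the basic structure is the same: extract a physical cue per frame, then test its consistency over time.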

Organizations need to make social media monitoring part of their security defenses: regularly watch social media channels and websites for disinformation, and promptly ask those platforms to remove fake content.

This is where the cultural shift comes in.

Monitoring social media channels is not something security teams typically have on their to-do list; that is usually the purview of marketing or public relations. Security teams need to work with those teams to figure out how to handle cases as they occur. They need to be prompt about asking content platforms to remove hoax material, and they can proactively publish counterpoints to help persuade the platform to take the content down. They also have to know who to contact at these third-party platforms. A rough sketch of how such monitoring might feed incident response follows.
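
As an illustration of the hand-off between monitoring and incident response, the hedged Python sketch below polls a hypothetical search endpoint for mentions of the company alongside words like "video" or "leaked" and emails the incident response alias. Every URL, keyword, and address here is an assumed placeholder rather than a real platform API.

```python
# Hypothetical sketch of lightweight social media monitoring that feeds
# incident response. The endpoint, keywords, and addresses are placeholders.
import smtplib
from email.message import EmailMessage

import requests

SEARCH_URL = "https://social.example.com/api/search"   # assumed endpoint
KEYWORDS = ["ExampleCorp video", "ExampleCorp leaked", "ExampleCorp CEO clip"]
IR_ALIAS = "incident-response@example.com"             # assumed IR mailbox


def find_suspicious_posts():
    """Query the (hypothetical) platform search API for each keyword."""
    hits = []
    for term in KEYWORDS:
        resp = requests.get(SEARCH_URL, params={"q": term}, timeout=10)
        resp.raise_for_status()
        for post in resp.json().get("posts", []):
            hits.append({"url": post.get("url"), "text": post.get("text", "")})
    return hits


def notify_incident_response(hits):
    """Email the IR team so the deepfake playbook can be triggered."""
    if not hits:
        return
    body = "\n".join(f"- {h['url']}: {h['text'][:120]}" for h in hits)
    msg = EmailMessage()
    msg["Subject"] = "Possible disinformation or deepfake mentions"
    msg["From"] = "monitoring@example.com"
    msg["To"] = IR_ALIAS
    msg.set_content(body)
    with smtplib.SMTP("localhost") as smtp:  # assumed local mail relay
        smtp.send_message(msg)


if __name__ == "__main__":
    notify_incident_response(find_suspicious_posts())
```

In practice, marketing or PR likely already runs tooling that does this kind of brand monitoring; the point is to wire its output into the security team's incident response workflow rather than build it from scratch.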

All this falls under incident response. Just as there are incident response plans to handle malware, there should be one to address deepfakes, Wrozek said. Basketball players practice free throws even though they are worth only one point, because practicing the smallest things means one less thing to worry about in a high-stress situation. The last thing anyone needs is to be scrambling to figure out who to alert on the security team, or who should call the content platform, when the threat of a damaging deepfake video arrives.

Technology is changing rapidly, and enterprises can’t just shrug off the potential dangers. Even a year ago, someone would have needed serious programming chops to create a deepfake; it used to be a resource-intensive effort.

“Now there’s a toolkit,” Wrozek said.