Security news that informs and inspires

I’m Sorry, *You* Are… The Weakest Link


Last week, we tasked Kyle from our Research and Development team with covering some common themes discussed at Black Hat and DEF CON. We want to bring these issues both to the part of the security community that was in Vegas for the cons and to those who kept an eye on the action from the outside.

On a chilly Saturday in February 2011, Aaron Barr, the CEO of security company HBGary Federal, announced that he would unveil Anonymous’s leaders at BSides San Francisco later that month.

By Sunday night, HBGary Federal had been thoroughly pwned by Anonymous, including its emails, Barr’s profiles of alleged Anonymous members, and Barr’s social media accounts. The hack wouldn’t have been as severe if an Anonymous member hadn’t impersonated Barr in emails asking for password resets, an all-too-familiar form of social engineering: the art of getting what you want from people without them realizing they’re being attacked. The pinnacle of social engineering is attacking people while they enthusiastically believe they’re helping you. People are frequently the weakest links in systems, even after extensive security training. Two talks last week looked at this problem from very different angles.

Confessions of a Professional Cyber Stalker

Ken Westin (@kenwestin) from Tripwire spoke at DEF CON 23 last week about his work as a “professional cyber stalker” to recover stolen property, bust organized crime, and more. Westin walked through techniques and stories from his investigations that demonstrate that people are overwhelmingly the easiest attack vector, whether through social engineering or through the mistakes they make.

We learned how he has built information-gathering programs for OS X that masquerade as regular MP3 files. His fake MP3s even play the song they claim to be: he writes them in AppleScript, and playing the actual MP3 is the first step. Once the song is playing, the file appears to be nothing other than a regular MP3, even though arbitrary code is running in the background. All it takes is a forged email with an innocent-looking “.mp3” attachment and a user who isn’t a security expert.
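In Python rather than AppleScript, the decoy pattern he described might look something like this sketch; the macOS `open` command and the harmless info-gathering stand-in are assumptions for illustration, not his actual tooling:

```python
import platform
import socket
import subprocess

def gather_info():
    # Stand-in for the "arbitrary code" that keeps running in the
    # background; here it only collects harmless local details.
    return {
        "hostname": socket.gethostname(),
        "os": platform.system(),
    }

def decoy_mp3(mp3_path):
    """Sketch of the decoy pattern: start the real song first, then
    keep executing while the user hears exactly what they expected."""
    # 'open' hands the file to the default macOS player, so nothing
    # looks out of place (assumption: running on macOS, as in the
    # AppleScript originals described in the talk).
    subprocess.Popen(["open", mp3_path])
    # Step two: whatever else the script wants to do happens while
    # the music plays.
    return gather_info()
```

The key design point is ordering: because the expected behavior (the song) starts immediately, the user has no reason to suspect anything else is happening.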

Mistakes are the biggest source of information in his investigations. For example, Westin has clients pre-install a phone-home program that stays silent until a device is stolen. If a thief takes the device and neglects to wipe it, his tools run to gather data as soon as it connects to the Internet. On modern laptops with embedded cameras, the tools also take a picture and send it, along with network and computer information, to his servers. While an IP address or a WiFi network name usually won’t tell you everything you need to recover stolen property, they provide valuable information that can aid a search.

Social engineering exploits trust and slip-ups to gain more and more information until the attacker knows enough to execute the overall attack. Software-as-a-Service (SaaS) applications have enabled a recent twist: when you use Gmail rather than running your own email server, the company hosting the application handles your support requests without actually knowing your organization, which gives impersonators an opening.

Attackers who compromise one account may be able to jump to a previously secure service using the information collected from the first. A classic example of this attack happened three years ago to Wired columnist Mat Honan, who pieced together which accounts his attackers compromised and how they used each one to pivot to the next service. While he immediately knew what he could have done to protect himself, like enabling two-factor authentication on his Gmail account and keeping regular backups of his photos beyond what iCloud provides, he admits that he simply didn’t do it. That isn’t so different from the multitude of users who aren’t security experts and either don’t know how to protect themselves or don’t accurately perceive the risks they take by not putting even small effort into security.

Human Vulnerability Scanning

The security community often discusses pivoting in the sense of jumping from one compromised computer to others that are better defended against external attacks. What we don’t often talk about is how this can be an exact analog to people. In organizations, the network of relationships is far from what the organizational chart represents, and there are always people who everyone goes to with certain types of questions or problems, even if, say, they’re a leaf node in the organizational chart.

Laura Bell (@lady_nerd), CEO of SafeStack, spoke at Black Hat last week about these systems and how to understand them. IT departments rarely have insight into how their users actually collaborate, especially if there’s a substantial shadow IT presence in the organization. Bell gave an example of a company where IT had been dragging its feet on setting up a suitable file-sharing system between departments. The affected users created a private Facebook group to share the files, since that would get the job done. This workflow persisted for years, entirely invisible to IT. A lack of visibility into how people actually work together means that security audits based on organizational charts or formal roles are going to miss a lot of critical information.

Bell presented AVA, a project she’s leading that aims to thoroughly test an organization for human vulnerabilities based on how and where people collaborate. It is still a work in progress, and she emphasized the need for input and contributions from people at all levels, in organizations of all sizes and regulatory burdens.

AVA starts by collecting as much data as possible from as many services as possible without invading privacy, such as LDAP, public Facebook profiles, Twitter, LinkedIn, Exchange, and Slack. From there, it runs tests that generalize internal phishing campaigns (like those run with Phish5), for example sending HipChat messages asking for personal information. The results are then analyzed to provide insight into employees’ patterns of behavior and into what the real risks and weak links are. This has the potential to help us understand and measure the human component of security.
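As a toy illustration of the kind of analysis such data enables (not the real AVA code, whose internals the talk didn’t specify), one could rank people by how many distinct colleagues message them, surfacing the informal hubs an org chart hides:

```python
from collections import defaultdict

def find_hubs(messages, top_n=3):
    """Rank people by distinct inbound contacts in chat or email logs,
    a crude proxy for the informal go-to people in an organization."""
    contacts = defaultdict(set)
    for sender, recipient in messages:
        contacts[recipient].add(sender)
    # Most-contacted people first: plausible high-value phishing targets.
    return sorted(contacts, key=lambda p: len(contacts[p]), reverse=True)[:top_n]

# Hypothetical log: 'dana' is a leaf node on paper, but everyone
# messages her, so she is the real hub of this group.
log = [("alice", "dana"), ("bob", "dana"), ("carol", "dana"),
       ("alice", "bob"), ("dana", "alice")]
```

Here `find_hubs(log)` puts `dana` first even though no org chart would flag her, which is exactly the blind spot the talk described.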

Time and time again, security experts and non-experts alike have fallen prey to a class of attacks as old as the Trojan horse and William Thompson, the original confidence man, who conned the wealthy simply by asking whether they trusted him to hold onto their watches and jewelry until the next day, then walking away with their valuables.

Just like any other system, human socialization can be attacked too, and we can’t just push out a patch for humans on the next Patch Tuesday and move on. It’s encouraging to see projects like AVA that try to address human risk as comprehensively as we often analyze technical risk.