<![CDATA[The Duo Blog]]> https://duo.com/ Duo's Trusted Access platform verifies the identity of your users with two-factor authentication and security health of their devices before they connect to the apps you want them to access. Wed, 11 Apr 2018 08:30:00 -0400 en-us info@duosecurity.com (Amy Vazquez) Copyright 2018 3600 <![CDATA[Calling All Developers: Building Together With Duo's Tech Partner Program]]> rsun@duo.com(Ruoting Sun) https://duo.com/blog/calling-all-developers-building-together-with-duos-tech-partner-program https://duo.com/blog/calling-all-developers-building-together-with-duos-tech-partner-program Product Updates Wed, 11 Apr 2018 08:30:00 -0400

Hi folks, I’m excited to announce today that we are officially launching the Duo Technology Partner Program. We’ve been hard at work designing and building this program from the ground up, so I’m thrilled to share with you what we have planned for our developer community.

First, why are we doing this? Over the past few years, Duo has worked with hundreds of vendors to support our customers’ various security initiatives, from simplifying two-factor authentication (e.g. our U2F integration with Intel) and improving device security posture (e.g. our Trusted Endpoints integration with VMware AirWatch), to more advanced use cases like using Duo as a critical step in security response workflows (e.g. building Duo into the securitybot workflow) or enforcing granular access management on firewalls (e.g. our integration with Palo Alto Networks).

Regardless of the use case, one thing is clear: our customers consistently rely on us to provide best-in-class interoperability with the rest of their IT investments.

Duo Tech Partnership Ecosystem

Partnering With Akamai to Deliver Better Security

In fact, we’re doubling down on our partnerships to help customers adopt a modern, zero-trust security architecture. We’re proud to welcome Akamai to our partner community. Duo and Akamai are working together to offer a best-in-class application delivery and security solution for customers looking for a more secure remote access alternative to VPNs. Visit the Duo and Akamai partnership page to learn more.

For Our Customers

Our customers are our primary motivator for formalizing how vendors will partner with Duo. Ensuring a delightful customer experience, from initial deployment to ongoing management, is of the utmost importance here at Duo.

On average, it takes less than 20 minutes to set up an integration between Duo and any given application, compared to the days or weeks it takes with other authentication solutions. This is something that we are extremely proud of and consider a critical part of the overall customer experience.

This program will help us provide even faster and easier deployments for a larger number of applications. It also enables us to address new customer use cases as Duo grows beyond authentication.

Program Benefits

For our developer community, the benefits are equally exciting. As a partner, you get access to Duo’s network of 10,000-plus customers and 2,000-plus resellers. You’ll build and collaborate with hundreds of other developers who are also creating on the Duo platform.

You’ll get access to a full Duo developer account to build, test, and certify every integration you create with us. You’ll even engage in joint sales and marketing activities with us to drive interest for your own product; our partners have found that adding Duo generates instant security value for their own customers.

Interested in working with us? Here are a few ways to learn more:

That’s all for now. We can’t wait to see what you build with us.

Ruoting Sun,
Head of Technology Partnerships

]]>
<![CDATA[Securing Remote Access With Duo and Akamai's Zero-Trust Integration]]> vgupta@duo.com(Vishal Gupta) https://duo.com/blog/securing-remote-access-with-duo-and-akamais-zero-trust-integration https://duo.com/blog/securing-remote-access-with-duo-and-akamais-zero-trust-integration Product Updates Wed, 11 Apr 2018 08:30:00 -0400

Summary:

  • Organizations need to provide easy and secure remote access to an increasingly mobile and dispersed workforce.
  • Traditional solutions like virtual private networks (VPNs) are complex and costly, and they deliver a poor end-user experience. They also increase the risk of security breaches.
  • Duo’s integration with Akamai Enterprise Application Access (EAA) uses a zero-trust security architecture to offer easy and secure remote access to on-premises apps - without the complexity and pain of a VPN.
  • Duo’s integration with Akamai EAA is available with Duo MFA, Duo Access and Duo Beyond.

Many of us have experienced the convenience of working from any location, using devices of our choice, and connecting through the internet to our workplace resources and tools. Organizations around the world not only support this modern workstyle for their employees, but increasingly support it for their community of partners, contractors and remote users.

Offering remote access, however, adds new security challenges. Traditional solutions like virtual private networks (VPNs) can be complex and costly to deploy and maintain, provide a poor end-user experience (have you ever tried using a VPN on a mobile device?), and increase the surface area for security incidents or potential breaches. They also lack the ability to segment privileged access based on users or applications.

So what is the alternative to traditional remote access solutions?

We are excited to offer a better approach to remote access: Duo now integrates with Akamai Enterprise Application Access (EAA) to offer customers a more secure and user-friendly remote access alternative to VPNs.

Zero-Trust Security: A Better Approach to Remote Access

With a Duo and Akamai EAA solution, organizations can adopt a zero-trust security architecture for remote access. They can shed their dependency on the network perimeter as a measure of trust while adopting strong authentication and authorization of every user and every device for every access request.

Users get access to the applications they are authorized to use only after strong verification of their identity and of the trustworthiness and security hygiene of their device. Users access applications from a cloud-hosted, user-friendly application portal, and do not require an agent on their devices.

Akamai and Duo Prompt

Here’s how it works:

  • When a user tries to access an application delivered by Akamai’s EAA or the EAA service itself, they are presented with the Duo Prompt.
  • At the time of login, Duo verifies the user through multi-factor authentication (MFA).
  • Duo also verifies the security posture of the device based on the version of its OS, browser, plugins, etc.
  • Duo also checks whether the device is corporate-managed to establish its trustworthiness.

Only verified users with trusted and compliant devices are allowed to access the application.
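
The Duo and EAA integration itself is configured in the EAA administration portal rather than in code, but for developers curious about what sits behind a Duo Prompt, here is a minimal sketch of the general prompt flow using Duo’s Web SDK v2 for Python (duo_web). The keys and function names are placeholders, and this illustrates the generic Web SDK pattern, not the Akamai integration:

    # Minimal sketch: fronting a web login with the Duo Prompt via duo_web.
    # IKEY/SKEY come from a Duo Web SDK application; AKEY is a random
    # application secret of at least 40 characters. All values here are fake.
    import duo_web

    IKEY = "DIXXXXXXXXXXXXXXXXXX"
    SKEY = "duo-secret-key-placeholder"
    AKEY = "application-secret-at-least-40-characters-long!!"

    def begin_login(username):
        # Sign a request for this user; the page passes the result to
        # Duo-Web-v2.js, which renders the Duo Prompt in an iframe.
        return duo_web.sign_request(IKEY, SKEY, AKEY, username)

    def finish_login(sig_response):
        # Verify the response posted back by the iframe; returns the
        # authenticated username on success, or None on failure.
        return duo_web.verify_response(IKEY, SKEY, AKEY, sig_response)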

Easy-to-Deploy, Native Duo and EAA Integration

Duo is natively integrated into the EAA administration portal, giving administrators seamless, out-of-the-box configuration of fine-grained access policies based on user and device trust and the security posture of the device.

Even the largest enterprises can onboard new external users – contractors, business partners, remote full-time workers – in minutes instead of days or weeks.

Duo is always looking for ways to securely enable customers to embrace new technologies to adapt to the changes in how we work. Duo’s integration with Akamai EAA is another example of helping organizations adopt a zero-trust security architecture for remote access for the modern workforce.

Bye for now,
Vishal

]]>
<![CDATA[Catch Up With Duo at the 2018 RSA Conference]]> noelle@duo.com(Noelle Skrzynski) https://duo.com/blog/catch-up-with-duo-at-the-2018-rsa-conference https://duo.com/blog/catch-up-with-duo-at-the-2018-rsa-conference Industry News Tue, 03 Apr 2018 08:30:00 -0400

The 2018 RSA Conference is just around the corner, and Duo Security is heading to the West Coast for its fifth year from April 16-20. As always, the conference will be at the Moscone Center in San Francisco, California.

The RSA Conference is the world’s largest information security conference, drawing more than 45,000 of the field’s best and brightest to share insights on current IT security issues.

Duo Booth RSAC 2018

Visit Duo at Booth #1427

Don’t miss Duo at our newly designed Booth #1427, located in the South Hall - while you’re getting a demo or grabbing some swag, immerse yourself in our four-screened, animated video experience!

We’ll be there from:

  • 5-7 p.m. PST on Monday (Welcome Reception!)
  • 10 a.m. - 6 p.m. PST on Tuesday and Wednesday
  • 10 a.m. - 3 p.m. PST on Thursday

We’d love to chat in person, as well as answer any questions you might have. To set up a meeting in advance with a Duo rep during the RSA conference week, please reach out to your Duo salesperson. While you’re at the conference, just meet us on the show floor!

Looking for a pass to the RSA exhibit hall? Take advantage of our special offer for a complimentary exhibit hall pass by applying our expo code X8EDUOSE to the discount field when you register through the RSA Conference site.

Party With Duo!

Duo Party at RSAC

It’s back for another year - get ready for our Duo Beyond Party, happening on Tuesday, April 17, from 7-11 p.m. in the SOMA district. We’ll have cocktails, mocktails, dueling DJs Keith Myers and Selina Style, and a special appearance from the Access of Evil. That’s right - we’re letting ‘em out for the night to mingle and take pictures with you. But don’t worry; we’ll keep an eye on them to make sure your data stays safe!

RSVP here to let us know you’re coming!

The fun doesn’t end on Tuesday - be sure to head over to Local Edition on Wednesday, April 18, to celebrate the launch of Decipher, an independent editorial site that takes a practical approach to covering information security. Party with Duo, Dennis Fisher (Decipher Editor in Chief) and Fahmida Y. Rashid (Decipher Senior Managing Editor) from 7-11 p.m.

As always, RSVP to save your spot!

RSAC 2018: The Talks

Some of the top keynote speakers this year include:

  • Monica Lewinsky, social activist, writer, public speaker and advocate for a safer social media environment, addressing topics such as survival, resilience, digital reputation and equality
  • Tim Urban, creator of the Wait But Why blog, presenter of the most-watched TED Talk of 2016 and author of dozens of articles on subjects ranging from why we procrastinate to why we haven’t yet encountered alien life forms
  • Jane McGonigal, PhD, world-renowned alternate reality game designer and inventor of SuperBetter, who has consulted and developed internal game workshops for more than a dozen Fortune 500 and Global 500 companies
  • Reshma Saujani, Founder and CEO of Girls Who Code, a national nonprofit organization working to close the gender gap in technology, who has been named one of Fortune’s World’s Greatest Leaders, Fortune’s 40 Under 40, and a WSJ Magazine Innovator of the Year

The RSA Conference will also feature sessions each day covering:

  • Data Security & Privacy
  • Hackers & Threats
  • The Human Element
  • Governance, Risk & Compliance
  • Application Security
  • CISO Viewpoints
  • Industry Experts

View the full RSAC agenda here.

Attend Duo's Talks at RSAC 2018

We’re thrilled to have several Duo folks presenting at the conference this year. Make sure to mark your calendar and reserve a seat on your RSAC agenda for the following sessions:

Realizing Software Security Maturity: The Growing Pains and Gains, on Tuesday, April 17 at 3:30 p.m., presented by Mark Stanislav, Duo's Director of Application Security, and Kelby Ludwig, Senior Application Security Engineer.
Abstract:
Software security is often boiled down to the “OWASP Top 10,” resulting in an ineffective sense of what maturity-focused, comprehensive application security could be like. How then should an organization consider building a holistic program that seeks to grow in maturity over time? Come hear how one team has taken on this challenge and learn what has, and has not, worked on their own journey.

Pragmatic Perimeters: Making "Zero Trust" and "BeyondCorp" Work for You, on Wednesday, April 18 at 11:45 a.m., presented by Wendy Nather, Duo’s Director of Advisory CISOs.
Abstract: The old perimeter is being supplemented, as the firewall shouldn’t be the only policy enforcement point. Discuss the “zero trust” and Google “BeyondCorp” models with your peers: how to adopt this new way of looking at users, endpoints and applications; what resources are needed; and which risks it mitigates.

Corpsec: “What Happened to Corpses A and B?”, on Friday, April 20 at 9:00 a.m., presented by Chris Czub, Duo’s Senior Information Security Engineer.
Abstract: Living BeyondCorp comes with its own challenges. This talk will dive into how Duo gets our hands around difficult problems regarding the security and management of cloud services and endpoints internally. This session will cover technical details of our security orchestration and automation approach, cloud service monitoring, and chatops-driven endpoint application whitelisting strategies.

“The System...Is People!”: Designing Effective Security UX, on Friday, April 20 at 11:30 a.m., presented by Zoe Lindsey, Duo’s Advocacy Manager.
Abstract: In an organization, people make up a complex system that is crucial for security teams to understand. Education, messaging and culture are all “inputs” for this system, and user behavior is its output. This session will cover how the actions and values an organization rewards—individual bias, training methods and the security team/user relationship—can improve or compromise security effectiveness.

OURSA (Our Security Advocates) Conference

The OURSA conference, happening on Tuesday, April 17, was organized in response to concerns about the lack of ethnic and gender diversity in the RSA Conference keynote speaker lineup. This one-day, single-track conference is committed to bringing together a diverse set of experts from across the security, trust and safety community to focus on the following four topics:

  • Advocating for High-Risk Groups (Chair: Adrienne Porter Felt)
  • Applied Security Engineering (Chair: Aanchal Gupta)
  • Practical Privacy Protection (Chair: Lea Kissner)
  • Security Policy & Ethics for Emerging Tech (Chair: Amie Stepanovich)

Speakers at this event include members of Google, the New York Times, the American Civil Liberties Union, Twitter and more.

]]>
<![CDATA[A Security Audit of Third-Party AWS S3 Tools]]> spiper@duosecurity.com(Scott Piper) https://duo.com/blog/a-security-audit-of-third-party-aws-s3-tools https://duo.com/blog/a-security-audit-of-third-party-aws-s3-tools Engineering Fri, 30 Mar 2018 08:30:00 -0400

S3 buckets are a way of storing files on Amazon Web Services (AWS). They continually make the news when buckets containing sensitive information are found to be publicly accessible. There are legitimate reasons to make S3 buckets public, such as hosting the content for a public website.

However, many of these incidents appear to be unintentional. There are many reasons why this might be the case, but we decided to investigate one hypothesis, that perhaps one or more third-party tools used to work with S3 buckets are contributing to this problem.

There are a handful of tools people use to work with S3 buckets that were not developed by Amazon. Our hypothesis was that one or more of these tools might automatically make S3 buckets public, or might use misleading wording for an action that results in the bucket being made public. We determined that these tools are not a contributing factor to this problem.

In summary, we found:

  • None of the tools reviewed made S3 buckets public without intentional actions by the user.
  • One third-party tool was using unencrypted HTTP by default.

Unencrypted HTTP and Authenticated Users

In our audit, we found one tool that was using unencrypted HTTP by default; after we requested a change, it now uses HTTPS by default.

We looked at the following tools:

  • Cyberduck
  • CloudBerry Explorer
  • S3 Browser

Most people interact with S3 buckets either through the AWS web console, the CLI developed by AWS, custom code that uses one of the AWS SDKs, or one of these third-party tools.

None of the tools automatically marks a bucket as public. The three tools do have the ability to mark an S3 bucket as public, but the wording in these tools is similar to that of the AWS web console. Like the AWS console, these tools use the word "Everyone" to mean a bucket is public.

Until a few months ago, the web console also included the option to easily grant access to "Authenticated users," which was misleading as this meant any authenticated user to any AWS account, and not just the users within one's own account.

AWS has since removed this option and more proactively adds warnings around this option in the documentation for this service. The S3 tools that we looked at are still using the phrase "Authenticated users" as an option for granting access, with S3 Browser recently clarifying the wording to read "Any AWS Users."
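
To see what these grants look like programmatically, here is a short boto3 sketch (assuming AWS credentials are already configured) that flags buckets granting access to the "Everyone" (AllUsers) and "Authenticated users" (AuthenticatedUsers) groups. The group URIs are AWS's own; the surrounding script is illustrative:

    # List buckets whose ACLs grant access to the global AWS groups.
    import boto3

    RISKY = {
        "http://acs.amazonaws.com/groups/global/AllUsers": "Everyone",
        "http://acs.amazonaws.com/groups/global/AuthenticatedUsers": "Any AWS user",
    }

    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        acl = s3.get_bucket_acl(Bucket=bucket["Name"])
        for grant in acl["Grants"]:
            uri = grant["Grantee"].get("URI", "")
            if uri in RISKY:
                print(f"{bucket['Name']}: {grant['Permission']} granted to {RISKY[uri]}")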

Making a bucket public in Cyberduck

Making a bucket public in CloudBerry

Making a bucket public in S3 Browser

Encrypted Communications by Default

While performing this research, we noticed that one tool, S3 Browser, did not use encrypted communication by default. Unless a box was checked during configuration, all traffic was sent and received over unencrypted HTTP. We reported this to the creators of S3 Browser, who quickly put out a new release that changes this default setting. The change was made in S3 Browser version 7.6.9.

For older versions, ensure you check the box for "Use secure transfer." Additionally, you can enforce SSL for accessing an S3 bucket by using the condition "aws:SecureTransport" in your AWS policies. We recommend users of S3 Browser consider upgrading to the latest version to take advantage of encrypted transport.
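
As an example of that condition in practice, here is a minimal boto3 sketch applying the standard aws:SecureTransport deny pattern as a bucket policy. The bucket name is a placeholder, and any existing bucket policy would need to be merged rather than overwritten:

    # Deny all S3 actions that arrive over plain HTTP for one bucket.
    import json
    import boto3

    BUCKET = "example-bucket"  # placeholder name

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyUnencryptedTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }],
    }

    boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))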

S3 Browser versions before 7.6.9 did not use encrypted communications by default.

Conclusion

None of the tools we looked at are automatically making S3 buckets public. Unfortunately, users are doing this themselves. Users of S3 Browser have, by default, been accessing S3 buckets over unencrypted HTTP, so if you use that application, we recommend you upgrade to a newer version and ensure you are using SSL/TLS when accessing S3 buckets.

We also believe there continue to be opportunities for AWS, tool maintainers and security practitioners to communicate the potential risk of unsafe configurations. AWS has made excellent UI changes in the past few months to more clearly identify when an S3 bucket has been made public.

Stay in Touch!

If you're interested in protecting the public by identifying and fixing vulnerabilities on a broad scale, apply to join the Duo Labs team at https://duo.com/about/careers.

]]>
<![CDATA[Security Report Finds Phishing, Not Zero-Days, Is the Top Malware Infection Vector]]> thu@duosecurity.com(Thu Pham) https://duo.com/blog/security-report-finds-phishing-not-zero-days-is-the-top-malware-infection-vector https://duo.com/blog/security-report-finds-phishing-not-zero-days-is-the-top-malware-infection-vector Industry News Thu, 29 Mar 2018 08:30:00 -0400

The latest Internet Security Threat Report (ISTR) from Symantec covers the past year in review when it comes to financial and banking security trends, the most common malware infection vectors, mobile malware and ransomware trends and much more. If you don’t have time to read the nearly 90 pages, then skim over some of the top takeaways below:

Infection Vectors & Network Compromise Techniques

According to the report, the top infection vector of malware is spear phishing (used by 71% of organized groups in 2017). Only 27% of the 140 targeted attack groups tracked by Symantec have used zero-day vulnerabilities at any point in the past to infect users.

Top Malware Infection Vectors

Meanwhile, attackers often use lateral movement, a key phase that helps them explore and move across a network to infect computers/targets of interest.

...stolen credentials were the most commonly seen lateral movement technique employed.

With hacking software tools, attackers can obtain credentials from compromised computers, as well as use password hash attacks to authenticate into other computers and servers. Another lateral movement technique used was exploiting open network shares.

If you’re going to be attacked, the chances are that initial compromise … is going to be created by social engineering rather than anything technically sophisticated such as exploit of a zero-day vulnerability.

This concept has long been touted in infosec - the need to focus on security basics, and on controls around identity (both the user and their device), to address social engineering risks.

Phishing attempts and malware delivered via phishing emails can lead to stolen credentials and potentially compromised or malware-infected devices, especially if those devices are unpatched and unprotected against known vulnerabilities.

By pairing multi-factor authentication (MFA) with endpoint policies and controls, you can help secure this new ‘identity perimeter’ against initial attempts to gain remote access to your networks - also referred to as zero-trust in Google’s BeyondCorp model. Learn more about this new security model.

Mobile & Malware Trends

When it comes to mobile devices, only 20% of Android devices were running the newest major version, with 2.3% on the latest minor release. While it’s possible that devices can be running the latest patch on out-of-date systems, keeping user devices as up to date as possible is advised.

The report also found a 54% increase in mobile malware variants, and a 46% increase in new ransomware variants from 2016 to 2017, showing that there’s no sign of malware development slowing any time soon.

Trojans & Financial Security Trends

Other trends include increased cryptocurrency coinmining and banking/financial Trojan activity. One financial Trojan that made a comeback with increased activity in the second half of 2017 is Emotet - malware delivered via large email campaigns, capable of stealing information from infected devices and adding them to a botnet.

Aside from stealing online banking credentials, other Trojans have been seen stealing cryptocurrency wallet logins and other account details. The banking Trojan Dridex will now check the software installed on devices it has infected, enabling remote access for larger fraud attempts if accounting software is installed.

Intelligence Gathering & Industrial Control Systems

In an analysis of targeted attack groups, Symantec found that the majority of groups are focused on intelligence gathering - 90%, to be exact. This makes sense as state-sponsored actors tend to not be directly financially motivated, but rather interested in collecting data that could prove valuable - such as military info or technological advancements.

The ISTR also found a 29% increase in industrial control system (ICS)-related vulnerabilities. At least one large software update supply chain attack was reported each month in 2017 - meaning attackers hijacked legitimate software updates, replacing them with malicious versions for distribution; an attack that could work similarly to compromise internet-enabled devices and industrial controller components.

Attacks against energy and critical infrastructure organizations have been seen recently, as reported in a technical alert released by the U.S. Department of Homeland Security (DHS) and Federal Bureau of Investigation (FBI). The multi-staged intrusions reflect all of the same attack vectors outlined in the ISTR - malware, spear phishing and remote access.

After gaining remote access to energy sector networks, the threat actors were able to move laterally to gather information about industrial control systems, including configuration information on how to access ICS systems on the network.

Learn more about industrial control system, energy and critical infrastructure attacks in:

]]>
<![CDATA[IT Modernization is a Road Trip - Don’t Forget to Pack Your Security]]> srazier@duo.com(Sean Frazier) https://duo.com/blog/it-modernization-is-a-road-trip-dont-forget-to-pack-your-security https://duo.com/blog/it-modernization-is-a-road-trip-dont-forget-to-pack-your-security Industry News Wed, 28 Mar 2018 08:30:00 -0400

“Life moves pretty fast. If you don’t stop and look around once in a while, you could miss it.” - Ferris Bueller

On February 27th, the Office of Management and Budget (OMB) released its final memo outlining the implementation of the MGT (Modernizing Government Technology) Act. And now, poof! Everything is modernized, right?

Er… no.

Washington, D.C. has been buzzing with “IT modernization” talk and it feels like this time it might actually happen. It kinda feels like Christmas. Agencies have never stood still, but they’ve always been hampered by the legacy boogie man. “How does this technology do that?” and, “how does that technology do this,” or “how do I shoehorn this into my legacy infrastructure?” and, as usual, agencies are struggling to keep up even when most folks who are talking about IT modernization are themselves not so sure how all these things will work out.

One thing is clear: things are moving faster than they have in a while, and while this is certainly exciting, it also brings up a concern. In all this swirling wind of modernization goodness, we have to stay vigilant to the fact that the old security models don’t fit this new one.

I think I’ve seen this movie before.

Way back in the mid 90s, I was working at this little company based in Mountain View, CA, called Netscape. It was right around the dawn of the internet age and this time feels a lot like that, and I do not say that lightly. Back then, we were moving at lightspeed to disrupt everything and anything. The web was going to redefine everything we did as employees, as citizens, and as people.

And it did. Collaboration, commerce, governments; you name it. The internet age gave way to the cloud, and, along the way, proved out some amazing things. For example, to do transactions over the internet, you needed security.

Naturally, a thing like SSL (Secure Sockets Layer, the predecessor of TLS - Transport Layer Security), a little thing that was invented at Netscape, had to be built. We started out with non-secure HTTP, which worked well for early academics sharing esoteric thoughts or home cooks sharing pie recipes, but it didn’t work so well for banking transactions. It turns out, people wanted to protect those things, so security had to evolve.

SSL, I should point out, is, was, and always will be the most successful implementation of PKI (public key infrastructure) in the world. For those who have not been exposed to PKI, first, consider yourself lucky; second, it’s the key sharing mechanism that allows you to only share the public part of that key while keeping the private part, well, private. SSL/TLS is the encryption technology that is protecting “the pipe.” When you see that little padlock and you’ve used HTTPS to get to a website - that’s SSL. Strong in its protection and elegant in its simplicity.

The next evolution of PKI, however, was not for the faint of heart. This was defining and implementing the user side of the equation. It was so hard that only our brave colleagues in the public sector and people with enough grit (and bags of money) were able to implement it. As well as a few private sector companies … but very, very few. The reason for this? It is/was hard, really hard, and really expensive.

For many years, we've been working on the user side of the equation in our respective public sector agencies for good reasons. It started in the early 2000s with the Department of Defense (DoD) moving folks to the CAC (Common Access Card), away from passwords and small-use-case Fortezza cards. In 2004, after several high-profile civilian agency breaches, President Bush (the 2nd one) issued HSPD12 (Homeland Security Presidential Directive 12) to move civilian agencies in the same direction by adopting and deploying PIV (Personal Identity Verification) cards. This was a noble pursuit and was working pretty well for physical access things like “can I get into this building” and even for some logical access things, provided the laptop had a smartcard reader.

But this all changed when…

The iPhone broke the world. In the best possible way.

When mobile came on the scene, all the rules changed. We all now have computers in our pocket. We use these computers to access all kinds of things, and it even forced a change on the desktop/laptop/tablet side of the spectrum, forcing those endpoints to be more mobile in their usage and security models.

"Mobile and cloud are like peas and carrots.” - Forrest Gump

The other thing that changed everything was the “where” with regards to your data domicile. Turns out mobile devices are very cloud-centric in their data storage. You can test this out by putting your phone in airplane mode and seeing what you can get done. Mobile and cloud changed everything. The things we were used to doing for security don’t work as well. Things like virtual private networks (VPNs) and CAC/PIV cards.

So just like back when we had nothing, we have to reinvent this thing. We tried to reinvent the identity piece with Derived PIV Credentials (DPC) or PIV-D, but this isn’t working very well. PIV-D is just PKI 2.0 and is still a heavy lift for where we’re headed, and while I think we have the credential creation and enrollment pieces figured out, the credential usage workflows are far from working.

What about VPNs and network security? Mobile and cloud have forced us to rethink that too. Google did a great public service when it fully outlined and documented its journey to a “zero-trust network” security model. You can read a great run-down of this model, written by our own resident security genius, Wendy Nather, HERE. She does it more justice than I ever could, but the gist is: the perimeter as we know it can’t be the only place where security decisions are made, and it becomes less relevant as more data and users live outside your network. You know, mobile and cloud.

With all of this going on, it’s lucky for us that folks like NIST are paying attention, especially on the user identity, access and authorization side. The guidance is really coming into focus to promote easy and effective security for the public sector. Thu Pham, our resident blog goddess, did a much better job than I could of boiling this down when she wrote a few observations at the end of 2017 that are starting to enter our conversations today:

The fact that NIST is providing guidance in SP 800-63-3 to give agencies more modern choices will help them with this journey - allowing for things like biometric identity authentication on a trusted device, and the use of a FIPS-validated hardware token like the Yubico YubiKey as a replacement for a CAC or PIV card. This will make agency life a lot easier as they move forward in the IT modernization journey. IT modernization is about using COTS (commercial off-the-shelf) technologies and services to give agencies the ability to be more agile in deploying and managing their environment and get better security in the bargain.

We all have computers in our pockets - why can’t we have strong authenticators in our pockets, too? The answer is, we already do. And that leveraging of existing, strong, “good enough for commercial market” technology is what this journey is all about.

]]>
<![CDATA[Microcontroller Firmware Recovery Using Invasive Analysis]]> mdavidov@duosecurity.com(Mikhail Davidov) https://duo.com/blog/microcontroller-firmware-recovery-using-invasive-analysis https://duo.com/blog/microcontroller-firmware-recovery-using-invasive-analysis Duo Labs Mon, 26 Mar 2018 08:30:00 -0400

Table of Contents

Introduction
The One-Time in One-Time-Programmable
Inside The Package
Sample Preparation
Lab & Safety Equipment
Acid Etching
MCU Firmware Recovery
Mitigations


WARNING - The experiments summarized in this research are very dangerous. They involve the use of toxic, corrosive chemicals and were performed in a tightly controlled environment. Any attempt to recreate these or similar experiments could result in property damage, serious injury or death. Neither I nor Duo Security Inc. is responsible for any such damage, injury or death.


Introduction

The internet-of-things revolution is here and it is here to stay. From internet-enabled cat boxes to Wi-Fi-controlled stoves, smart devices permeate our lives at an ever-increasing pace. The rush to get items like the next greatest internet-connected wine bottle to market, coupled with the lack of regulatory oversight, frequently puts system security on the back burner; a feature to be “added on later.” Rather than focus on product security, many manufacturers and integrators choose instead to disable hardware debugging functionality and enable firmware readback protection to make vulnerability discovery more challenging.

Once these microcontroller interfaces are locked, there is usually no manufacturer-prescribed way to unlock them without also wiping out the firmware. Historically, there have been ways to bypass these lockouts, often because the manufacturer doesn’t realize how attackers can abuse certain functionality. For instance, manufacturers sometimes unwittingly allow readback of firmware through faulty implementations of the hashing algorithms used to validate flashing at the factory. If a debugging interface is available, researchers can sometimes extract the firmware through side-effect analysis. There are more involved attacks such as voltage or clock glitching using toolkits, like the venerable ChipWhisperer, that inject faults to try to trip up internal subsystem behavior during critical operations.

Another class of attacks, referred to as “invasive,” requires physical access to the silicon dies inside of the package while maintaining chip functionality. These are often dismissed as infeasible for the average security researcher due to the perceived difficulty and expense of IC decapsulation. The goal of this guide is to demonstrate that researchers don’t need a multi-million dollar lab to perform practical invasive attacks against a typical microcontroller, and to detail a novel method of utilizing safer acid mixtures at or below room temperature to decapsulate semiconductor packages that utilize copper interconnects and wires. At the end, I will cover common mitigations employed and how to spot them.

The One-Time in One-Time-Programmable

At their most basic, we can think of hardware configuration flags that allow access to debug interfaces and firmware readout as wires burned into an open circuit. However, manufacturers implement this in many different ways at the silicon level. While eFuses and antifuses do behave in essentially this way by permanently breaking down some conductive or dielectric layers, these are rare in the MCU world; we typically find them in Complex Programmable Logic Devices (CPLDs) and, most notably, Microsoft Corp.’s Xbox 360’s CPU. More often than not, there is a logically (and often physically) isolated area of common flash memory that holds these control bits without providing an external means of resetting them once set. Flash memory itself is based on a type of transistor invented in the 1960s called a floating-gate transistor. These transistors are special in that they can trap and hold charge for long periods. The simplest type of array of floating-gate transistors is called Erasable Programmable Read Only Memory, or EPROM for short.

Eprom Source: Wikipedia

By default, these floating-gate transistors are at a logic high. When a logical low needs to be stored, the control gate is energized by a high programming voltage (VPP in most datasheets) and electrons become trapped in the electrically-insulated floating gate directly below through a charge-injection process called hot-carrier injection. This charge-implantation process changes the electrical characteristics of the transistor. These characteristics can be measured by applying a lower read voltage to the control gate and sensing if the current passes through the transistor, effectively storing a single bit of information.

Microelectronics Source: Wikipedia

These simple EPROMs have a significant limitation: if the cell needs to be updated and reset to a logic-high state, the charge stored in the floating gate needs to be cleared. The primary method for knocking this latent charge out of the floating gate used high-energy photons from a glorified ultraviolet (UV) flashlight called an EPROM eraser. This is why old EPROM packages have quartz windows topped with adhesive stickers protecting them from accidental erasure due to errant UV sources like the sun.

This inability to electrically reprogram EPROMs led to the development of the EEPROM, or Electrically Erasable PROM, which utilizes a second quantum tunneling effect (Fowler-Nordheim tunneling) to electrically drain the charge from the floating gate. Modern flash memory adds a charge pump to create the high programming voltages internal to the circuit and clusters these cells so they can be erased, or “flashed” off, together.

While all these ancillary functions are new, ultimately modern flash memory is still based on the same floating-gate technology as the EPROM. This means that if I expose the floating gate to enough UV light, I can alter its value.

Inside The Package

Before we get ahead of ourselves, it is important to understand the structure of a typical electronic component package. I’ll be primarily focusing on a standard single-die DIP (dual in-line) package, but the methodology and techniques are nearly identical for other types.

What we commonly think of as pins are not actually directly connected to the silicon die. Instead, the pins form a lattice surrounding the die, often referred to as a lead frame and are typically made of either aluminum or aluminized copper. In the center of this lead frame is a paddle that acts as a mechanical support for the silicon die placed atop it.

DIP Source: Wikipedia

Each lead of this lattice is then ultrasonically welded to the silicon die by a 0.003-inch to 0.020-inch thick bonding wire. Manufacturers historically used gold for these bond wires, but now more frequently use copper due to its lower cost and superior electrical characteristics.

One of the major challenges in performing live target analysis on the cheap is keeping these incredibly fragile bond wires intact throughout the decapsulation process. While it is possible to repair broken bond wires, their scale requires expensive, industrial wire-bonding stations, so decapsulation methods should keep this delicacy in mind and avoid bond wire damage.

Once the bond wires are attached between the lead frame and the silicon die, the entire arrangement is placed into an injection molding machine and surrounded with Epoxy Molding Compound (EMC). This compound is typically only 5 percent to 20 percent actual epoxy resin, as the resin has a high degree of thermal expansion that could break bond wires or cause internal stresses that can fracture the die. The majority (60 percent to 80 percent) of the molding compound is silica or alumina filler to compensate for the thermal expansion. The rest is usually a proprietary blend of rubbers, dyes and plastic softeners. The ultimate goal of the various decapsulation methods, when applied to live-target analysis, is to delicately remove this molding compound above the die without damaging it or the surrounding bond wires.

Sample Preparation

Throughout this process I will have to destroy several chips. Unless the target is relatively inexpensive, heading over to a favorite component peddler like DigiKey or Mouser and ordering a fistful of spares is a wise idea. I’ll use them to take measurements, run experiments and refine the overall decapsulation and firmware extraction process. Chemical wet etching, like what I will be doing later, can be a slow and expensive process in terms of reagents consumed. That’s why it is important to leave only the bare minimum of work for the acids and to remove the majority of the molding compound by other means.

Before I remove any material, I need to know where and how much of it to remove and the easiest way to accomplish that is with a belt grinder and a flatbed scanner. I start by clipping off all the leads from one of the sacrificial chips and begin to sand away material from the side of the package where the leads once were.



I’m interested in simply removing bulk material, so I use a coarse-grit sanding belt to progressively sand the edge down. I try to keep it level, but it doesn’t need to be perfect. As grinding progresses, the lead frame fades away and eventually the edge of the silver-colored die itself sitting on a bit of copper becomes visible. Once I hit this point, I stop sanding, clean it with isopropyl alcohol, throw it on my flatbed scanner and scan it at the highest DPI setting. Once scanned, I’m greeted with an image like this:

Die Wires

Here I can make out all the components of the package. Most importantly, just above the die, I will see a series of small metallic speckles, which are the bond wires coming in from the lead frame and approaching the die. This is my first glimpse of the scale, material composition and fragility of these wires. From this image, I measure the width of the die itself, the offset of the die from the “front” and “back” of the package, and the depth of the bond wires as they approach the die. The latter will be the target depth of the material removal, minus a small fudge factor.
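
Because the scan’s DPI is known, converting a pixel distance measured in the image to millimeters is simple arithmetic. A tiny sketch - the DPI and pixel counts below are made-up examples:

    # Convert pixel distances from the flatbed scan into millimeters.
    MM_PER_INCH = 25.4
    DPI = 4800  # hypothetical resolution the package edge was scanned at

    def px_to_mm(pixels, dpi=DPI):
        return pixels / dpi * MM_PER_INCH

    # e.g., if the bond wires sit 310 px below the package surface:
    print(f"bond wire depth: {px_to_mm(310):.3f} mm")  # ~1.640 mm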

Now I can finally go on to removing just the right amount of molding compound. The semiconductor failure analysis industry typically uses a laser ablative process for this step, but utilizes expensive high-speed galvos to rapidly scan the beam. While compound removal is possible with a standard plotter-style laser cutter, it is likely to produce a ridged surface that will etch unevenly and damage the chip in the process. Preferably, I would use an inexpensive PCB mill, but thanks to the scale, even a 3D printer with a Dremel strapped to the side will do.

Dremel

In my case I printed a Dremel flex-shaft holder for my Lulzbot TAZ 6, attached an aquarium pump to blow off the removed material and modified some firmware parameters to get homing and leveling working properly. To hold the chip in place during the milling operation, I soldered a DIP socket to some perf-board and stuck it down to the build plate with some painter’s tape.

Dip Socket

Using the cross section image captured earlier, I created a 3D model of the package. I prefer Fusion 360, but any CAD package will suffice. In the model, I created an elongated well going down to - but not touching - the bond wires. How close really depends on the repeatability of one’s process, but I shoot for less than 0.1 millimeter. The closer I can get, the faster the etching will go. I also like to add a pair of 0.5-millimeter-deep holes on opposing sides of the chip in which I can place a depth gauge to determine how (not) level my build plate is and whether I’m cutting to the depth that I intended. I also add a chamfer to the pocket so that the die can be UV-exposed at an angle if necessary.

3D Model of Package

To machine the cavity, I use inexpensive 1.0-millimeter end mills and generate the tool paths accordingly. I then place the chip into the DIP socket and execute the G-code that controls the milling device. A truly repeatable setup would allow for progressively increasing the cutting depth and testing chip functionality between passes to get as close to the bond wires as possible.
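
For anyone without CAM software on hand, a serpentine pocketing toolpath is simple enough to generate directly. A rough, untested sketch - the dimensions, feeds and depth-per-pass values are illustrative, not recommendations:

    # Generate naive serpentine pocketing G-code; origin at one pocket corner.
    def pocket_gcode(width, length, depth, stepover=0.8, cut=0.05, feed=60):
        """All dimensions in millimeters, feed in mm/min."""
        lines = ["G21 ; millimeters", "G90 ; absolute coordinates",
                 "G0 Z2.000 ; start at safe height"]
        z = 0.0
        while z < depth:
            z = min(z + cut, depth)
            lines += ["G0 X0.000 Y0.000", f"G1 Z{-z:.3f} F{feed // 2} ; plunge"]
            y, flip = 0.0, False
            while y <= length + 1e-9:
                x_end = 0.0 if flip else width
                lines.append(f"G1 X{x_end:.3f} F{feed} ; cut across")
                y += stepover
                if y <= length + 1e-9:
                    lines.append(f"G1 Y{y:.3f} ; step over")
                flip = not flip
            lines.append("G0 Z2.000 ; retract between depth passes")
        return "\n".join(lines)

    print(pocket_gcode(6.0, 3.0, 1.5))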



Lab & Safety Equipment

In high concentrations, nitric acid is some pretty scary stuff. Even in moderate concentrations it will release poisonous nitrogen oxide gases (NOx) that will kill you. It is of paramount importance to perform the etching in a well-ventilated area with appropriate personal protection equipment.

Most etching procedures call for the handling of large volumes of extremely expensive and reactive fuming nitric acid and holding it close to its boiling point (83 degrees Celsius). The methodology covered here uses a chemical process to concentrate small volumes of more dilute nitric acid into its fuming counterpart at or below room temperature, making the procedure much safer due to the reduced reactivity and fuming, at the cost of increased reaction run-time. The following protections should be considered the absolute bare minimum requirements and are unique to the considerations of my lab. Additional safeguards may be necessary and implementing the right ones requires expertise, research and planning.

Fume Hood & Ventilation

Inexpensive acid-safe fume hoods are available from overseas sellers or from surplus suppliers at local universities and on Craigslist. To keep costs down, I built my own. When designing an enclosure, it is important to consider material reactivity with the chemicals involved. I settled on two of the largest polypropylene containers from my local Home Depot. One donated a large portion of its side to become a sliding sash while the other became the hood body. A three-way outdoor power splitter and a 4-inch starting collar were siliconed in place.

Fume Hood

While this is far from ideal fume hood geometry due to its lack of back baffles and front air foils, as well as its being made out of a Sterilite box, it functions perfectly well through the application of brute suction force. Attached to it through a series of HVAC hoses and mating plates is a single-horsepower industrial blower motor that can pull 500 cubic feet of air per minute.

HVAC

This is a ridiculous amount of airflow, but it was what I had on hand. I ended up adding a type of variable transformer called a Variac to control its speed. There are less expensive and better-suited blower fans available online with built-in speed controllers. To test the fume hood, I used a high-volume party smoke machine to make sure air was flowing inward by moving it around the entire perimeter of the hood. I then placed the smoke machine nozzle inside the hood and verified the absence of vortices that could direct fumes outside of the hood.

Personal Protective Equipment

While no respirator without an external air supply can filter out NOx fumes, there are plenty that will filter out sulfuric and side-reaction fumes. After familiarizing myself with types of respirators, I purchased a full-face respirator and an appropriate filter cartridge.

Respirator

When dealing with fuming nitric acid, even in tiny quantities, nitrile or latex lab gloves will simply not suffice. Viton is a synthetic rubber that is highly resistant to nitric acid attack and is used extensively in the petrochemical industry for O-rings and gaskets. Luckily, one can also purchase Viton-coated butyl gloves.

Viton Gloves

Of note is that these gloves are sensitive to acetone, which can make them tacky. One should not rely on gloves for protection; they are there only as a backup. I tend to wear a pair of nitrile gloves under the Viton gloves to make the outer gloves easier to take on and off repeatedly.

A lab coat, meanwhile, isn’t just for costume parties; it’s an invaluable protective layer separating you from a really bad time if something goes wrong. When choosing a lab coat, I look for thick, natural fibers that will take time to soak through. Buttons take time to undo so I try to pick one with snaps that I can remove quickly.

A Plan

The most important thing to have is a plan. I try to ask myself what can go wrong at every step and what I can do to mitigate those risks. If my exhaust fan dies, what am I going to do? If a fire starts do I have an appropriate means to extinguish it? Where will I put acid-contaminated tools and pipettes? If I have an acid spill, how will it be contained and neutralized? Can I get to an eyewash station while blind? If something happens to me, can someone help me without endangering themselves? Have I understood the Safety Data Sheet (SDS) for each of the chemicals I am using and are they on hand?

Acid Etching

Acid Etching

What actually happens during a nitric acid wet etch of epoxy molding compound? Nitric acid typically acts in three distinct ways: as a proton donor, as a strong oxidizing agent and as a nitrating agent. When it comes into contact with the molding compound, nitric acid oxidizes the resin hardener within, causing the epoxy to break up and release the silica filler. However, dilute nitric acid also reacts with copper to produce copper oxide and copper nitrates, which are soluble in water, effectively dissolving the metal. Due to a spike in the price of gold and the superior electrical characteristics of copper, the semiconductor industry has been steadily shifting from gold bond wires to copper.

Copper Bond Wire Market Source: JIACO Instruments

To keep delicate copper bond wires from dissolving, the failure analysis industry uses temperature-controlled and highly concentrated (>98 percent) white fuming nitric acid (WFNA), with a small amount of concentrated sulfuric acid added. While the WFNA reacts with the copper metal to produce NOx gases, water, copper oxide and copper (II) nitrate, the concentrated sulfuric acid acts as a dehydrating agent, keeping the copper (II) nitrate from becoming soluble. It quickly forms a protective crust around the copper metal, preventing further acid attack. This is done while holding the reaction mixture at a lowered temperature (10 degrees Celsius) to slow the normally vigorous reaction. Fuming nitric acid can be used by itself to achieve this, but as it reacts, more water is generated, increasing solubility over time and wearing away the protective nitrate coating, further eroding the bond wires. This property of concentrated nitric acid as it reacts with copper is called copper passivation.


Credit: NileRed

Fuming nitric acid is an extremely dangerous chemical to make, transport, store and use. This makes it fairly difficult to obtain, and it comes at great expense (around $400 per half liter, delivered via special commercial carrier), and even then only after chemical supplier authorization, which can be a lengthy and cumbersome process. Distilling concentrated nitric acid into its fuming variant is stymied by the formation of what’s called an azeotrope with water: the two become inseparable through fractional distillation at a concentration of 70 percent.

Independently synthesizing fuming nitric acid, while possible, is extremely dangerous, as it involves distilling nitrate salts with sulfuric acid. After many experiments and countless failures, what I have developed is a process using concentrated 70-percent nitric acid, which is readily available, safer to store and can be shipped by U.S. carriers to residential addresses for about a tenth the price of fuming nitric acid. The ultimate goal of the industrial preparation is to have a low water content relative to the nitric acid to achieve copper passivation, by utilizing the concentrated fuming variant and taking care of the water generated in the reaction through concentrated sulfuric acid drying.

By having a large excess of concentrated 98-percent sulfuric acid relative to the dilute 70-percent nitric acid, I can bind the water present in the nitric acid and reduce the solubility of the copper (II) nitrate layer protecting the bond wires. This allows me to oxidize the molding compound away while leaving the chip functional in its original package. As an added benefit, the amount of sulfuric acid added can moderate the reaction rate sufficiently to achieve copper passivation and effective molding compound etching at room temperature, requiring no external heating or cooling, albeit at the cost of increased reaction run-time. The target mixture that I have found to work best for samples is a 2:1 ratio (by volume) of 98-percent sulfuric acid to 70-percent nitric acid. I typically prepare only 3 milliliters to 6 milliliters of the mixed solution at a time to keep reagent waste low, preparing more as I run out or as all the nitric acid decomposes out of solution.

Lab Equipment

Materials

In addition to the concentrated nitric and sulfuric acid, to perform this procedure I was going to need some basic lab equipment and chemicals. On the glassware side, I needed a variety of Pyrex beakers ranging from 50 milliliters to 250 milliliters, a set of glass Petri dishes, a 5-milliliter graduated cylinder, a watch glass to fit over the beakers, a glass stir-rod and a Pyrex baking dish to catch any spills. Additionally, I needed a pair of squeezable wash bottles, some pH test strips, a bag of disposable 3-milliliter transfer pipettes, a set of tweezers and a stock of canned air. As far as additional chemicals go, all I needed was some hardware-store grade acetone, a jug of distilled water and common sodium bicarbonate (baking soda) from the grocery store.

Chemicals

Prep

Outside of the fume hood, I fill one wash bottle with acetone and the other with distilled water. In a large beaker, I dissolve baking soda in tap water until the solution is fully saturated and the soda begins to accumulate on the bottom of the beaker. I add more until I have about a quarter-inch of baking soda collected on the bottom to act as a buffer. This will be my neutralizing solution, which I can use to neutralize both acids.

Neutralizing Solution

I lay out my fume hood workspace and mark my beakers for what they are to contain.

After putting on my safety equipment, I turn on the fume extractor and retrieve my acid bottles from storage. Using a transfer pipette, I measure out 2 milliliters of nitric acid in a graduated cylinder, pour it into an empty 50-milliliter beaker and cover with a watch glass.



Using a fresh transfer pipette, I measure out 4 milliliters of sulfuric acid in a graduated cylinder. Note that concentrated sulfuric acid is much more viscous than the nitric acid. I add this slowly to the beaker with the nitric acid, 1 to 2 milliliters at a time. When I add sulfuric acid to the nitric acid, the sulfuric acid will react with the water in the nitric acid in an exothermic manner. It is important to add the sulfuric acid to the nitric acid and not the inverse, as there is a risk of flash boiling. I keep an eye on the solution, and when I see steam I stop and wait for the solution to cool down. I repeat this process until all the sulfuric acid is mixed into the nitric acid, forming the etchant, or etching solution. Visible wisps of acid fumes will be present, and I check the solution’s temperature with an infrared thermometer.



After covering the etchant with a watch glass, I let it sit until it returns to room temperature, about 10 to 20 minutes.

Etching

Finally, I can begin the actual etching process. After placing one or more of my samples into a Petri dish, I draw a small amount of the prepared etchant into a transfer pipette and place 4 to 5 drops of it into each milled cavity to fill it. I cover the samples with the other half of the Petri dish and observe.



The reaction is slow to start, but the acid should begin to discolor. Eventually, small bubbles of gas will begin forming inside the cavity. If they don’t, I adjust the acid ratio slightly to suit the unique characteristics of the particular molding compound, or lightly warm the cavity with a hot-air gun.



After 20 to 30 minutes, new bubbles should no longer be forming. Using tweezers, I pick up the package and carefully tap out the acid into a waste acid beaker, return it to the Petri dish and refill the cavity with fresh etchant. Bubbles will likely be much less vigorous on subsequent etching passes.



After performing this process two or three times, or until the acid appears to have a diminished effect, I fill a 50-milliliter beaker with approximately 15 milliliters of acetone, enough to fully submerge the sample. Using a pair of tweezers, I submerge the entire chip in the acetone and waft it around gently. The aim here is to wash the insoluble silica filler and disintegrated epoxy out of the cavity to expose fresh molding compound to the etchant. It is important to place the sample into the acetone and not acetone into the sample, as the residual acid can react exothermically, building up heat that invigorates the reaction and can destroy the bond wires.

Silica grit will start to accumulate on the bottom of the acetone beaker. If the bond wires are not yet visible when I lift the sample out of the acetone, I use a squeeze bottle of acetone to gently wash the cavity further. I dump the contents into a larger, uncovered waste acetone beaker and rinse the residue out with fresh acetone.



Before introducing fresh acid, it’s critical that the chip be fully dry of acetone residue. Using a can of compressed air, I gently blow out the cavity until the surface appears dry.



I return the sample to the Petri dish and repeat the process from the beginning with multiple rounds of etching and acetone washing. The number of times I have to repeat this process depends on how close I was able to get to the bond wires. If the process stalls and bubbles stop forming or are extremely fine, I use a pipette to blow some air at my acid beaker; if I don't see any white nitric fume wisps, the majority of my nitric acid has decomposed and I need to mix a new batch. Eventually, the bond wires will begin to emerge and, after a few more etching passes, the die itself will be revealed.



Once the die is sufficiently exposed, it is time to wash the sample and remove any remaining acid residue. I gently, but thoroughly, rinse the sample with acetone before dunking and wafting it in a beaker of distilled water. I decant the water into an acid-waste beaker, refill with fresh distilled water and let it sit for 5 minutes while the acid residue is displaced. Periodically, I test the pH of the water wash and repeat the decanting and washing steps until the pH of the solution is a neutral seven and remains stable for half an hour. Finally, I remove the sample and set it out to dry.

Cleanup

With the package decapped and the die exposed, it is time to clean up and dispose of all the waste. I begin slowly adding the unreacted etchant into the neutralizing solution. When the two come into contact, a large amount of gas is generated that will foam up, so I go slowly. Once no more gas is being generated, I test the pH and make sure it is still at seven. I repeat this process for my acid waste and wash the pipettes and Petri dish in the solution. After letting the waste acetone evaporate naturally under the fume hood until only a thick sludge is left, I neutralize it as well. I wash all equipment with hot, soapy water and leave it to dry. Finally, I wipe down the interior work surfaces with a paper towel wetted with the saturated neutralizer.

MCU Firmware Recovery

With the unlocked sample decapped and working, I can start to characterize the internal storage to determine which portion of the die to mask off. Here I will be focusing on the familiar, one-time-programmable PIC16C54A. Looking at the die under a microscope, I can easily spot the floating-gate transistor array by its relatively large, uniform texture surrounded by row and column drivers:

Array

To characterize EPROM storage, I will be programming the device to see if I can alter its contents by exposing the die to UV light. To program and read the contents of this microcontroller, I use a multi-programmer like the MiniPro TL866.

Multi-Programmer

The default state for floating-gate transistors is logic high, so reading the contents of this 12-bit-word chip shows pages of 0FFF:

12-Bit

I flip every bit in the chip by filling the buffer with zeros and programming it:

Bit Flip
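Generating that all-zeros test image takes only a couple of lines. Here is a minimal sketch in Python, assuming the programmer accepts a raw binary image with each 12-bit word packed into a 16-bit cell (the exact image format your programming software expects may differ):

# Sketch: an all-zeros image for the PIC16C54A's 512-word program memory.
# Assumes one 16-bit cell per 12-bit word; adjust for your tool's format.
WORDS = 512
with open("zeros.bin", "wb") as f:
    f.write(b"\x00\x00" * WORDS)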

To expose the die to UV light and knock the trapped charge out of the floating gates, I use a vintage AT&T ES-140T EPROM eraser, but any high-output UV source will do. I insert the package into the eraser and expose it for five minutes.

Eraser

Sticking the exposed sample back in the reader reveals that most of the bits have been reset, with a few stragglers remaining. Rereading the chip also likely shows that some of the bits have flipped again. These are floating gates hovering at the discernible threshold of charge.

Floating Gates

Exposing the die for an additional 10 minutes fully resets the chip to a virgin, unprogrammed state. I flip over to the chip-configuration section, which includes various hardware flags and timer settings such as the Code Protect (CP) flag. I toggle them all on and note how the value of the Config Word changes. Since the default state is logic high, setting these flags actually clears bits.

Flags

After programming these flags and reading back the contents, I find that the output has changed and is returning a (simplistic) hash of its contents. To complete the test, I place the die back in the EPROM eraser and expose it for the same 15 minutes I used to erase the program contents the first time.

Reading back the contents once again, I find... that nothing has happened! All of the configuration bits are still set and the contents are still hashed. Why is this? Either the floating-gate transistors are obstructed by something and less UV light is reaching the floating gates, a different fusible technology is being used, I've just hit an actual invasive-attack mitigation, or some combination of all three.

This is actually a good thing, as it confirms that the configuration flags are stored in a separate structure of the chip. If the configuration flags were reset in the same time period as the floating gates, it might indicate that they are collocated with the main storage area, making manual masking extremely difficult if not impossible.

I test the first hypothesis by exposing the chip for an additional half hour. To my great relief, all of the configuration bits are reset, including CP. This implies that these transistors are imperfectly shielded by a countermeasure; some UV light is still bleeding into the floating gates, but at a reduced rate. I can further test this by pulling a trick out of bunnie's sleeve and trying to shine more light under the shielding by holding the chip at an angle relative to the light source, a method made possible by initially milling out a chamfered pocket in the package and mounting it at an angle in the EPROM eraser. After exposing the package for only 15 minutes, I verify that the shielded configuration flags have been reset.

Shield Array

With decapsulation and UV-erasure characterization done, the only thing left to do is shield the main floating-gate array from the UV light by manually masking off that area of the die with a non-conductive, UV-opaque material, the easiest and most forgiving of which is simple nail polish.

Using a steady hand and a needle tip covered in a bit of nail polish, I carefully cover the gate array under a microscope.

Nail Polish on Gate Array

I don’t worry too much about making a sloppy mistake. As long as I don’t nick the bond wires I can soak the entire package in acetone to dissolve the nail polish and try again. I verify that I can apply a sufficiently well-defined mask by programming the microcontroller with my test pattern and resetting the configuration bits. If so inclined, I could also use this to narrow down the location of the relevant configuration fuses by selectively masking off more and more portions of the die. After this, I am ready to apply what I have learned with these tests to a locked chip and extract its firmware.

Mitigations

While this approach works for many microcontrollers, more security-focused devices, such as SIM cards, Trusted Platform Modules (TPMs) and hardware security modules (HSMs), as well as other applications where system integrity is paramount, employ hardened chipsets that mitigate these and other invasive attacks. Let's take a look at one such device, the Yubico YK4 Nano.

Yubico Nano

Decapsulating the YK4 and sticking it under an optical microscope reveals, well, not a whole lot. Pretty much every active part of the die apart from the bond pads is shielded by a metal layer that prevents the kind of visual inspection and simple UV tampering I described earlier.

Metal Layer

One would have to de-process the device further using more exotic (and dangerous) acids or by wet lapping to reveal the true structure hidden underneath. These are destructive processes that render the circuit non-functional.

Acid Etch Die

That's not to say these types of defenses can't be bypassed, but getting around such protections usually requires significant investment. Other mitigations focus on detecting decapsulation, such as placing light sensors on the die itself to detect when the chip powers up after the molding compound has been removed. SIM cards often include an active mesh layer that can detect when a single trace has been cut.

If you found this interesting and want to learn more about IC security, individual feature identification and decapsulation methods, there is no better free resource than the siliconpr0n wiki. For hands-on instruction, Texplained offers a very technical training course covering everything here, as well as reverse engineering the implemented logic and extracting encrypted ROMs by hijacking internal circuitry.

]]>
<![CDATA[Energy & Critical Infrastructure Alert: Industrial Control System Data Stolen]]> thu@duosecurity.com(Thu Pham) https://duo.com/blog/energy-and-critical-infrastructure-alert-industrial-control-system-data-stolen https://duo.com/blog/energy-and-critical-infrastructure-alert-industrial-control-system-data-stolen Industry News Thu, 22 Mar 2018 08:30:00 -0400

The latest technical alert from the Department of Homeland Security (DHS) and Federal Bureau of Investigation (FBI) warns the energy and critical infrastructure sectors about a multi-stage intrusion campaign, reportedly conducted by Russian government threat actors.

The malware, spear phishing and remote access attacks affect U.S. government entities and organizations in the energy, nuclear, commercial facilities, water, aviation, and critical manufacturing sectors.

After the threat actors obtained remote access to energy sector networks, they moved laterally to collect information about Industrial Control Systems (ICS), the computer systems used to operate critical infrastructure.

Much of the alert's description of the phishing attempts is the same as what I wrote about in New Office of Cybersecurity Proposed in Response to Attacks on U.S. Energy & Critical Infrastructure, based on information from Symantec’s research report.

How Attackers Gained Remote Access to Energy Networks

Compromised credentials were used to access networks where multi-factor authentication wasn't implemented. The threat actors also used scripts to create local admin accounts (disguised as backup accounts). Then they disabled the host-based firewall and opened up a port for RDP (Remote Desktop Protocol) access.

In addition to disabling perimeter-based controls once they gained access to networks with stolen passwords, the threat actors used virtual private network (VPN) software, like the free version of FortiClient, to connect to target networks. They also used free, open-source brute-force password-cracking tools to harvest even more credentials.

And, they manipulated Windows files to redirect user paths to their own remote server, leveraging the Server Message Block (SMB) authentication process to steal users' credentials.

The threat actors targeted workstations, servers and corporate networks with data output from control systems within energy generation facilities - they accessed ICS and supervisory control and data acquisition (SCADA) system files, and copied Virtual Network Computing (VNC) profile and configuration info on how to access ICS systems on the network, according to the alert.

The Need for a Zero-Trust Security Model

The combination of stolen user credentials (identity) and easy bypassing or disabling of perimeter-based controls allowed these attackers to gain access (and maintain persistent access) into energy organizations' networks.

The alert contains a lengthy list of detection, prevention and mitigation strategies to take - including tips on log monitoring and what to look out for; which TCP ports to block; specifics around deploying web and email filters, and more.

But shifting your organization's policies and controls from a network- and IP-based focus to one based on user identity and device health can also help. With the perimeter expanding outward to include identity, securing remote access to organizations' networks becomes more important than ever.

Ensuring a zero-trust environment means assuming that no traffic within an enterprise's network is any more trustworthy than traffic coming from outside the network. Insider risks, vulnerable endpoints, policy gaps and other potential threats require this type of zero-trust security model.

Download Moving Beyond the Perimeter: The Theory Behind Google’s BeyondCorp Security Model to learn more about the philosophy of the new framework.

And read Moving Beyond the Perimeter: How to Implement the BeyondCorp Security Model to learn how Duo Beyond can help you.

In addition to gaining visibility and control over endpoints accessing your networks, you should also deploy technology to provide additional checks to verify your users’ identities (multi-factor authentication). The combination of both healthy endpoints and authenticated users can help prevent potential compromises and data leaks.

]]>
<![CDATA[Behind the Scenes: The Making of the Decipher Teaser Video]]> thu@duosecurity.com(Thu Pham) https://duo.com/blog/behind-the-scenes-the-making-of-the-decipher-teaser-video https://duo.com/blog/behind-the-scenes-the-making-of-the-decipher-teaser-video Industry News Wed, 21 Mar 2018 08:30:00 -0400

I’m a writer - first and foremost. And I typically write tech and security industry papers, ebooks, blog posts, web copy, etc. for my day job - not live-action video narratives or screenplays.

But the lines have started to blur more and more in the past few months, since I came up with a concept for a high-style video that encapsulated not only my personal aesthetic, but also many aspects of the hacker/maker/infosec culture.

The Initial Concept

The idea was to spin a loose narrative focused on a dark, futuristic and apocalyptic mood. I put together a pitch for a grungy, street art-filled, back-alley, black-clad maker film that was not only diverse, but more representative of a realistic, culturally-relevant future.

A Closer Look

Looking closer at the concept, diversity mattered beyond just high-style aesthetics. I also wanted to depict a variety of different types of makers, hackers, tinkerers, etc.

Diverse Makers

That's why I included scenes of not only a traditional, hardware-focused maker space, but also the classic software hacker and a welder. The business-focused CISO also makes an appearance, working late at night in her office.

Aspects of counterculture, street art and free-running urban silhouettes are all scenes that I’ve been close to at some point in my life, and they represent free-thinking, authentic individuality and expressionism.

 

These individuals might seem like they’re loners - but they’re not - they’re all part of a community that has something in common - curiosity. A curiosity that drives them to do what they do, and to question the status quo.

Monolith

The end monolith scene brings each decloaked individual together, drawing them toward the light - similar to how Decipher, a new infosec media site, seeks to inform the public, demystifying cybersecurity and avoiding the FUD (fear, uncertainty and doubt) that is often perpetuated throughout mainstream media’s coverage of security news.

FUD is a common route to take if you're intent on misleading readers or leveraging their lack of experience or knowledge of the security industry in exchange for wider appeal, more clicks, higher engagement, etc. But it's not a particularly great, or honest, way to educate the public. So what is Decipher? This statement from the editors explains:

“Decipher is an independent editorial site that takes a practical approach to covering information security. Decipher stories analyze the news, explore the impact of the latest risks, and provide informative and educational material for readers intent on understanding how security affects our world. Decipher deconstructs security through various forms, including features and shorter articles, podcasts and video, as well as graphics and interactive content.

Decipher amplifies the voices of those who look at security through the prism of how it affects victims, and seeks out trusted, pragmatic, voices that focus on security impact over hype. It isn’t about the coolest exploit, the scariest vulnerability, or the largest breach. Decipher provides context, information, and analysis, not to point fingers or lay blame. We are here to inform and educate, not to speculate.”

The Pitch

I put together my first-ever video concept pitch (it was pretty rough, as you’d imagine - see one slide example below) to show to our Creative Director Pete Baker, Video/Motion Graphics Producer Martin Thoburn and Senior Multimedia Producer Rik Cordero.

Pitch 1 Decipher Teaser Pitch 1.0

They matched my enthusiasm for the concept, and Rik bumped it up to a whole new level as he and Pete worked in an ending I never could have imagined, especially from a production point of view.

Pitch 2.0 (way more awesome than my original one) included a short video edit with cuts from high-fashion, countercultural visual cues found mainly in Y3 commercials, as well as music videos previously directed by Rik and starring Mr. Robot’s Joey Badass. Oh, and a reveal of Dennis Fisher, Decipher Editor in Chief, at the end.

Pitch 2 Decipher Teaser Pitch 2.0

It opened and ended with a glitchy Decipher logo animation and felt like something completely different from what we were used to producing in our day jobs - less of an explainer video, and more intended to produce visual intrigue in its viewers.

The Final Concept

We did not end up casting Joey Badass. However, with a few different contributors and voices involved, the concept of the video shifted to include a more literal plotline - a diverse cast of characters receive a text message calling them from their dark corners of the world, drawing them together, encouraging them to decloak their dark anonymity to meet at a monolith - all symbolic of Decipher’s intent to draw together a traditionally disparate crowd online to find community and insight on the media site.

Here’s the finished product:

 

Watch the following behind-the-scenes cut of the Decipher video to see how it all came together:

 


The Team

In my initial pitch, I envisioned this video as a key way to draw the entire team's core capabilities to the forefront in a fully-integrated and fully-realized artistic expression. And they completely exceeded my expectations, as usual.

Rik and Martin attacked the concept with fervor to make it a reality, calling in a third-party production team run by Producer Garrett Sammons of Nice Shirt Media.

Nice Shirt Media

They wrangled a lot of the on-set logistics and the casting of eight diverse actors who would play the team of hackers, makers, CISO, etc. - as well as the casting of two talented parkour athletes.

Decipher Cast

This was key to match my original vision of people of all backgrounds - different ages, genders, races and personas - playing the roles of vastly different movers, shakers and producers in the infosec world, and the production company helped make that a reality.

Senior Digital Designer Sarah Sawtell took on the roles of art director and wardrobe stylist/seamstress, outfitting the cast in black cloaks to represent the stereotypical anonymity ascribed to the hacker persona. She also created amazing set designs for each of the scenes.

Sarah on Set

She translated Decipher's brand, colors and concept into the cast's wardrobe reveal at the conclusion of the video, emphasizing their individuality and the importance of their different personas - shedding the black hat hoods, quite literally. We also worked with Hair Stylist Stein Van Bael and Makeup Artist Miles Marie to match our original aesthetic.

Wardrobe and Stylists

Sarah S.'s team of art assistants, including Brand Strategist Chrysta Cherrie and Interactive Designer Tracy Toepfer, helped immensely throughout the chaotic first day of shooting.

Tracy also played the role of the CISO (Chief Information Security Officer), a key player when it comes to influencing security strategy and major security decisions at large companies, which can, in turn, affect the privacy of the personal data of millions of customers.

Tracy on Set

Design Manager Steven Samuels and Tracy also designed and helped produce the Decipher posters, stickers and stencil that were wheat-pasted, spray-stencilled and stuck on walls around Detroit.

Decipher Stencil & Stickers

Rik tagged me in to act as the assistant director, a role I tried to fulfill to the best of my ability. I put my past web development project management skills to use, creating a master schedule based on Rik's shot list that accounted for each team, staggering shoot times with hair, makeup and wardrobe, and scheduling times for art production and set production.

Associate Web Developer Sarah Ovresat made the scenes come to life with her beautiful Wacom-tablet illustrations of Rik's storyboards. Without ever setting foot on any of the locations, she turned a few photographs into sketches of the various hackers throughout their scenes, to the benefit of the camera, lighting and directing teams as they moved quickly through the shots.

Decipher Storyboard Illustrations

As tech director, Martin worked closely with our set designer/producers, our art director and our Motion Graphics Artist Hafsah Mijinyawa to create the monolith scene at the end, providing tech support to trigger the lighting sequence perfectly, aligning with the actors’ cues to provide a dramatic finale.

Hafsah & Martin

According to Martin, the entire video system was controlled by a PC hooked up inside, equipped with a high-end graphics card that could output five video signals at a time.

Monolith Tech

One screen was a control monitor and the other four screens were mirrored to the eight different monitors affixed to the sides of the monolith. An iPad running TouchOSC was used to trigger video clips on the PC that was running Resolume Avenue Media server software.

Monolith Screens

Hafsah’s custom motion graphic animations were designed and tested across every type of device and platform to fit our diverse cast and their personas, and they made the monolith scene truly come alive.

Monolith Build

Hafsah also created animations for the digital billboard hacking scene in Detroit, and glyph animations that were “delivered” to each actor’s phone to alert them of the Decipher message.

Decipher Glyph

Our Art Director, Sarah S., enlisted the help of her dad to model and create the physical monolith and podiums, seen in the prototype below:

Monolith Prototype

Finally, Rik expertly directed the scenes, working tirelessly and closely (constantly on his feet throughout a 16-hour shoot) with the lighting, production and camera teams to match the overall aesthetic vision.

Rik on Set

As the concept developed, I knew it would fit not only my personal aesthetic, but also Rik's - I fully trusted he understood and would execute on the original conceptual vision.

The Locations

We shot over a few days in Detroit (location scouting by Rik, Sarah S. and me) and in several manufacturing warehouses located in Ann Arbor (thanks to Chris Oz, our location manager, who also hand-rigged the custom monolith lighting that hung from a tall warehouse ceiling).

Chris on Set

The interior scenes came to life with the expert lighting work done by the production crew and set design/props by Chris and Sarah S., turning the cavernous industrial manufacturing warehouses into eerie and dreamy nightscapes.

Decipher Location, Sets & Lights

In Detroit, we shot mainly around the Eastern Market area, downtown near Campus Martius and the GM building, and in an abandoned warehouse structure - at night, in the freezing cold. During our scout a few days prior, we stumbled upon a mural composed of Decipher brand colors.

Detroit Decipher

Detroit had an ideal aesthetic for certain scenes in this video for a few reasons - it's an emerging city with a varied past; it's come so far and continues to evolve toward a very different future, with an economy bolstered by tech. It's honest, humble and exciting in its possibilities.

Detroit Freerun

It's a city full of art, history and architecture, lending not only the perfect backdrop but also insisting on adding interesting, textured, unscripted scenes - from the hot smoke billowing from manhole covers, to winding dark alleys lined with brick and fire escapes, to buildings covered in murals.

 


Roll the Credits

Huge thanks to everyone who worked on this video:

Talent

Charles Poole, Male Maker 4, Solderer
Mahpara Kahn, Female Maker 1
Santi Nguyen, Male Maker 1, Hacker
Tracy Toepfer, Female Maker 4, CISO
Remy Lewbel, Male Maker 2
Kendall Hall, Female Maker 2, Welder
Anthony Ballios, Male Maker 3
Alison DuBois, Female Maker 3
Paige Martin, Female Freerunner
Vanya Prokopovich, Male Freerunner

Crew

Decipher Video Crew

Rik Cordero, Director
Garrett Sammons, Producer, Nice Shirt Media
Cy Abdelnour, Director of Photography
Kai Dowridge, 1st Assistant Camera
Francis Jeup, 2nd Assistant Camera
Matt Wilken, Production Assistant / Gaffer
Sarah Sawtell, Art Director
Thu T. Pham, Assistant Director
Chrysta Cherrie, Art Assistant
Tracy Toepfer, Art Assistant
Martin Thoburn, Technical Director
Hafsah Mijinyawa, Motion Graphics Artist
Chris Ozminkski, Location Manager / Set Designer
Miles Marie, Makeup Artist / Photo Assistant
Stein Van Bael, Hair Stylist
Justin Erion, Set Photographer
Priscilla Creswell, Production Assistant
Reznor Angel, Production Assistant
Pete Baker, Creative Director / On-Set Photographer
Ben Armes, Associate Video Producer
Sarah Ovresat, Storyboard Illustrator

]]>
<![CDATA[Spotting Misconfigurations With CloudMapper]]> spiper@duosecurity.com(Scott Piper) https://duo.com/blog/spotting-misconfigurations-with-cloudmapper https://duo.com/blog/spotting-misconfigurations-with-cloudmapper Engineering Tue, 13 Mar 2018 11:30:00 -0400

In mid-February, we open-sourced CloudMapper for visualizing AWS environments, and it has proven useful not only for us at Duo, but also across the broader community. In a matter of days, it had over a thousand stars on GitHub and a dozen outside contributors sending in pull requests.

Special thanks to the following for their contributions:

In our announcement post, we mentioned a number of ways this tool could be used. The initial post also showed a demo environment whose architecture looks sound from what can be seen with CloudMapper, shown again here:

Good Demo Environment Initial demo configuration showing a well-architected network

I'll now make some modifications to this architecture to show potential misconfigurations to watch out for that can be spotted visually with this tool.
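If you want to follow along against your own environment, the diagrams below come from CloudMapper's normal command-line workflow. As a rough sketch (the subcommand names follow the project README at the time of writing and may change over time):

python cloudmapper.py prepare --account demo
python cloudmapper.py webserver

The prepare step builds the graph data, and webserver serves the interactive visualization locally.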

Misconfiguration 1: Unnecessary Service Exposure

In this example, services that are not user-facing applications have been made public, despite an architecture that suggests a different model was intended, such as using a bastion host. You can see that the "internal" web servers and databases can all be reached from 0.0.0.0/0, which means from the public internet - anywhere.

This means the bastion host really isn't providing much value here, because you can connect directly to any of the systems. This type of setup can be okay if strong authorization protects access to the resources, but where that protection doesn't exist, often in front of databases, it can prove dangerous.
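You don't have to rely on the picture alone to catch this class of issue. As a minimal sketch using boto3 (assuming AWS credentials and a default region are already configured, and checking only IPv4 rules), you could flag every security group rule open to the world:

import boto3

# Minimal sketch: print security group rules open to 0.0.0.0/0.
ec2 = boto3.client("ec2")
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for perm in sg.get("IpPermissions", []):
        for ip_range in perm.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                print(sg["GroupId"], sg["GroupName"],
                      perm.get("FromPort"), perm.get("ToPort"))

A hit on a database or "internal" web server port here corresponds to exactly the kind of edge shown in the diagram below.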

Demo - All Exposed Environment where everything is public.

In order to simplify the visualization, external IP addresses such as 0.0.0.0/0 hide some of what the internal connectivity looks like. In reality, everything can talk to everything, so a more accurate representation of the graph would be:

Modified - Demo, All Exposed Modified visualization to show a more accurate view of what it looks like when everything is public.

However, if we applied that same accuracy of representation to the initial graph, we would have:

EC2 - Demo, All Exposed Initial demo configuration with the display modified to show that all EC2s can connect to the ELBs which are public.

As you can see above, all of the EC2 instances can connect back to the ELBs. Although this is correct, it makes the graph appear needlessly complex.

Misconfiguration 2: Soft Center

In this configuration, I made a default security group that allows access from that same security group, and then applied this to all resources. The result is that only a few resources are public, which is good, but everything inside the network can talk to everything else.

Demo - Soft Center All resources can communicate with each other

This network configuration can be bad because if an attacker gets inside the network, they may be able to move laterally to any other system more easily. This rat's-nest-looking diagram can usually be spotted before the visualization is even generated, because the "prepare" step of CloudMapper will show "n" nodes and roughly "n²/2" connections.

Misconfiguration 3: Bad Failover

In this environment, an effort was made to have availability-zone failover, but part of the architecture will not be resilient. Multiple ELBs and RDS instances were set up, one in each AZ, but the EC2 instance running the web server exists in only one AZ.

This isn't always bad, depending on the responsiveness needed in a failover situation, as you could have processes to spin up an EC2 instance in another AZ, as is often the case with bastion hosts like the one in this and the other examples. However, an "unbalanced" architecture that straddles multiple AZs or regions can sometimes be more easily spotted visually.

Demo - Bad Failover Architecture that will not be resilient to AZ failover

Misconfiguration 4: Typo in Security Group

This next example looks almost identical to the original demo architecture, except that instead of an external CIDR being labeled "SF Office," it has been labeled "1.1.1.1/2". The reason is that although the known CIDR for the SF Office was configured as "1.1.1.1/32", the security group has a typo that accidentally allows in anything in the whole "/2".

The result is that instead of 1 IP being granted access, roughly one billion IP addresses have been granted access.

Demo - Typo Security Group is accidentally open to a /2 instead of a /32
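The gap between a /32 and a /2 is easy to quantify with Python's built-in ipaddress module, and a quick check like this is a cheap guard against fat-fingered CIDRs:

import ipaddress

# A /32 is a single host; a /2 is a quarter of the entire IPv4 space.
office = ipaddress.ip_network("1.1.1.1/32")
typo = ipaddress.ip_network("1.1.1.1/2", strict=False)  # strict=False allows host bits
print(office.num_addresses)  # 1
print(typo.num_addresses)    # 1073741824, i.e. roughly one billion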

Stay in Touch!

We hope this has been helpful in finding more ways CloudMapper can help you understand your own AWS environments.

If you’re interested in the intersection between security and running a highly-available service on AWS, please contact Duo's Production Engineering team at prodeng-ext@duo.com.

]]>
<![CDATA[Reversing Objective-C Binaries With the REobjc Module for IDA Pro]]> tmanning@duo.com(Todd Manning) https://duo.com/blog/reversing-objective-c-binaries-with-the-reobjc-module-for-ida-pro https://duo.com/blog/reversing-objective-c-binaries-with-the-reobjc-module-for-ida-pro Duo Labs Fri, 09 Mar 2018 08:30:00 -0500

Recently I took a look at a product that manages Apple Inc.’s macOS and iOS devices in an enterprise environment. As part of this work, I performed an analysis of Objective-C binaries running on managed macOS endpoints. I used Hex-Rays’ Interactive Disassembler (IDA) Pro to perform disassembly and decompilation of these binaries.

If you've never programmed on macOS or iOS, you might be unfamiliar with the Objective-C language. It's an object-oriented extension of the C programming language. Programs developed in this language are linked against the Objective-C runtime shared library, which implements the entire object model supporting Objective-C.

One of the goals of the Objective-C runtime is to be as dynamic as possible. One feature of this design goal affects function calls performed on objects. The Objective-C nomenclature refers to these function calls as message passing. Objective-C objects receive these messages, which typically results in one of the object's methods being called; the runtime resolves method calls dynamically. Objective-C source code method calls are converted by the compiler into calls to the runtime function objc_msgSend().

Here we’ll take a closer look at an IDA Pro module, REobjc, that adds proper cross references from calls to objc_msgSend() to the actual function being called.

IDA Pro Cross References and Objective-C

Objective-C calls from one method to another are compiled as calls to objc_msgSend(). One effect of this is that IDA Pro cross references do not reflect the actual functions being called at runtime. This function is defined with the following function signature:

id objc_msgSend(id self, SEL op, ...)

This implies that for any Objective-C method call you make, the first two arguments are the object’s self pointer, and the selector, which is a string representation of the method being called on self. Objective-C methods that take arguments pass those arguments in order after the selector.

Compiling Objective-C Programs

To better demonstrate how Objective-C source is compiled and assembled, the following code example introduces source code using common Objective-C patterns. This init method includes four Objective-C method calls.

Objective-C Method Calls

Conceptually, the compiler takes the Objective-C method calls above and compiles them into C code that resembles the following. This example is actually decompiler output from IDA Pro, but it illustrates how Objective-C calls are converted into C by the compiler. Each of the four Objective-C calls above corresponds to the function calls indicated in the following excerpt.

Function Calls

As shown, the [super init] call is translated into a call to objc_msgSendSuper2(). This is a common pattern used to initialize subclasses. The call to [NSString string] is translated to an objc_msgSend() call sent to the object representing the NSString class. The call to [NSMutableData dataWithLength: _length] is translated into another call to objc_msgSend(), in an example of a class method call with additional parameters.

The last Objective-C call in the example, [[BTGattLookup alloc] init], shows a common object allocate-then-initialize pattern. This shows the alloc message being sent to the BTGattLookup class, which results in an instance of that class. This instance is then the self used in a second objc_msgSend() call to the init method.

Objective-C and the Intel X64 Architecture

In the resulting binary on the Intel X64 architecture, the calls work according to the Intel X64 ABI. Function arguments are passed in registers in the order RDI, RSI, RDX, RCX, R8, R9. This means the RDI register holds the self pointer and the RSI register holds the selector pointer. Arguments to the Objective-C method begin in the RDX register, if necessary.

Arguments Objective-C Method

To properly add cross references from one Objective-C function to another, the values in the RDI and RSI registers must be known. Discovering the values in these two registers is typically straightforward for most calls to objc_msgSend().

Other aspects of Objective-C analysis to keep in mind are the different ways compilers might decide to generate function calls. On X64, the compiler typically generates function calls using CALL and JMP instructions.

It's possible that conditional jump instructions or direct assignments to the instruction pointer are used; the current heuristics in the module don't address those cases. During development, I did not observe binaries that used conditional branches to call Objective-C runtime functions.

The compiler can also encode the function calls as indirect calls or direct calls. In the case of an indirect call, the instruction argument is a register. In the case of a direct call, the instruction argument is some reference to a location in memory. In either case, we must be able to determine if the CALL or JMP references objc_msgSend().

Additionally, to properly track function call cross references, the analysis must track the return values of functions as they are called. In X64, the return value from a function call is stored in the RAX register. If the Objective-C source code first allocates an object and then performs method calls on the resulting object instance, tracking the type of object pointer stored in RAX is necessary to properly understand what object is being passed in calls to objc_msgSend().

REobjc Idapython Module

The primary purpose of the REobjc idapython module is the creation of cross references between Objective-C methods. The module is intended to be easy to use. To use REobjc, open the IDA Pro command window and execute the following lines of Python:

idaapi.require("reobjc")
r = reobjc.REobjc(autorun=True)

REobjc Under the Hood

My intent with the REobjc module is for it to work as simply as possible. However, it might be useful to explain the module’s functionality. This will hopefully help people see how the code works and suggest ways it can work better or be more accurate. Pull requests and discussion are greatly appreciated on this module.

To locate the Objective-C runtime calls we care about, it’s important to understand there are multiple ways compilers may encode the calls in a binary. As we mentioned, calls to functions can either be direct or indirect, and there are a couple of ways the target of the call instruction might be encoded. The Objective-C runtime is linked into all Objective-C programs, and for this reason, all calls eventually land in the imported libobjc.dylib library.

Typically, programs will contain a stub function that merely performs an unconditional jump into the objc_msgSend() function. This allows the library to be loaded at any address, with the loader then performing the proper fixup to let the target program call into the library properly.

In the REobjc module, this is handled by making sure all instances of calls to objc_msgSend() are properly identified. Sometimes the target of the call will be _objc_msgSend, sometimes it will be an imported pointer of the form __imp__objc_msgSend. Because calls may be encoded using either form of these targets, the module locates all forms in the current database.

Hopefully this approach is flexible enough to work with any binary you find. The module retrieves a list of all names in the IDA database using the API idautils.Names() then matches the target functions via a regular expression, storing the matches in an array. During analysis, every candidate call or jump instruction is compared against the list of Objective-C runtime functions, and those that are found to call any form of objc_msgSend() are candidates for having an added cross reference.
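As a rough illustration of that lookup - a simplified sketch, not the module's exact code, and the regular expression here is my own approximation - the matching takes only a few lines of idapython:

import re
import idautils

# Simplified sketch: collect the address of every name in the database
# that looks like a form of objc_msgSend, e.g. _objc_msgSend or
# __imp__objc_msgSend. The real module's expression may differ.
msgsend_pattern = re.compile(r"^_*(imp_+)?objc_msgSend(Super2?)?$")
msgsend_eas = [ea for ea, name in idautils.Names()
               if msgsend_pattern.match(name)]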

The module iterates over all the functions in the target binary, and for each function, iterates over all the instructions in that function. When a target CALL or JMP instruction is identified, the module determines the target of the instruction. If the target is a register, the module walks backward from the CALL or JMP to determine the value of the register. Direct calls are simpler, in that the target of the CALL or JMP is immediately known. In either case, if the target is objc_msgSend(), the CALL or JMP is a candidate for adding a cross reference.

When a call to objc_msgSend() is identified, the first two function arguments must be identified. To reiterate, the first argument is a pointer to the object that receives the Objective-C message, which is called self. The second argument is a pointer to the selector, or message, being passed to the object. Resolving register values is handled in the module by the resolve_register_backwalk_ea() method.

This function is useful even outside of Objective-C reverse engineering. It takes a program location and a string representation of a register name. Starting at the given program location, the function goes backward one instruction at a time, looking for the value assigned to the target register. It does this by checking for the common X64 instruction mnemonics MOV and LEA. Some programs will copy values to and from registers using variables, and the code tracks these kinds of assignments until the value being copied into the target register is known.
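A stripped-down version of that backward walk might look like the following - a sketch against IDA 7's idapython API that ignores function boundaries, stack variables and many operand forms the real module handles:

import idc

def resolve_register_backwalk(ea, reg, max_steps=64):
    """Sketch: walk backward from ea looking for the last MOV/LEA
    that wrote `reg`, returning its source operand as text."""
    cur = ea
    for _ in range(max_steps):
        cur = idc.prev_head(cur)
        mnem = idc.print_insn_mnem(cur).lower()
        if mnem in ("mov", "lea") and idc.print_operand(cur, 0).lower() == reg:
            return idc.print_operand(cur, 1)
    return None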

As we mentioned, there are two registers that are handled in a special way here. The RAX register will contain the return value from a previous CALL instruction, so REobjc.resolve_register_backwalk_ea() will track RAX by considering CALLs. Also, because the RDI register is used as the self pointer in Objective-C, there are cases where RDI is not explicitly set inside a function. This is because the target function being examined is calling methods on its own self pointer. For this reason, when the function walks backward, if RDI is the target register and it’s not explicitly set, the code will perform a lookup to determine the self pointer from the current class.

Once the self and selector pointers are resolved for the RDI and RSI registers, the module will attempt to create a cross reference to the appropriate method. This is done by leveraging the existing Objective-C support in IDA Pro. The module function REobjc.objc_msgsend_xref handles the creation of cross references. The function takes the program location of a CALL, and the program location where RDI and RSI are set, and attempts to add the appropriate cross reference.

It’s important to understand that cross references are only added when the Objective-C method being called is located in the current binary. As I work more in this area, I will consider how best to handle method calls that are located in an imported library.

REobjc: The Future

As with any code, REobjc has a couple of bugs — corner cases where things don’t work quite right. One glaring error occurs when multiple classes have a method with a common name. This happens frequently in Objective-C code where a common parent class is subclassed multiple times. If a parent class P has a method called execute, and there are child classes A and B that reimplement the execute method, REobjc might not (and likely will not) create the proper cross reference to the appropriate subclass reimplementation of the execute method.

That's a significant bug for a tool whose sole purpose is creating valid cross references. To work around it, REobjc iterates over all the matching execute methods and creates a cross reference to the method with the largest address (in other words, the last method named execute in the target binary). During testing around this bug, I found the pseudocode decompiler in IDA Pro suffers from a similar issue: decompiled code I examined referenced the incorrect class in several cases. I will bring this bug to the attention of the developers at Hex-Rays and provide them with the logic I use to resolve the similar bug in REobjc.

This next issue is not so much a bug as a feature I am working on. The current implementation of REobjc is X64-only. This was by design initially, as I was looking at target programs running on that architecture; in addition, I wanted to focus on making my code work, and architecture portability would only have muddied my development efforts. Future development will aim to make the code work for the ARM AArch64 architecture, letting reverse engineers use REobjc on Objective-C binaries using ARM, like those found on Apple's iPhone, iPad, Apple Watch and Mac Pro.

You can find REobjc added to the existing Duo Labs GitHub idapython repository.

]]>
<![CDATA[RSAC 2018 - Year of the User: Designing Effective Security UX & Software Security Maturity]]> wnather@duo.com(Wendy Nather) https://duo.com/blog/rsac-2018-year-of-the-user-designing-effective-security-ux-and-software-security-maturity https://duo.com/blog/rsac-2018-year-of-the-user-designing-effective-security-ux-and-software-security-maturity Press and Events Thu, 08 Mar 2018 08:30:00 -0500

The RSA Conference in San Francisco is one of those seismic events in information security that you can count on every year. Vendors plan launches; practitioners plan evasion tactics for the expo floor; analysts plan meetings; and journalists brace themselves for … well, nobody’s sure, but there’s always something.

What’s going to be in the word cloud this year? We were hearing (and leading) more discussions around the BeyondCorp security model last February, and there will doubtless be greater attention on it this April, as organizations try to solve the problem of the “crunchy outside and soft, gooey inside.”

In a similar vein, we expect to see a renewed spotlight on identity, whether or not you consider it to be the “new perimeter.” In an environment where the only difference between work and home rests on which login name you type into a third-party SaaS application, you can’t escape the topic.

After years of password dumps, account takeovers and PII breaches, one thing is certain: the password has become Public Enemy Number One. With the benefit of 20/20 hindsight, it was probably not a good idea to rely on human memory as the main authentication factor, but it was cheap and freely available at the time.

As companies start to explore the concept of “passwordless” authentication, we’ll see whether we can eliminate “something you know” from the roster, or whether we end up squeezing the balloon and dealing with a different topology without raising the actual security level. In the meantime, the password manager is an interface that shields users from the malignant growth of password strings by generating new, unique passphrases, and we will probably see increased complexity along with the effort to make it more transparent.

As the consumerization of IT grows, it may soon be time to declare the Year of the User. Customers who are used to slick, entertaining UX designs are less willing to put up with enterprise-grade, get-the-job-done interfaces.

At some point, they will push back on being blamed as the weakest link, and will demand better security in applications without having to be "educated" until their ears bleed just to be able to get their work done. The talk by Duo's own Advocacy Manager, Zoe Lindsey, at RSAC addresses this topic: ‘The System... is People!’: Designing Effective Security UX. If you want a sneak peek, take a look at her webinar.

The flip side of making software more usable for people is making it more secure so they don’t have to worry about it. Many programs tend to focus on the OWASP Top Ten because it’s well-defined and finite — but potential flaws are infinite; you need a maturity program that takes into account your whole production stack, including frameworks, platforms, languages and libraries. Mark Stanislav and Kelby Ludwig are going to lay down some Duo truth tracks in their talk, Realizing Software Security Maturity: The Growing Pains & Gains. And because dog food can be pretty tasty if you do it right, Chris Czub will be talking about how Duo does its own corporate security: Corpsec: What Happened to Corpses A and B?

Just when you think this is all too much, remember that we can face it together. Sharing information, even on a one-to-one basis, helps the security industry as a whole. Join us at our booth (#1427) to hear about our latest features and pick up some swag. We look forward to hearing what you have to say about this year's state of security.

]]>
<![CDATA[Duo Mobile: Enhancing Our Commitment to Data Privacy]]> tmccaslin@duo.com(Taylor McCaslin)mhanley@duosecurity.com(Mike Hanley) https://duo.com/blog/duo-mobile-enhancing-our-commitment-to-data-privacy https://duo.com/blog/duo-mobile-enhancing-our-commitment-to-data-privacy Product Updates Thu, 08 Mar 2018 02:30:00 -0500

Below is a letter that was emailed to all Duo administrators on Thursday, March 8, 2018. We have published it here in the spirit of transparency.

At Duo, our goal is to protect your mission. It’s an aspiration that we try to live up to every day through our products, our people and our support. Today, in the spirit of transparency, we wanted to provide insight into a case where we didn’t quite live up to this goal, and what we’ve done about it.

Like many software companies, Duo collects aggregated and pseudonymized usage analytics and performance data that help us understand how our customers are using our products and how we can further improve customer experience and service. These usage analytics include the kind of data you might see in a traditional web analytics tool, such as pseudonymous data about device characteristics, session details and feature usage.

When we introduced analytic collection into our app, we wanted to provide our mobile users with control over their data privacy. We allowed users to easily opt out of this data analytics collection via a simple toggle in the settings of Duo Mobile. Unfortunately, we recently learned that this toggle was not working as expected and Duo Mobile continued sending usage data, even when users had opted out of this feature.

What Did We Do About It?

This issue was brought to our attention by a security researcher, Erin Ptacek, on February 23, 2018. We activated our standard response procedures and within 12 hours we had created, tested and submitted new builds of Duo Mobile to the Google Play (Duo Mobile for Android 3.19.2) and Apple App Store (Duo Mobile for iOS 3.20.4) that temporarily removed our usage data collection tool, and disabled the 'send usage data' toggle in the settings menu. The revised app was available to all customers within 24 hours of the initial report.

Please note that we have also purged all usage data ever collected from this source, since we did not have a clear path to identifying which data had been collected as a result of this bug.

What’s Next?

We sincerely apologize for this oversight in our implementation of this feature. We are currently reevaluating our usage analytics strategy and plan to reintroduce usage analytics collection in a future release of Duo Mobile. Please note that our crash reporting tool was unaffected by this bug and continues to function as expected.

If you have any questions about this issue, or would like more information about our privacy policy, feel free to email privacy@duo.com.

]]>
<![CDATA[Introducing: CloudTracker, an AWS CloudTrail Log Analyzer]]> spiper@duosecurity.com(Scott Piper) https://duo.com/blog/introducing-cloudtracker-an-aws-cloudtrail-log-analyzer https://duo.com/blog/introducing-cloudtracker-an-aws-cloudtrail-log-analyzer Engineering Wed, 07 Mar 2018 08:30:00 -0500

Today we are pleased to announce a new open-source tool from Duo Security for easily analyzing CloudTrail logs from Amazon Web Services (AWS)!

In order to implement the Principle of Least Privilege for the IAM (Identity and Access Management) users and roles in our AWS accounts, we wanted to ensure the IAM privileges these actors were granted were actually being used. Any privileges that have been granted but not used are opportunities to prune down the privileges.

CloudTracker reviews CloudTrail logs to identify the API calls made by an actor and compares them with the IAM privileges that the actor has been granted, identifying privileges that can be removed. Check out our CloudTracker tool on GitHub.

One of the driving motivators for this tool was the realization that as a user assumes roles in an account, or across accounts, it becomes much more tedious to identify what actions they've taken. Amazon advises AWS customers use a multi-account strategy to implement security segmentation, reduce the blast radius in incidents, manage billing, and more.

Duo adheres to this guidance and has many AWS accounts. Duo employees that are allowed to access accounts often do so through cross-account roles, so we had a compelling need to build a tool that could help us identify any unused privileges that had been granted.

For example, imagine you have two users Alice and Bob that use an "admin" role. Their user privileges grant them read access in the account and the ability to assume this "admin" role. Alice uses the privileges granted by this role heavily, creating new EC2 instances, new IAM roles, and all sorts of actions, whereas Bob only uses the privileges granted by this role for one or two specific API calls.

If you looked at what actions were taken by this role, you'd find it was well-used due to Alice, but you wouldn't be able to easily see that Bob was over-privileged. If you want to identify only the actions taken by Bob as the "admin" role, you need to identify every "AssumeRole" call he has made, and then for each session, analyze what was done. This is where a tool was needed to make analysis less tedious and manual.
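For context, the breadcrumbs needed for that analysis live in the CloudTrail records themselves. A trimmed-down AssumeRole event, expressed here as a Python dict with hypothetical values, carries the fields that let you tie a later session's API calls back to Bob:

# Trimmed, hypothetical CloudTrail AssumeRole record.
assume_role_event = {
    "eventSource": "sts.amazonaws.com",
    "eventName": "AssumeRole",
    "userIdentity": {
        "type": "IAMUser",
        "userName": "bob",
        "arn": "arn:aws:iam::111111111111:user/bob",
    },
    "requestParameters": {
        "roleArn": "arn:aws:iam::111111111111:role/admin",
    },
}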

CloudTrail Logs

CloudTracker uses AWS CloudTrail logs and IAM policy information for an account. CloudTrail records the API calls made in an account, but it does have limitations. The most significant is that data-level actions, such as S3 object access, are not recorded. CloudTrail can be configured to log some of these data-level activities, but there are still some AWS API calls that are never recorded in CloudTrail; therefore, CloudTracker can't determine if a user with those privileges is over-privileged.

Instead of using CloudTrail logs, an alternative solution would have used AWS's Access Advisor service, but that has a number of limitations that caused us to focus on CloudTrail logs instead. First, there are no API calls to collect information from Access Advisor. The only way of viewing Access Advisor information is through the web UI. Netflix uses Access Advisor via their Aardvark and RepoKid tools and works around this limitation by using PhantomJS to log in and scrape the data.

Another major limitation of Access Advisor is that you can't trace individual users as they assume roles within an account or across accounts. Finally, the information available from Access Advisor is not very granular. For example, Access Advisor cannot tell you how often a privilege was used, what resources were acted on, or the specific API call used. CloudTracker does not currently take resources into account either, but because its source of truth is CloudTrail logs, it can be modified in the future to display more detailed information and offer tighter privilege-restriction advice based on resource attributes.

Mozilla Hindsight and ElasticSearch

CloudTracker requires you to have loaded your CloudTrail logs into ElasticSearch. There are tutorials available on configuring LogStash or other tools to monitor an S3 bucket and continuously feed CloudTrail logs into ElasticSearch, but we realize that not everyone has this setup or wishes to run and maintain a full-time ElasticSearch cluster. You may wish to only use CloudTracker once a quarter and so you'll need to spin up an ElasticSearch cluster, feed in your data, run CloudTracker, and then shut down the cluster.

Unfortunately, many log ingestion tools were made for tailing logs, not for this use case of quickly ingesting many gigabytes of logs at once. After trialing many of the existing tools for ingesting logs into ElasticSearch, we found Mozilla's Hindsight to be the best for this use case. Hindsight is based on lessons learned from Mozilla's previous log ingestion tool, Heka. Along with the release of CloudTracker, we've included in our repo instructions for ingesting a log archive into ElasticSearch using Hindsight.

Ingesting logs into ElasticSearch with Hindsight can still take hours, but once the logs have been loaded into ElasticSearch, running CloudTracker against it takes seconds.

CloudTracker Use Case Examples

From our scenario earlier, let's use CloudTracker against an account with two users, "alice" and "bob," who can each use an "admin" role and who each have read privileges in the account without assuming a role. Looking at some of the output of CloudTracker, we can see the privileges granted to Alice and whether she has used them or not:

python cloudtracker.py --account demo --user alice
  ...
  cloudwatch:describealarmhistory
  cloudwatch:describealarms
- cloudwatch:describealarmsformetric
- cloudwatch:getdashboard
? cloudwatch:getmetricdata
  ...

This shows that Alice has used some CloudWatch privileges (e.g., DescribeAlarms), has not used others (e.g., GetDashboard), and it is unknown whether or not she has made calls to GetMetricData because that call is not recorded in CloudTrail.

If we now look at the "admin" role and filter to only the privileges that have been used, we can see that two calls have been made with it:

python cloudtracker.py --account demo --role admin --show-used
Getting info for role admin
  s3:createbucket
  iam:createuser

Looking at the calls made by Alice as an "admin", we see she has used these two calls:

python cloudtracker.py --account demo --user alice --destrole admin --show-used
Getting info on alice, user created 2017-09-01T01:01:01Z
Getting info for AssumeRole into admin
  s3:createbucket
  iam:createuser

But if we look at Bob when he assumed the "admin" role, we can see he only made an S3 call to CreateBucket, and therefore we might want to remove some of the IAM service privileges from him:

python cloudtracker.py --account demo --user bob --destrole admin --show-used
Getting info on bob, user created 2017-10-01T01:01:01Z
Getting info for AssumeRole into admin
  s3:createbucket
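
Given output like Bob's, a tighter policy can be rebuilt from just the actions that were actually used. A hypothetical sketch (the resource wildcard would still need manual tightening, since CloudTracker doesn't yet report resources):

import json

used_actions = ["s3:CreateBucket"]  # from CloudTracker's --show-used output

# Rebuild a minimal policy document containing only the actions Bob used.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": sorted(used_actions),
        "Resource": "*",  # tighten once resource-level data is available
    }],
}
print(json.dumps(policy, indent=2))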

Stay in touch!

If you’re interested in the intersection between security and running a highly-available service on AWS, please contact Duo's Production Engineering team at prodeng-ext@duo.com.

]]>
<![CDATA[Decipher: Ushering in a New Era of InfoSec Reporting]]> dugsong@duosecurity.com(Dug Song)jono@duosecurity.com(Jon Oberheide) https://duo.com/blog/decipher-ushering-in-a-new-era-of-infosec-reporting https://duo.com/blog/decipher-ushering-in-a-new-era-of-infosec-reporting Press and Events Tue, 06 Mar 2018 08:30:00 -0500

Take a look at infosecurity-related headlines today, and you’ll probably see:

  • Large-scale PII (personally identifiable information) breach reported at $company
  • $nationstate adversary targeting critical infrastructure in $country
  • A new poorly-branded vulnerability has been announced
  • Yet another Flash 0-day is being exploited in the wild, and so on.

How about the last security advertisement you saw on television or in the airport? Did it involve ominous lines of code, hooded faceless hackers or messages of “it’s already too late?”

Transactional breach-du-jour pieces and scare-tactic advertising that spread FUD (fear, uncertainty and doubt) contribute negatively to an industry whose attitude is already defeatist (“the attackers are already in, what are you going to do?”; “it’s not a matter of if, but when”; and so on).

We know there’s a better way.

Security Without Fear

At Duo, our mission has always been bigger and broader than being just a vendor delivering security products to protect organizations. To paraphrase the L0pht motto, we want to make a dent in the universe and have an enduring, positive impact on the security industry. We've always felt that there was a larger opportunity and responsibility to inform and entertain through an independent media outlet.

Decipher is a new effort to do just that. Besides literally deciphering what's happening in security, Decipher is intentionally taking a different, more positive approach. We want to highlight the progress that the industry is making as a whole: celebrating the wins, profiling the individuals and teams that are making a difference, and noting the culture and context that makes our industry a special thing to be a part of.

We have an incredible team of editors, journalists, and creators on staff, but we also hope to highlight contributors from the community who are already producing amazing content and research.

We hope that Decipher will represent a fresh approach: one that is authentic, approachable, inclusive, positive, and practical in nature.

We're excited and proud to support Decipher in that mission. And we hope you'll join us in fostering a new kind of security media.

-- Dug and Jono

]]>
<![CDATA[Duo Partners With DrFirst to Help Meet EPCS Requirements]]> ubarman@duosecurity.com(Umang Barman) https://duo.com/blog/duo-partners-with-drfirst-to-help-meet-epcs-requirements https://duo.com/blog/duo-partners-with-drfirst-to-help-meet-epcs-requirements Product Updates Mon, 05 Mar 2018 08:30:00 -0500

Summary:

  • Healthcare organizations that want to e-prescribe controlled medications are required to meet EPCS compliance requirements.
  • DrFirst customers can easily meet EPCS requirements for MFA using Duo’s native integration, available through DrFirst EPCS Gold.
  • Duo’s integration with DrFirst is available in the Duo MFA, Duo Access and Duo Beyond editions.

EPCS Drivers for Healthcare Organizations

In the past few years, the opioid crisis has worsened. One of the key drivers of the opioid crisis has been fraudulent prescriptions. Healthcare providers, small and large, are investing in technologies, processes and practices to minimize fraudulent prescriptions.

State governments in New York, Maine, Minnesota and Connecticut have stepped up their efforts, and now enforce Electronic Prescriptions for Controlled Substances (EPCS) requirements. Several other states, such as Virginia and North Carolina, plan to enforce EPCS requirements by 2020.

One of the key EPCS requirements is to ensure doctors are who they say they are before sending e-prescriptions to pharmacies. Duo offers identity proofing and two-factor authentication (2FA) to help meet EPCS requirements. We launched identity proofing last year, and the functionality is discussed in this blog.

Simplifying EPCS Requirements

EPCS requirements are complex, and IT is often concerned about adding security requirements for doctors. Additional steps take time and reduce productivity. Duo solves this problem by offering an easy-to-use push authentication for e-prescriptions.

To approve e-prescriptions, doctors simply tap a button on their mobile devices. With Duo, doctors experience an easier, simpler workflow.

To make deployments easier, we are excited to announce that Duo is available as a native integration with DrFirst. DrFirst customers can now use Duo as their authentication provider for controlled-substance prescriptions. DrFirst serves 67,000+ providers, 1,000+ hospitals and 21,000 ambulatory facilities, and offers an e-prescription module for 304+ EHR vendors.

Deploy Duo Within Minutes

Duo is available as a native integration with DrFirst EPCS Gold, an e-prescribing solution. IT admins can set up Duo with DrFirst in a few minutes and enroll users.

To e-prescribe, physicians start their workflow as usual: they add the required medications and specify the dosage and frequency. At the prescription-signing screen, doctors are prompted to confirm the e-prescription with a Duo Push notification. Confirming a prescription takes just a few seconds.

[Image: DrFirst EPCS Gold]

Today, Duo is used by more than 500 healthcare organizations and 250,000 doctors worldwide. If you are interested in testing or demoing this integration, please get in touch with your account representative or sign up for a free trial at duo.com/trial.

]]>
<![CDATA[Connect with Duo at SXSW 2018]]> cmccoy@duo.com(Colleen McCoy) https://duo.com/blog/connect-with-duo-at-sxsw-2018 https://duo.com/blog/connect-with-duo-at-sxsw-2018 Press and Events Mon, 05 Mar 2018 02:05:00 -0500

We’re excited to be part of SXSW for another year, diving into all of the cool conferences, exhibitions and networking opportunities that this quintessentially Austin event has to offer. If you’ll be there, consider this your invitation to join us at the variety of events we're hosting and participating in – come check us out!

SXSW 2018 kicks off Friday, March 9, and we’ll be off to an exciting start first thing in the morning. Head to the iconic Antone’s club, where Duo Principal Security Strategist Wendy Nather will be a panelist in the talk Practicing Real Security in a Dangerous World from 8:30-10:30 AM. Hosted by Leo Laporte (The Tech Guy on Premiere radio nationwide, founder and host, the TWiT Netcast Network) and joined by Stacey Higginbotham (journalist and host of the IoT Podcast) and Beau Woods (Cyber Safety Innovation Fellow, Scowcroft Center for Strategy and Security), these industry luminaries will discuss the state of consumer security. No RSVP or badge is required – just mark your calendar, grab a coffee, and come on down.

We’ll give you a couple hours to explore and rest, and then as day turns into night the fun really begins! First up, visit our booth at Capital Factory on Friday from 4-8PM. As the official starting point for the Startup Crawl, Capital Factory is a great chance to network with tons of tech professionals under one roof. A variety of teams and departments from Duo will be on hand to tell you about who we are and what we do, and give you the lowdown on the positions we’re hiring for. Plus, if you have a VIP badge you’ll get some awesome exclusive Duo swag!

Once you’ve filled your swag bag and stopped by our booth, head over to our new Austin office. We’ll open our doors for the crawl at 5PM, where you can tour our new stomping grounds while enjoying some live music and refreshments. The party don’t stop… as long as you RSVP here.

While the main events we’re hosting happen Friday, you can find us doing speaking engagements throughout the week. On Saturday, March 10 from 12:30-3PM, we’re teaming up with Voltage Control on an Inclusion and Diversity Panel, featuring our own Diversity and Inclusion Manager Trey Boynton sharing her expertise on why diversity and inclusion is the secret sauce for thriving teams. Along with thought leaders from companies like Atlassian and Uber, the panel addresses everything from getting started with D&I, all the way up to highlighting companies with established D&I programs and what they’re doing right.

How do recruiters successfully scale a startup company while overcoming the obstacles of major growth, supply and demand, and attracting top talent? Hear how our Recruiting Operations Manager Jasmine Burns has helped scale Duo on a panel of Scaling Mobility Startups in Metro Detroit Monday, March 12 from 10-11AM. Other top companies that have scaled successfully in Michigan, like May Mobility and GM’s Maven, will share their stories, secrets, and struggles – as well as their triumphs – for recruiting and retaining hard-to-find talent.

As you prepare for SXSW and make your way to Austin, you can refer to this page to see everything we’re up to. We hope to see y’all there!

]]>
<![CDATA[The Latest Duo Solutions for Healthcare Security at HIMSS 2018]]> noelle@duo.com(Noelle Skrzynski) https://duo.com/blog/the-latest-duo-solutions-for-healthcare-security-at-himss-2018 https://duo.com/blog/the-latest-duo-solutions-for-healthcare-security-at-himss-2018 Press and Events Fri, 02 Mar 2018 08:30:00 -0500

Duo is excited to attend the HIMSS Annual Conference & Exhibition, hosted by the Healthcare Information and Management Systems Society (HIMSS), for its third year in a row! This year, the conference will be held Mar. 5-9 at the Sands Expo in Las Vegas.

Joining us there are more than 40,000 health IT professionals, clinicians, executives and vendors from around the world, for the conference’s more than 300 programs spanning keynotes, thought leader sessions, roundtable discussions and workshops.

We hope you’ll join us, too. Stop by booth #12649 (level 1, hall G) or kiosk #8500-14 (level 2, Veronese, in the Cyber Command Center), where we’ll be showing off features of the Duo Beyond edition: two-factor authentication, endpoint security, secure single sign-on, and remote identity-proofing solutions for improving and enabling security in the healthcare industry.

You can find us at either the booth or kiosk during these hours:

  • Tuesday, March 6 from 9:30 a.m. - 6 p.m. PST
  • Wednesday, March 7 from 9:30 a.m. - 6 p.m. PST
  • Thursday, March 8 from 9:30 a.m. - 4 p.m. PST

Stop by for a hello, answers to your burning security questions, and for some sweet free swag!

In addition to our booth and kiosk hours, Duo will host a talk on BeyondCorp for Healthcare on Thursday, March 8 from noon to 12:20 p.m. at the Cyber Command Center.

The talk will focus on how BeyondCorp fits into the security framework of healthcare organizations, why hundreds of healthcare organizations count Duo among their top three key security tools, and how Duo helps healthcare organizations meet their HIPAA and EPCS authentication compliance requirements.

Compliance and Protecting Patient Data With Duo

To help reduce the risk of unauthorized access to patient data and streamline authentication for healthcare professionals, Duo provides information security solutions that integrate with electronic health record (EHR) systems and offers secure two-factor authentication that’s quick and easy for busy healthcare professionals to use. Duo users can choose from a variety of authentication methods, such as:

  • Single Sign-On - Securely access all enterprise cloud applications by logging into a web portal once.
  • Duo Push - Send a push notification to your device, and log in by tapping ‘Approve.’
  • Phone Callback - Call a phone, then log in by answering and pressing a key.

The HIPAA Security Rule guidelines on accessing electronic protected health information (ePHI) recommend using two-factor authentication to mitigate the risk of lost or stolen credentials that could result in unauthorized access to ePHI.

Two-factor authentication is also required by the Drug Enforcement Administration’s mandates for issuing e-prescriptions. Practitioners must use two forms of identification for identity proofing in order to sign and verify digital prescriptions.

Unfortunately, verifying doctors’ identities can be a tedious process that takes weeks or months. Duo now makes remote identity proofing easier: we’ve partnered with Identity.com to meet Electronic Prescriptions for Controlled Substances (EPCS) identity-proofing compliance requirements within minutes.

Duo also offers comprehensive endpoint visibility, providing insight into who is accessing your systems and applications and what types of devices they’re using, enabling you to establish more granular access controls. With these customizable policies and controls, you can notify users to update their devices or block at-risk devices from connecting to your network, reducing the risk of malware transfer or of external attackers exploiting vulnerabilities to breach your company. Learn more about Duo's solutions for Healthcare.

Altegra Health Case Study

With Duo Security, Altegra Health was able to deploy two-factor authentication to cover Virtual Desktop Infrastructure (VDI) desktops. Duo offered Altegra Health a much easier deployment process, inexpensive overhead costs and minimal strain to their small support team.

Duo alleviated Altegra's concerns about network connectivity issues by offering a variety of different authentication methods, including SMS-based passcodes and phone callbacks for authenticating while offline.

Our admins love Duo's easy and intuitive administrative panel. Our users like that it doesn’t disrupt their workflow more than necessary. — Mark Kueffner, Senior Director of IT Systems Architecture & Operations

Read the full Altegra Health case study, and browse our other case studies.

Healthcare Information Security Guide

New security risks may pose a threat to the privacy and security of patient data. To get an understanding of the latest themes and issues in healthcare information security today, Duo’s put together this Healthcare Information Security Guide.

In this guide, you’ll find:

  • a collection of the most relevant articles on healthcare security
  • a summary of the HHS’s guide to preventing ransomware
  • information security basics to reduce threats to patient data

Download the free guide today.

Guide to Securing Patient Data

Duo developed this guide to examine some of the ways that patient data can be vulnerable and how you can protect it. To learn more about patient data security, download Duo Security's Guide to Securing Patient Data: Breach Prevention Doesn’t Have to Be Brain Surgery.

To help you navigate patient data security, our guide will:

  • summarize relevant health IT security legislation, including federal and state
  • provide information security guidelines on remote access risks and solutions
  • provide extensive security resources and a real hospital case study
  • explain how to protect against modern attacks and meet regulatory compliance with two-factor authentication

Ideal for CISOs; security, compliance and risk management officers; IT administrators; and other professionals concerned with information security, this guide is for IT decision-makers who need to implement strong authentication security, as well as those evaluating two-factor authentication solutions for organizations in the healthcare industry.

Download the free guide today.

]]>
<![CDATA[New Office of Cybersecurity Proposed in Response to Attacks on U.S. Energy & Critical Infrastructure]]> thu@duosecurity.com(Thu Pham) https://duo.com/blog/new-office-of-cybersecurity-proposed-in-response-to-attacks-on-us-energy-and-critical-infrastructure https://duo.com/blog/new-office-of-cybersecurity-proposed-in-response-to-attacks-on-us-energy-and-critical-infrastructure Industry News Wed, 28 Feb 2018 08:30:00 -0500

In response to attacks against power companies and at least one nuclear plant last year, the U.S. Department of Energy (DOE) is establishing an Office of Cybersecurity, Energy Security and Emergency Response (CESER).

Dept. of Energy Cybersecurity Initiatives

According to the DOE, the office will receive $96 million in funding under the fiscal year 2019 budget proposal. The DOE's budget request (PDF) asked for funding for:

  • Preventing and addressing cyberattacks on the energy sector, and securing the DOE enterprise
  • Research and development for electric grid and energy sector cybersecurity
  • DOE enterprise cybersecurity risk management
  • Establishing a separate account for the Office of Cybersecurity, Energy Security and Emergency Response (CESER)

However, as The Hill notes, the budget is only a proposal; Congress has final say on funding levels and will decide whether to fund the new cybersecurity office.

Attacks Targeting U.S. Energy and Critical Infrastructure

Last October, the Department of Homeland Security (DHS) and the Federal Bureau of Investigation (FBI) issued a technical US-CERT (Computer Emergency Readiness Team) alert to inform government agencies and organizations in the energy, nuclear, water, aviation and critical manufacturing sectors about an ongoing "advanced persistent threat" campaign that has been active since at least May 2017.

The alert references research conducted by Symantec - Dragonfly: Western Energy Sector Targeted by Sophisticated Attack Group.

The targets include:

  • Staging targets - Trusted third-party suppliers with less secure networks
  • Intended targets - Main networks of targeted organizations

Threat actors would start by conducting reconnaissance using publicly available information on network and organizational design, control system capabilities, and anything posted to company websites that might contain operationally sensitive information.

In one scenario, the threat actors zoomed in on a high-resolution photo found on a company's human resources page, revealing control systems equipment models and status information in the background, according to US-CERT.

Tactics Used Against Third-Parties

They also accessed third-party networks via their websites, remote email access portals and virtual private networks (VPNs).

In addition, they sent targeted phishing emails to third parties with attachments that attempted to retrieve documents from a remote server using Microsoft's Server Message Block (SMB) protocol. This leaked the users' credential hashes, from which the attackers could obtain the plaintext passwords and log in as authorized users.

Tactics Used Against Intended Targets

The phishing email campaign against the larger, intended target organizations included:

  • Subject lines that disguised the email as a contract agreement - "AGREEMENT & Confidential"
  • Email messages referring to control or process control systems and common industrial control equipment and protocols
  • PDF attachments disguised as industrial control systems personnel resumes, invites and policy docs
  • A malicious link leading to a website hosting a malicious file, presented for users to click should the download not begin automatically

They also used malicious .docx files to collect user credentials, and compromised websites likely to be visited by those in the energy sector to set up watering hole attacks to steal credentials.

According to Symantec, the threat actors also used the lure of fake Flash updates in order to convince users to visit specific websites, allowing threat actors to install backdoors on their target networks.

Security Best Practices

The CERT alert states that the threat actors used compromised credentials to access victims' networks where multi-factor authentication (MFA) was not used.

The clear takeaway is to implement MFA everywhere to counter this threat, using a secure method such as a U2F security token wherever possible. The alert recommends:

Use two-factor authentication for all authentication, with special emphasis on any external-facing interfaces and high-risk environments (e.g., remote access, privileged access, and access to sensitive data).

US-CERT also recommends monitoring VPN logs for abnormal activity, like off-hour logins, unauthorized IP address logins, concurrent logins, etc.
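
As a rough illustration of those checks (the log format here is hypothetical; adapt it to whatever your VPN concentrator actually emits):

from collections import defaultdict
from datetime import datetime

BUSINESS_HOURS = range(8, 18)  # assumption: 8 a.m. - 6 p.m. local time

def flag_vpn_anomalies(events):
    """events: iterable of (user, iso_timestamp, source_ip, action) tuples."""
    active = defaultdict(set)  # user -> source IPs with open sessions
    for user, ts, ip, action in events:
        if action == "login":
            if datetime.fromisoformat(ts).hour not in BUSINESS_HOURS:
                print(f"off-hours login: {user} from {ip} at {ts}")
            if active[user] and ip not in active[user]:
                print(f"concurrent login from a second IP: {user} via {ip}")
            active[user].add(ip)
        elif action == "logout":
            active[user].discard(ip)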

To enforce remote access controls for corporate applications, use a secure access solution that allows you to set application-specific policies based on the trust of your users, devices and risk attributes of the login request.

In the campaign against energy and critical infrastructure organizations above, threat actors were able to log into networks as authorized users, rendering them indistinguishable from the rest of your trusted users.

To establish a zero-trust environment, you need a way to verify your users' identities and devices that isn't reliant on their network location (since you can't trust everyone on your network).

Stronger authentication, secure single sign-on, device certificates and security hygiene checks can help you take the first steps toward implementing this new enterprise security model.

See a full list of the general best practices in Advanced Persistent Threat Activity Targeting Energy and Other Critical Infrastructure Sectors.

]]>
<![CDATA[Duo Finds SAML Vulnerabilities Affecting Multiple Implementations]]> kludwig@duo.com(Kelby Ludwig) https://duo.com/blog/duo-finds-saml-vulnerabilities-affecting-multiple-implementations https://duo.com/blog/duo-finds-saml-vulnerabilities-affecting-multiple-implementations Duo Labs Tue, 27 Feb 2018 09:00:00 -0500

This blog post describes a new vulnerability class that affects SAML-based single sign-on (SSO) systems. This vulnerability can allow an attacker with authenticated access to trick SAML systems into authenticating as a different user without knowledge of the victim user’s password.

Duo Labs, the advanced research team of Duo Security, has identified multiple vendors that were affected by this flaw:

  • OneLogin - python-saml - CVE-2017-11427
  • OneLogin - ruby-saml - CVE-2017-11428
  • Clever - saml2-js - CVE-2017-11429
  • OmniAuth-SAML - CVE-2017-11430
  • Shibboleth - CVE-2018-0489
  • Duo Network Gateway - CVE-2018-7340

We recommend that individuals who rely on SAML-based SSO update any affected software to patch this vulnerability. If you are a Duo Security customer running Duo Network Gateway (DNG), please see our Product Security Advisory here.

SAML Responses, Briefly

The Security Assertion Markup Language, SAML, is a popular standard used in single sign-on systems. Greg Seador has written a great pedagogical guide on SAML that I highly recommend if you aren't familiar with it.

For the purpose of introducing this vulnerability, the most important concept to grasp is what a SAML Response means to a Service Provider (SP), and how it is processed. Response processing has a lot of subtleties, but a simplified version often looks like:

  • The user authenticates to an Identity Provider (IdP) such as Duo or GSuite which generates a signed SAML Response. The user’s browser then forwards this response along to an SP such as Slack or Github.

  • The SP validates the SAML Response’s signature.

  • If the signature is valid, a string identifier within the SAML Response (e.g. the NameID) will identify which user to authenticate.

A really simplified SAML Response could look something like:

<SAMLResponse>
    <Issuer>https://idp.com/</Issuer>
    <Assertion ID="_id1234">
        <Subject>
            <NameID>user@user.com</NameID>
        </Subject>
    </Assertion>
    <Signature>
        <SignedInfo>
            <CanonicalizationMethod Algorithm="xml-c14n11"/>
            <Reference URI="#_id1234"/>
        </SignedInfo>
        <SignatureValue>
            some base64 data that represents the signature of the assertion
        </SignatureValue>
    </Signature>
</SAMLResponse>

This example omits a lot of information, but that omitted information is not too important for this vulnerability. The two essential elements from the above XML blob are the Assertion and the Signature element. The Assertion element is ultimately saying "Hey, I, the Identity Provider, authenticated the user user@user.com." A signature is generated for that Assertion element and stored as part of the Signature element.

The Signature element, if done correctly, should prevent modification of the NameID. Since the SP likely uses the NameID to determine what user should be authenticated, the signature prevents an attacker from changing their own assertion with the NameID "attacker@user.com" to "user@user.com." If an attacker can modify the NameID without invalidating the signature, that would be bad (hint, hint)!

XML Canononononicalizizization: Easier Spelt Than Done

The next relevant aspect of XML signatures is XML canonicalization. XML canonicalization allows two logically equivalent XML documents to have the same byte representation. For example:

<A X="1" Y="2">some text<!-- and a comment --></A>

and

<A Y="2" X="1" >some text</A>

These two documents have different byte representations, but convey the same information (i.e. they are logically equivalent).

Canonicalization is applied to XML elements prior to signing. This prevents practically meaningless differences in the XML document from leading to different digital signatures. This is an important point so I'll emphasize it here: multiple different-but-similar XML documents can have the same exact signature. This is fine, for the most part, as what differences matter are specified by the canonicalization algorithm.

As you might have noticed in the toy SAML Response above, the CanonicalizationMethod specifies which canonicalization method to apply prior to signing the document. There are a couple of algorithms outlined in the XML Signature specification, but the most common algorithm in practice seems to be http://www.w3.org/2001/10/xml-exc-c14n# (which I'll just shorten to exc-c14n).

There is a variant of exc-c14n that has the identifier http://www.w3.org/2001/10/xml-exc-c14n#WithComments. This variation of exc-c14n does not omit comments, so the two XML documents above would not have the same canonical representation. This distinction between the two algorithms will be important later.
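
lxml exposes both variants through etree.tostring, which makes the difference easy to demonstrate. A quick sketch using the two documents above:

from lxml import etree

doc_a = etree.fromstring('<A X="1" Y="2">some text<!-- and a comment --></A>')
doc_b = etree.fromstring('<A Y="2" X="1" >some text</A>')

def c14n(el, with_comments=False):
    # Exclusive XML canonicalization (exc-c14n), as commonly used by SAML
    return etree.tostring(el, method="c14n", exclusive=True,
                          with_comments=with_comments)

print(c14n(doc_a) == c14n(doc_b))              # True: comments are omitted
print(c14n(doc_a, True) == c14n(doc_b, True))  # False: the comment survives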

XML APIs: One Tree; Many Ways

One of the causes of this vulnerability is a subtle and arguably unexpected behavior of XML libraries like Python’s lxml or Ruby’s REXML. Consider the following XML element, NameID:

<NameID>kludwig</NameID>

And if you wanted to extract the user identifier from that element, in Python, you may do the following:

from defusedxml.lxml import fromstring
payload = "<NameID>kludwig</NameID>"
data = fromstring(payload)
print(data.text)  # prints 'kludwig'

Makes sense, right? The .text attribute extracts the text of the NameID element.

Now, what happens if I switch things up a bit, and add a comment to this element:

from defusedxml.lxml import fromstring
payload = "<NameID>klud<!-- a comment? -->wig</NameID>"
data = fromstring(payload)
print(data.text)  # should print 'kludwig'?

If you would expect the exact same result regardless of the comment addition, I think you are in the same boat as me and many others. However, the .text API in lxml returns klud! Why is that?

Well, I think what lxml is doing here is technically correct, albeit a bit unintuitive. If you think of the XML document as a tree, the XML document looks like:

element: NameID
|_ text: klud
|_ comment: a comment?
|_ text: wig

and lxml is just not reading text after the first text node ends. Compare that with the uncommented node which would be represented by:

element: NameID
|_ text: kludwig

Stopping at the first text node in this case makes perfect sense!

Another XML parsing library that exhibits similar behavior is Ruby's REXML. The documentation for its get_text method hints at why these XML APIs exhibit this behavior:

[get_text] returns the first child Text node, if any, or nil otherwise. This method returns the actual Text node, rather than the String content.

Stopping text extraction after the first child, while unintuitive, might be fine if all XML APIs behaved this way. Unfortunately, this is not the case, and some XML libraries have nearly identical APIs but handle text extraction differently:

import xml.etree.ElementTree as et
doc = "<NameID>klud<!-- a comment? -->wig</NameID>"
data = et.fromstring(doc)
print(data.text)  # prints 'kludwig'

I have also seen a few implementations that don’t leverage an XML API, but do text extraction manually by just extracting the inner text of a node’s first child. This is just another path to the same exact substring text extraction behavior.
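
For contrast, a comment-safe extraction gathers every text node rather than stopping at the first. One way to do that in lxml (a sketch, not a claim about what any particular SAML library does):

from lxml import etree

payload = "<NameID>klud<!-- a comment? -->wig</NameID>"
data = etree.fromstring(payload)

# XPath's text() matches only text nodes, never comments, so the
# identifier is reassembled in full instead of being truncated.
print("".join(data.xpath("text()")))  # 'kludwig'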

The Vulnerability

So now we have the three ingredients that enable this vulnerability:

  • SAML Responses contain strings that identify the authenticating user.

  • XML canonicalization (in most cases) will remove comments as part of signature validation, so adding comments to a SAML Response will not invalidate the signature.

  • XML text extraction may only return a substring of the text within an XML element when comments are present.

So, as an attacker with access to the account user@user.com.evil.com, I can modify my own SAML assertions to change the NameID to user@user.com when processed by the SP. Now with a simple seven-character addition to the previous toy SAML Response, we have our payload:

<SAMLResponse>
    <Issuer>https://idp.com/</Issuer>
    <Assertion ID="_id1234">
        <Subject>
            <NameID>user@user.com<!---->.evil.com</NameID>
        </Subject>
    </Assertion>
    <Signature>
        <SignedInfo>
            <CanonicalizationMethod Algorithm="xml-c14n11"/>
            <Reference URI="#_id1234"/>
        </SignedInfo>
        <SignatureValue>
            some base64 data that represents the signature of the assertion
        </SignatureValue>
    </Signature>
</SAMLResponse>

How Does This Affect Services That Rely on SAML?

Now for the fun part: it varies greatly!

The presence of this behavior is not great, but not always exploitable. SAML IdPs and SPs are generally very configurable, so there is lots of room for increasing or decreasing impact.

For example, SAML SPs that use email addresses and validate their domain against a whitelist are much less likely to be exploitable than SPs that allow arbitrary strings as user identifiers.

On the IdP side, openly allowing users to register accounts is one way to increase the impact of this issue. A manual user provisioning process may add a barrier to entry that makes exploitation considerably more difficult.

Remediation

Remediation of this issue somewhat depends on what relationship you have with SAML.

For Users of Duo’s Software

Duo has released updates for the Duo Network Gateway in version 1.2.10. If you use the DNG as a SAML Service Provider and are not at version 1.2.10 or higher (at the time of writing this, 1.2.10 is the latest version), we recommend upgrading.

Learn more in Duo’s Product Security Advisory (PSA) for this vulnerability.

If You Run or Maintain an Identity Provider or Service Provider

The best remediation is to ensure your SAML processing libraries are not affected by this issue. We identified several SAML libraries that either leveraged these unintuitive XML APIs or did faulty manual text extraction, but I'm sure there are more libraries out there that don't handle comments in XML nodes well.

Another possible remediation could be defaulting to a canonicalization algorithm such as http://www.w3.org/2001/10/xml-exc-c14n#WithComments which does not omit comments during canonicalization. This canonicalization algorithm would cause comments added by an attacker to invalidate the signature, but the canonicalization algorithm identifier itself must not be subject to tampering. This modification, however, would require IdP and SP support, which may not be universal.

Additionally, if your SAML Service Provider enforces two-factor authentication, that helps a lot because this vulnerability would only allow a bypass of a user’s first factor of authentication. Note that if your IdP is responsible for both first factor and second factor authentication, it’s likely that this vulnerability bypasses both!

If You Maintain a SAML Processing Library

The most obvious remediation here is ensuring your SAML library is extracting the full text of a given XML element when comments are present. Most SAML libraries I found had some form of unit tests, and it was fairly easy to update the tests which extracted properties like NameIDs and just add comments to pre-signed documents. If the tests continue to pass, great! Otherwise, you may be vulnerable.
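
A sketch of what such a test can look like; extract_name_id is a hypothetical stand-in for whatever your library actually uses to pull the identifier out of an assertion:

import xml.etree.ElementTree as et

def extract_name_id(xml_doc):
    # Hypothetical extraction helper: join every text node so a comment
    # inside the element cannot truncate the identifier.
    return "".join(et.fromstring(xml_doc).itertext())

def test_name_id_survives_comment_injection():
    doc = "<NameID>user@user.com<!---->.evil.com</NameID>"
    assert extract_name_id(doc) == "user@user.com.evil.com"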

Another possible remediation is updating libraries to use the canonicalized XML document after signature validation for any processing such as text extraction. This could prevent this vulnerability as well as other vulnerabilities that could be introduced by XML canonicalization issues.

If You Maintain an XML Parsing Library

Personally, I think the number of libraries affected by this vulnerability suggests that many users assume XML inner-text APIs are not affected by comments, and that could be a motivating factor to change an API’s behavior. However, I don't think there is a clear right answer for XML library authors, and a very reasonable action may be keeping the APIs as they are and improving the documentation surrounding this behavior.

Another possible remediation path is improving the XML standards. Through my research, I did not identify any standards that specified the correct behavior, and it may be worth specifying how these related standards should interoperate.

Disclosure Timeline

Our disclosure policy can be found at https://www.duo.com/labs/disclosure. In this case, because the vulnerability impacted multiple vendors, we decided to work with CERT/CC to coordinate disclosure. The following is a high-level disclosure timeline:

  • 2017-12-18: CERT/CC contacted and provided vulnerability information.
  • 2017-12-20: CERT/CC follows up with questions about the issue.
  • 2017-12-22: Response to questions from CERT/CC.
  • 2018-01-02 to 2018-01-09: Additional email discussion with CERT/CC about the issue.
  • 2018-01-24: CERT/CC completes internal analysis and contacts impacted vendors.
  • 2018-01-25: Vendors acknowledge CERT/CC report. Additional communication with CERT/CC and vendors to further explain the issue and other attack vectors.
  • 2018-01-29: An additional potentially impacted vendor is identified and contacted by CERT/CC.
  • 2018-02-01: Duo Labs reserves CVE numbers for each impacted vendor.
  • 2018-02-06: Draft of CERT/CC vulnerability technical note reviewed and approved by Duo.
  • 2018-02-20: Final confirmation that all impacted vendors are ready for the disclosure date.
  • 2018-02-27: Disclosure.

We would like to thank CERT/CC for helping us disclose this vulnerability, and we appreciate the efforts of everyone contacted by CERT/CC who responded quickly to this issue.

]]>