When Twitter officials announced late last week that they had discovered a problem in the company's systems that required all of its users to reset their passwords, it raised a lot of questions from both users and security experts. The questions have centered mainly on how this problem could have occurred, but the more interesting ones concern Twitter's response.
The problem that Twitter disclosed on May 3 is a relatively common one. Sometime recently, the company’s security team noticed that user passwords were being logged in plaintext before going through hashing, which would obscure them. Here’s how Twitter CTO Parag Agrawal explained it:
“We mask passwords through a process called hashing using a function known as bcrypt, which replaces the actual password with a random set of numbers and letters that are stored in Twitter’s system. This allows our systems to validate your account credentials without revealing your password. This is an industry standard,” he wrote in a post explaining the problem.
“Due to a bug, passwords were written to an internal log before completing the hashing process. We found this error ourselves, removed the passwords, and are implementing plans to prevent this bug from happening again.”
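Agrawal's description of hashing can be sketched in code. The following is a minimal illustration, not Twitter's implementation: it substitutes Python's standard-library PBKDF2 for bcrypt (both are salted, one-way password-hashing functions), and the function names are hypothetical. The point is that only the salt and digest are stored, so the system can validate credentials without ever retaining the password itself:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); only these, never the password, are stored."""
    salt = os.urandom(16)  # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-hash the candidate and compare; the stored digest never reveals the password."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

# The bug Twitter described would be the equivalent of a line like:
#     log.info("login attempt, password=%s", password)  # plaintext hits an internal log
# executing before hash_password() ever runs.
```

A logging call placed upstream of the hashing step, as in the commented example, is exactly the kind of slip Agrawal described: the hashing itself works as intended, but a copy of the plaintext escapes into an internal log first.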
Companies such as Twitter that handle large volumes of user data typically have sophisticated security programs with multiple layers of protection and detection designed to prevent and alert on new attacks. They also have internal systems that can help find issues with the way that user data is handled and protected, and when those systems identify a problem, security teams usually simply fix it and go on about their business. Twitter’s team is fixing the problem that led to the password-logging issue, but rather than just moving along, the company also chose to disclose the bug and tell users what happened.
This is not typical behavior, especially in the current climate, which dictates that all security incidents come complete with a cycle of outrage and blame. Plenty of incidents involve mistakes or intentional choices that probably should incite some level of anger from the people affected, and that’s why companies don’t often go public with these incidents unless they’re legally obligated to do so. The disincentives to admit a security mistake are enormous.
Meanwhile, there are almost no incentives for companies to make the choice that Twitter made. You can be sure that there were a number of meetings at Twitter HQ in the days leading up to the announcement of the password bug in which executives and PR folks discussed the public reaction that they knew would come their way once word of the problem hit. If the last 15 years have taught enterprise executive teams anything, it’s that the Internet loves nothing more than a fresh target. And yet Twitter decided to own up publicly to its mistake, a choice that had very little potential upside.
In some ways Twitter was protected from the downside of this choice by its sheer size. The risk that disclosing this problem would cost Twitter any meaningful number of users or partners was essentially zero. Hundreds of millions of people use the service on a daily basis and rely on it for news and connections. That's not going to change just because people had to reset their passwords. It's just not. But that fact shouldn't detract from the value of what Twitter did. Publicly admitting that even highly sophisticated and mature security teams can and do make this kind of mistake not only shows users and customers that the company is willing to be transparent on these issues, but also shows other companies that being honest and upfront about missteps should be the rule rather than the exception.