<![CDATA[Decipher]]> https://decipher.sc Decipher is an independent editorial site that takes a practical approach to covering information security. Through news analysis and in-depth features, Decipher explores the impact of the latest risks and provides informative and educational material for readers curious about how security affects our world. Fri, 29 May 2020 00:00:00 -0400 en-us info@decipher.sc (Amy Vazquez) Copyright 2020 3600 <![CDATA[NSA Warns Russian Attackers are Exploiting Old Exim Flaw]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/nsa-warns-russian-attackers-are-exploiting-old-exim-flaw https://duo.com/decipher/nsa-warns-russian-attackers-are-exploiting-old-exim-flaw Fri, 29 May 2020 00:00:00 -0400

A notorious and highly capable attack group that is part of the Russian intelligence community has been exploiting a known vulnerability in the Exim mail transfer agent for several months, compromising unpatched servers in the United States as part of its intrusion campaigns, the National Security Agency said in a rare advisory.

The warning from the NSA attributes the attacks to the group known as Sandworm, a team that is part of Russia’s General Staff Main Intelligence Directorate (GRU) military intelligence organization, and is allegedly responsible for some of the more damaging attacks in recent years. Sandworm has been linked to the attack that caused a major power outage in Ukraine in 2015, the NotPetya attack that paralyzed hospitals, shipping companies, and dozens of organizations around the world in 2017, and several smaller intrusions, as well. The group tends to focus much of its attention on entities in Ukraine, but on Thursday the NSA warned enterprises and other potential targets that Sandworm is using the Exim vulnerability, which was disclosed and patched in June 2019, to gain a foothold on target networks.

“The Russian actors, part of the General Staff Main Intelligence Directorate’s (GRU) Main Center for Special Technologies (GTsST), have used this exploit to add privileged users, disable network security settings, execute additional scripts for further network exploitation; pretty much any attacker’s dream access – as long as that network is using an unpatched version of Exim MTA,” the NSA advisory says.

“When the patch was released last year, Exim urged its users to update to the latest version. NSA adds its encouragement to immediately patch to mitigate against this still current threat.”

The Exim vulnerability (CVE-2019-10149) is an unusual one, and the main attack vector is a local one. But there is a remote exploitation method, too, and that’s what the Sandworm team appears to be using. Exim is a mail transfer agent that is used widely on Unix and Linux systems and it’s included in several Linux distributions.

“To remotely exploit this vulnerability in the default configuration, an attacker must keep a connection to the vulnerable server open for 7 days (by transmitting one byte every few minutes). However, because of the extreme complexity of Exim's code, we cannot guarantee that this exploitation method is unique; faster methods may exist,” the advisory from Qualys, which discovered the bug, said.
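
CVE-2019-10149 affects Exim versions 4.87 through 4.91, with 4.92 shipping the fix. A minimal, hypothetical helper (not part of any official tooling) for flagging a reported version as inside the affected range might look like this:

```python
# Sketch: flag Exim versions affected by CVE-2019-10149.
# Affected range is 4.87 through 4.91 inclusive; 4.92 contains the fix.

def exim_vulnerable(version: str) -> bool:
    """Return True if the reported Exim version falls in the affected range."""
    try:
        major, minor = (int(p) for p in version.split(".")[:2])
    except ValueError:
        return False  # unparseable version string; can't judge
    return (4, 87) <= (major, minor) <= (4, 91)
```

The version string itself could come from a server banner or from running `exim -bV` on the host; admins on 4.91 or earlier should treat the machine as exposed until patched.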

"They know for a fact that the Russians are using this exploit to get into networks with some success."

Researchers at GreyNoise Intelligence, which gathers data on scan activity, have seen exploits against the Exim vulnerability since it was first disclosed and the activity has been relatively steady ever since, save for a spike in September. Andrew Morris, the founder of GreyNoise, said the exploit activity shows the characteristics of selective targeting, rather than mass exploitation.

“This is an exploit that we know isn’t being aggressively wormed in the way that some others are. It’s not being thrown into botnets. It’s quiet,” Morris said.

“It’s being used more manually and selectively by the bad guys. There’s more target checking and verification. This does not appear to have been weaponized in that way.”

GreyNoise's data shows 165 servers that have tried to exploit the Exim vulnerability against its sensor network, and many of those servers are in the U.S. or China. Only 11 of the servers are located in Russia.

A public warning from the NSA about a specific vulnerability is unusual and carries with it a weight that a similar advisory from a threat intelligence company or even another government agency does not. The agency does not publish this kind of advisory very often, and when it does, there is a specific reason behind it.

“The NSA does this for a reason. They know for a fact that the Russians are using this exploit to get into networks with some success,” Morris said. “They have a culture of silence and they do not put things like this out there without a reason.”

]]>
<![CDATA[Decipher Podcast: Alex Pinto]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/decipher-podcast-alex-pinto-2020 https://duo.com/decipher/decipher-podcast-alex-pinto-2020 Thu, 28 May 2020 00:00:00 -0400

]]>
<![CDATA[Malware Infects NetBeans Projects In Software Supply Chain Attack]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/malware-infects-netbeans-projects-in-software-supply-chain-attack https://duo.com/decipher/malware-infects-netbeans-projects-in-software-supply-chain-attack Thu, 28 May 2020 00:00:00 -0400

The Octopus Scanner malware has compromised 26 open source projects hosted on GitHub in a new supply chain attack, GitHub Security Lab said.

The investigation began on March 9 when security researcher “JJ” notified GitHub’s Security Incident Response Team about GitHub repositories actively serving malware. The malware was designed to compromise NetBeans projects, and all affected projects were serving backdoored code, GitHub’s security researchers found. The project owners were unaware their projects had been compromised.

"The malware is capable of identifying the NetBeans project files and embedding malicious payload both in project files and build JAR files," JJ wrote.

The developer’s computer would be infected by Octopus Scanner after the developer forked or cloned a compromised repository, said GitHub security researcher Alvaro Muñoz in a blog post describing the investigation. The malware executed only on machines where the developer used the NetBeans IDE (a cross-platform integrated development environment used to write Java applications), and did nothing if no NetBeans projects could be found. The first thing the malware did on the machine was to look for the NetBeans directory in order to enumerate all the projects in that directory. The malware then changed the configuration file to ensure that the payload—a dropper—was injected into the resulting JAR binary every time a project was built.

Executing the resulting JAR file on the machine would give the malware “local system persistence” on the developer’s machine, and the dropper would install a remote access Trojan (RAT) to communicate with the command-and-control server to receive further instructions. The malware’s command-and-control servers were already unavailable by the time GitHub began its investigation, so the research team was unable to determine the type of tasks the attackers performed on the compromised developer machines.

By infecting NetBeans projects, the malware inserts backdoors into previously clean projects. The malware spreads when developers release the infected code, or commit it to repositories where other developers pick it up.

GitHub identified four samples of the malware during the course of its investigation. The initial version may have been submitted to VirusTotal back in August 2018, and was designed to “only spread through tainted repository cloning and building.” The later versions came with new features and capabilities, including the ability to “spread when any of the resulting build artifacts are loaded and used.”

Octopus Scanner prevented new builds from replacing the compromised one by keeping malicious build artifacts.

"Infecting build artifacts is a means to infect more hosts since the infected project will most likely get built by other systems and the build artifacts will probably be loaded and executed on other systems as well," Muñoz said.

By directly targeting developers, the attackers behind the Octopus Scanner malware could potentially access proprietary information, such as details about the projects the developers were working on, specifics about the production environments, and sensitive information such as database credentials. Once the developers committed the backdoored code into their repositories, the attackers would have access to critical systems within the organization.

The fact that the malware targeted NetBeans suggests this may have been a targeted attack against specific developers, since NetBeans is not widely used among Java developers. Alternatively, the attackers may have already implemented the malware for more popular build systems such as Make, MsBuild, and Gradle, and were expanding the malware's footprint to include other build systems.

"If malware developers took the time to implement this malware specifically for NetBeans, it means that it could either be a targeted attack, or they may already have implemented the malware for build systems such as Make, MsBuild, Gradle and others as well and it may be spreading unnoticed," Muñoz said.

Supply chain attacks undermine the integrity of the components organizations rely on. A software supply chain attack introduces malicious code by injecting it into the tools used to create the application. It also gives attackers wider reach, because anyone who uses the modified tool winds up helping spread the malware. If the integrity of any one step in the software development and delivery ecosystem is weakened, the entire ecosystem is affected.

Attackers have in the past introduced backdoors into compiled code by distributing a tampered version of Apple's Xcode development environment, tampering with software updates, or directly hijacking the update utility itself. There have been many examples of attackers uploading a malicious library with a name similar to a well-known package to package managers such as npm and PyPI.

The method Octopus Scanner used to compromise the build process is “both interesting and concerning,” said Muñoz. It gives the malware an "effective means of transmission" since the affected projects will get cloned, forked, and used on many different systems. "The actual artifacts of these builds may spread even further in a way that is disconnected from the original build process and harder to track down after the fact," he said.

However, software supply chain attacks tend to be rare. It is far easier for attackers to target unpatched vulnerabilities in software than to go after developer tools.

]]>
<![CDATA[Analysis of DNS Traffic Uncovers DDoS Attacks]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/analysis-of-dns-traffic-uncovers-ddos-attacks https://duo.com/decipher/analysis-of-dns-traffic-uncovers-ddos-attacks Wed, 27 May 2020 00:00:00 -0400

Internet usage in 2020 is shaping up to be very different from what it was at the end of 2019. New research into DNS traffic shows where people have been spending their time online and has uncovered previously unknown distributed denial-of-service attacks.

In an analysis of passive DNS cache miss levels for 316 online sites over a two-month period, there was a massive “step up” in traffic volumes, Farsight Security said in its latest report. The company looked at daily DNS transactions across five industries—travel and transportation, retail, streaming video, higher education, and news and opinion sites—and found that many sites had an increase of as much as seven times the number of domain requests, suggesting that attackers may have attempted massive denial-of-service attacks during the study period.

DNS cache misses occur when the data fetched is not present in the cache—meaning there was a request for the address of a domain that was not present in the name server’s store of addresses. Since most users visit a limited number of sites regularly—such as Google, Facebook, Amazon, and Netflix—DNS caching makes looking up addresses more efficient for popular sites. It doesn't make sense for the ISP to repeatedly look up the same sites, so it stores the frequently requested queries in the local cache. Farsight Security used its DNSDB platform to count each day's DNS cache misses.

"If the user's query is one for a name that hasn't been seen and cached recently, the recursive resolver must then chase down the information the user requires," the report said. "That's called a 'cache miss.'"
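
The hit-versus-miss behavior described above boils down to a TTL cache: repeated queries for a popular name are answered from the cache until the record expires, and everything else counts as a miss. A toy model (illustrative only, not Farsight's methodology):

```python
class ResolverCache:
    """Toy model of a recursive resolver's answer cache."""

    def __init__(self, ttl_seconds=300.0):
        self.ttl = ttl_seconds
        self.expiry = {}   # name -> timestamp when the cached answer expires
        self.misses = 0

    def resolve(self, name, now):
        """Record one lookup at time `now` (seconds); return 'hit' or 'miss'."""
        if self.expiry.get(name, 0.0) > now:
            return "hit"                      # answer served from cache
        self.misses += 1                      # cache miss: chase it upstream
        self.expiry[name] = now + self.ttl
        return "miss"
```

In this model, a sudden flood of queries for never-before-seen or pseudo-random names drives the miss counter up sharply, which is the kind of signal the report's cache-miss counts surface.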

Farsight Security said the volume of misses increased by four to seven times at the end of March and the beginning of April. Mid-to-late March, when the shift in DNS traffic became obvious in the data, coincides with the period when many states and countries issued stay-at-home orders due to the novel coronavirus pandemic. More employees worked from home, and many were laid off or furloughed from their jobs. Colleges and universities shifted their classes online, and online shopping soared. Business and leisure travel declined dramatically, and streaming video became the primary form of entertainment.

“The world we inhabit today is NOT the same world we inhabited at the end of 2019,” the report’s authors wrote.

Farsight Security said that the spikes in traffic could represent “denial of service (DDoS) attack traffic reflexively targeting some unrelated third-party site or sites.” However, the company also said there could be alternative explanations for the change in DNS traffic, such as the fact that users may simply be more active online, trying out new forms of entertainment, or developing new interests. Organizations may also be changing their services, such as moving to a content distribution network for increased capacity.

The purpose of the report was not to “attribute” or "apportion" the change in traffic levels, but to report on the “macroscopic phenomenon,” the company said.

"Having run the data, what we're seeing is more traffic in most cases, with some sites exhibiting spikes consistent with DDoS (distributed denial-of-service) attacks exploiting those sites," the report said.

The report includes a plot of every domain the company analyzed, and the fact that something happened is unmistakable. Many of the sites across industries showed a "step" pattern, indicating a significant increase in traffic volume. While some industries showed traffic spikes that could be explained by DDoS attacks, others were not so clear-cut.

For example, fewer people are making travel plans, so it would make sense that traffic to travel and transportation sites such as airlines would be low. However, many people may be hitting those sites in order to cancel pre-scheduled travel and obtain refunds or credits. The data showed more traffic, and some sites had "spikes consistent with DDoS Attacks," the report said.

The site for American Airlines, aa.com, had a significant spike on April 28: 37.3 million Start of Authority (SOA) queries for aa.com. In comparison, Farsight Security counted 54,564 SOA queries for aa.com on March 20. There were differences even within this category: Air Asia was fairly flat, with no spikes, and Austrian Airlines saw a small spike much earlier, in mid-March. Delta Air Lines had an abrupt spike similar to American Airlines, except in early April.

The analysis uncovered at least two distinct reflective DDoS attack patterns among the sites: an attack purely associated with abusive DNS SOA queries, and another which combined abusive DNS SOA queries with abusive DNS TXT queries for wildcarded SPF redirect records.
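
A rough classifier for the two patterns described above might compare a day's SOA query count against a baseline, then check whether abusive TXT/SPF-wildcard queries accompany the SOA flood. The thresholds here are illustrative guesses, not Farsight's actual criteria:

```python
def classify_ddos_pattern(soa_count, txt_spf_count, baseline_soa, spike_factor=10):
    """Classify a day's query counts into the two reflective-DDoS patterns
    described in the report: SOA-only abuse, or SOA plus TXT/SPF-wildcard
    abuse. Thresholds are hypothetical, for illustration only."""
    if soa_count <= baseline_soa * spike_factor:
        return "normal"                      # no SOA spike at all
    if txt_spf_count > soa_count * 0.5:      # TXT volume comparable to SOA
        return "soa+txt-spf"
    return "soa-only"
```

Fed the article's own numbers, the aa.com April 28 spike (37.3 million SOA queries against a 54,564 baseline, with no reported TXT component) would land in the SOA-only bucket.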

Apple sites also saw over 18.5 million SOA queries on April 28, compared to 766,933 SOA queries on March 15. The sites had a high volume of DNS TXT records as well, over 400 times normal levels. The majority of those TXT records were SPF-related, as Apple.com's name servers were set up with wildcards to catch and redirect random queries to _spf.apple.com.

"We believe this may be getting exploited for pseudo-random subdomain DDOS attack purposes," Farsight Security said in the report.

Not every spike signaled an attack, however. Media site Forbes.com had a fairly consistent traffic volume during the first half of March, but in late March, volume "abruptly 'steps up,'" Farsight Security said. Prior to April 1, Farsight Security counted about 61,342.7 queries a day for Forbes.com, but the counts increased by 5.5 times afterwards.

In this case, the number of DNS cache miss queries associated with login.forbes.com increased by a factor of 24 times, which suggests the site was providing more subscriber-only content and required readers to first log in. The number of DNS cache miss queries for aax.forbes.com, which seems to point to the site's online advertising platform, increased by a factor of nearly 11 times.

"When the headlines are all about some new mass shooting or as in this case a virus pandemic, most of the DNS traffic related to those headlines will be due to fraudulent or criminal activity by those hoping to cash in on the public's attention," said Paul Vixie, chairman, CEO, and co-founder of Farsight Security. "Therefore, it is worth our time to study DNS traffic patterns during every global event, to characterize current abuses of the system and to predict future abuses."

]]>
<![CDATA[OpenSSH Will Deprecate SHA-1]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/openssh-will-deprecate-sha-1 https://duo.com/decipher/openssh-will-deprecate-sha-1 Wed, 27 May 2020 00:00:00 -0400

In January, a pair of researchers published details of the first practical chosen prefix collision on SHA-1, showing that the aged hash algorithm, which had already far outlived its usefulness, was now all but useless. All of the major browsers had already abandoned SHA-1, as had most of the large certificate authorities, but it is still in use in many other places, including embedded systems and some cryptography systems.

One of the more widely deployed applications that still supports SHA-1 is OpenSSH, the open source implementation of the SSH protocol that is included in a huge number of products, including Windows, macOS, many Unix systems, and several popular brands of network switches. On Wednesday, the OpenSSH developers said that a future version of the app will drop support for the ssh-rsa public key signature algorithm, which relies on SHA-1.

“It is now possible to perform chosen-prefix attacks against the SHA-1 algorithm for less than USD$50K. For this reason, we will be disabling the "ssh-rsa" public key signature algorithm by default in a near-future release,” the OpenSSH developers said in the release notes for version 8.3 on Wednesday.

“This algorithm is unfortunately still used widely despite the existence of better alternatives, being the only remaining public key signature algorithm specified by the original SSH RFCs.”
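
Administrators who want to get ahead of the change can check whether their SSH configurations explicitly enable ssh-rsa via directives such as HostKeyAlgorithms, PubkeyAcceptedKeyTypes (older releases), PubkeyAcceptedAlgorithms (newer releases), or CASignatureAlgorithms. A heuristic sketch of such an audit (a hypothetical helper, not an OpenSSH tool):

```python
def config_allows_ssh_rsa(config_text: str) -> bool:
    """Heuristic scan of an ssh_config/sshd_config for explicit ssh-rsa use.
    Only flags configs that name the algorithm; a config that omits these
    directives falls back to the binary's compiled-in defaults, which a
    text scan cannot see."""
    directives = {"hostkeyalgorithms", "pubkeyacceptedkeytypes",
                  "pubkeyacceptedalgorithms", "casignaturealgorithms"}
    for raw in config_text.splitlines():
        line = raw.split("#", 1)[0].strip()   # drop comments and blanks
        if not line:
            continue
        parts = line.split(None, 1)
        if len(parts) != 2 or parts[0].lower() not in directives:
            continue
        value = parts[1].strip()
        removing = value.startswith("-")      # "-ssh-rsa" removes the algorithm
        algos = value.lstrip("+-^").replace(",", " ").split()
        if not removing and any(a.lower() == "ssh-rsa" for a in algos):
            return True
    return False
```

On recent OpenSSH builds, `ssh -Q sig` lists the signature algorithms the installed binary supports, which is the more authoritative check for the defaults a config file doesn't override.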

The attack that Gaetan Leurent and Thomas Peyrin published against SHA-1 is not simple, and the researchers spent a couple of months on the computations necessary to produce the collision. In practical terms, the attack would allow an adversary to produce an identical SHA-1 digest for two distinct files. An attacker could use this technique to produce a forged but legitimate certificate or impersonate another user by creating a duplicate PGP key. At the time the research was published, several popular open source projects still had at least partial support for SHA-1, including GnuPG and OpenSSL, in addition to OpenSSH. GnuPG implemented some countermeasures to the attack, while OpenSSL removed support for SHA-1-signed certificates.
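
In code terms, a collision is simply two distinct inputs that hash to the same digest. A minimal illustrative checker for that condition:

```python
import hashlib

def is_sha1_collision(a: bytes, b: bytes) -> bool:
    """True when two distinct inputs share a SHA-1 digest -- the condition
    a chosen-prefix attack can now manufacture for attacker-chosen
    prefixes, per Leurent and Peyrin's research."""
    return a != b and hashlib.sha1(a).digest() == hashlib.sha1(b).digest()
```

Any system that treats a SHA-1 digest as a unique identifier for a certificate, key, or file is exposed once such pairs can be manufactured on demand; SHA-256 (`hashlib.sha256`) has no known attacks of this kind and is the usual migration target.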

But until now, OpenSSH had still included support for SHA-1, an algorithm that was designed 25 years ago, in an era when only governments and maybe a handful of research institutions had computers powerful enough to have any chance of breaking it. That hasn’t been the case for many years, as Leurent and Peyrin showed, using a combination of commodity gaming PCs and rented GPUs for their attack.

A few weeks after the publication of the SHA-1 collision research, the OpenSSH team indicated that it would be removing support for the algorithm, while at the same time adding support for the use of U2F hardware security keys as a second factor for authentication. That move added an extra layer of defense against credential-theft attacks and gave users more options for strong authentication.

One of the major implications of OpenSSH dropping support for ssh-rsa, and thus SHA-1, is that embedded devices that rarely, if ever, get updates and implement OpenSSH may be exposed indefinitely. Embedded Linux is a popular operating system for resource-constrained devices, and OpenSSH is a common choice for secure remote login on those devices.

]]>
<![CDATA[Stolen Credentials Behind Supercomputing Attacks]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/stolen-credentials-behind-supercomputing-attacks https://duo.com/decipher/stolen-credentials-behind-supercomputing-attacks Tue, 26 May 2020 00:00:00 -0400

More than two weeks after attackers took several top academic supercomputing sites offline, one of the larger sites, ARCHER, is back online, but many of the others are still unavailable as investigators work to understand exactly what’s happened.

The attacks began hitting some of the larger supercomputers in the world on May 11, and within a few hours, the teams operating sites such as ARCHER at the University of Edinburgh, Taurus at the Technical University of Dresden, Hawk at the Stuttgart High Performance Computing Center, and the Leibniz Supercomputing Center had taken the supercomputers offline to figure out what was going on. The intrusions shared some common traits and appeared to have taken advantage of compromised credentials. But it is unclear exactly what the attackers’ goals were in targeting supercomputing sites. Although some of the incidents showed evidence of cryptomining, others did not, and there does not seem to be any indication that the attackers were trying to leverage the sites’ massive compute power for any specific task.

ARCHER, like many of the targeted sites, handles mainly academic workloads and has users spread across many industries and around the world. Many of those users also have accounts on other academic supercomputing sites, and that cross-pollination may have been one of the things that the attackers took advantage of in the string of intrusions. The team at the Leibniz Supercomputing Center at the Bavarian Academy of Sciences and Humanities found that external accounts that had been compromised were one of the causes of the intrusion there.

“The possibility of attack resulted from the combination of two circumstances: A number of compromised user accounts on external systems whose private SSH keys were configured with an empty passphrase; An error in the software that can be used to obtain administration rights after regular login,” a statement from Leibniz from May 21 says.

“It is not yet known what goals the perpetrators pursued with the attacks. We have so far found no evidence of concrete activities such as accessing or manipulating data records from regular system users.”

"It is essential to ensure that the private key on the computer from which the login is made must not be assigned an empty passphrase."

Many of the affected supercomputing sites require external users to log in using SSH, and the combination of compromised accounts with no SSH passphrase gave the attackers the inroad they needed to gain access. If users who had access to several separate supercomputing sites reused their credentials on two or more of those sites, that would have been an easy leap for the attackers.

As a result of the attacks, the affected sites are resetting passwords and requiring users to have SSH configured with a passphrase. ARCHER, which came back online on May 21, is instituting those controls.

“ARCHER users will be required to use two credentials to access the service: an SSH key with a passphrase and their ARCHER password. It is imperative that you do not reuse a previously used password or SSH key with a passphrase,” the site’s status message from May 21 says.

The team at Leibniz also has invalidated all user passwords and SSH keypairs in the wake of the attack and has not yet set a date for the future availability of the supercomputer.

“All public secure shell keys stored on the HPC systems by regular users are invalidated and can therefore no longer be used for authentication. All users must therefore generate new key pairs, whereby it is essential to ensure that the private key on the computer from which the login is made must not be assigned an empty passphrase,” the Leibniz team said.
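
Whether a private key file is protected by a passphrase can be spotted from its on-disk format: modern OpenSSH-format keys record a cipher and KDF name ("none" when unencrypted), and legacy PEM keys carry an ENCRYPTED header when protected. A heuristic audit sketch (a hypothetical helper, not an OpenSSH tool):

```python
import base64

def key_lacks_passphrase(key_text: str) -> bool:
    """Heuristic: True if a private key file appears to have no passphrase.
    Covers the modern OpenSSH format and legacy PEM headers only."""
    if "BEGIN OPENSSH PRIVATE KEY" in key_text:
        body = "".join(l for l in key_text.splitlines()
                       if "PRIVATE KEY" not in l)   # strip BEGIN/END lines
        blob = base64.b64decode(body)
        # Unencrypted keys record cipher and KDF "none" right after the
        # "openssh-key-v1" magic; encrypted keys name a real cipher/KDF.
        return b"none" in blob[:64]
    # Legacy PEM keys advertise encryption via a Proc-Type/ENCRYPTED header.
    return "ENCRYPTED" not in key_text
```

Generating a fresh, protected keypair is the fix the sites are mandating; with OpenSSH that is `ssh-keygen -t ed25519` and a non-empty passphrase when prompted.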

Several of the other affected sites are still offline, including Taurus and Hawk, with no specific dates set for restarting them.

]]>
<![CDATA[Two Years of GDPR Changed Privacy Landscape]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/two-years-of-gdpr-changed-privacy-landscape https://duo.com/decipher/two-years-of-gdpr-changed-privacy-landscape Tue, 26 May 2020 00:00:00 -0400

Two years may have passed since enforcement of the European Union’s data privacy regulation began, but regulators are just wrapping up the first wave of investigations. Change comes slowly in the realm of data privacy, and it is still too soon to assess the regulation’s impact or effectiveness.

The General Data Protection Regulation (GDPR) gave European regulators the authority to issue heavy fines—up to €20 million ($22.8 million) or up to 4 percent of the organization’s annual worldwide revenue, whichever is higher—to organizations found violating the law. However, there have been only two major fines under GDPR over the past two years: the French data protection authority CNIL’s €50 million ($54 million) fine on Google over Android, and the United Kingdom Information Commissioner’s Office’s £183 million ($221 million) fine on British Airways.
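
The fine cap in GDPR Article 83(5) is the greater of the two figures, which is what makes it bite for large companies. As a one-line illustration:

```python
def gdpr_max_fine_eur(annual_worldwide_revenue_eur: float) -> float:
    """Upper bound on a GDPR Article 83(5) fine: the greater of EUR 20
    million or 4 percent of total worldwide annual turnover.
    Illustrative helper only."""
    return max(20_000_000.0, 0.04 * annual_worldwide_revenue_eur)
```

For a company with €1 billion in worldwide revenue the cap is €40 million; below €500 million in revenue, the flat €20 million figure governs.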

While there have been thousands of complaints against big and small companies, there haven’t been many major cases against technology titans, especially those companies that operate in multiple countries. There are signs that will soon change, as Irish data regulators are expected to announce several decisions shortly. The Irish DPC last week submitted a draft decision to other European data commissioners regarding one of the investigations into Twitter, examining whether the social media company notified the supervisory authorities quickly enough after a data breach and whether it effectively documented the details.

“This own-volition inquiry was commenced by the DPC following receipt of a data breach notification from the controller. The draft decision focusses on whether Twitter International Company has complied with Articles 33(1) and 33(5) of the GDPR,” the DPC said in a statement.

The regulators from other countries have a month to consider the draft decision and lodge “reasoned and relevant objections” if they disagree with the DPC. Disagreements would be resolved by the European Data Protection Board (EDPB). The final decision for this case is expected this summer.

Ireland's Role

The bulk of the investigations into technology companies fall under Ireland’s jurisdiction because European rules specify that complaints are handled by the country where the companies have their European headquarters. Ireland’s privacy watchdog, the Data Protection Commission, is currently juggling 23 investigations into Apple, Facebook, Google, LinkedIn, Tinder, Twitter, and Verizon. There are two investigations into Apple, eight into Facebook, two into Google, one into Instagram (owned by Facebook), three into Twitter, and two into WhatsApp (owned by Facebook).

Irish regulators have sent a preliminary draft decision to WhatsApp, which gives the company a chance to provide additional information for the regulators to consider before coming to a decision on whether WhatsApp was being transparent around what information is shared with parent company Facebook. DPC also said it had completed its inquiries into how Facebook processes personal data (the complaint was filed in May 2018) and was in the process of making a decision. And finally, the commission sent draft inquiry reports to all parties involved in two other cases with WhatsApp and Instagram.

"In addition to submitting this draft decision to other EU supervisory authorities, we have this week sent a preliminary draft decision to WhatsApp Ireland Limited for their final submissions which will be taken in to account by the DPC before preparing a draft decision in that matter also," Deputy Commissioner Graham Doyle said.

Antsy About GDPR

The news that Ireland is moving forward with some of the investigations is a welcome one, especially with GDPR's second anniversary prompting some activists, business leaders, and regulators to question its success in improving consumer privacy. Privacy activist Max Schrems, the honorary chair of advocacy group noyb, criticized the DPC for not issuing “a single fine under the GDPR against a private actor, despite reporting 7,215 complaints in 2019” in an open letter to EU data regulators. The French CNIL took seven months to fine Google over Android's lack of transparency in how it handled data for behavioral advertising, and the DPC was still months away from a final decision in any of the cases against technology companies, Schrems said. “After two years, we feel that the time has come to shine light on the shortcomings of GDPR enforcement as we experience in Ireland and trigger a public debate,” he wrote.

Ireland’s DPC said “procedural queries” had delayed decisions on some of these cases, which was why the investigations were moving so slowly.

The letter from Schrems doesn’t address the fact that Ireland’s DPC has to shoulder a heavy workload because of the sheer number of technology companies headquartered in Ireland. The DPC is also woefully underfunded and understaffed: the 2020 budget is only €16.9 million ($18.5 million), compared to the UK ICO’s €61 million ($66.8 million) and French CNIL’s €20.8 million ($22.8 million). The Irish commissioner, Helen Dixon, said she was “disappointed” the government had allocated “less than one third of the funding” the DPC had requested.

“Europe’s GDPR enforcers do not have the capacity to investigate Big Tech,” was the conclusion Brave, maker of a privacy-focused web browser, drew after analyzing the budgets of various European data protection authorities.

“If the GDPR is at risk of failing, the fault lies with national governments, not with the data protection authorities,” said Johnny Ryan, Brave’s chief policy & industry relations officer. “Robust, adversarial enforcement is essential.”

Ireland isn’t the only country investigating tech firms on a small budget. The authorities in the Netherlands are still investigating Netflix, and Luxembourg has yet to issue a single enforcement notice against Amazon or PayPal. Luxembourg’s watchdog agency has a €5.5 million ($6 million) budget and just 43 employees.

A report from Access Now said European data protection authorities can’t effectively enforce the regulations due to a lack of resources, tight budgets, and administrative challenges. The number of data protection staff has not increased significantly, and most countries said they didn’t have sufficient resources.

“Companies could leverage DPA’s lack of resources, using it to get around the application of the GDPR, or at least significantly delay its effect,” Access Now warned in the report.

The European Commission’s progress report on GDPR is expected in June. While many feel that the slow pace of enforcement means the regulation is due for reform, the European Commission is more likely to reiterate that GDPR is supposed to be a journey, not a quick fix. It takes time to establish procedures for investigations and enforcement mechanisms, and to work out how the appeals process should function. The last thing regulators want is to overlook something during an investigation that could result in decisions being overturned on appeal.

“The GDPR has changed the landscape in Europe and beyond. Nonetheless, compliance is a dynamic process and does not happen overnight,” Věra Jourová, European Commission’s vice-president for values and transparency, and Didier Reynders, the commission’s Commissioner for Justice, said in a statement marking the anniversary.

Changed Landscape

Regardless of how actual enforcement has fared under GDPR, Europe's data privacy law has changed the conversation within governments around the world and for all businesses. While it applies specifically to Europe, it is being used as a blueprint by other countries as they develop their own privacy laws. Countries around the world, such as Argentina, Brazil, Chile, India, Japan, Kenya, and South Korea, to name a handful, have some variation of the law on the books. While the United States still doesn’t have a federal law, several states have started the process of carving out their own data privacy regulations.

From a business standpoint, GDPR is about compliance, but it has also forced businesses to “become more aware of the importance of data protection,” Jourová and Reynders said. They can't just skip over the questions or the requirements in the rush to get to market. It would be hard for an organization to claim it cannot comply with GDPR at this point: companies had two years to prepare before the law went into effect, and have had two years of enforcement since to refine their data storage, use, and collection processes.

While fines are the most obvious way GDPR can force organizations to be careful with consumer data, they aren't the only tool available to regulators. In fact, fines are the easiest part of enforcement. Google’s €50 million fine is a minuscule fraction of its annual revenue. But if a regulator decides companies have to change their business models, or temporarily or permanently stop data collection, that is a significant business disruption. Article 5 of the GDPR stipulates that companies cannot use data for anything other than the purpose for which it was originally collected, and if regulators decide to block certain products or services, the companies would have to make significant changes to their products.

People outside Europe have benefited from the privacy protections because companies realized that it didn't make sense to maintain separate privacy policies and procedures based on the user's country of residence.

“Within two years, these rules have not only shaped the way we deal with our personal data in Europe, but has also become a reference point at global level on privacy,” Jourová and Reynders said.

]]>
<![CDATA[Hacker Allegedly Connected to Collection 1 Credential Dump Arrested]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/hacker-allegedly-connected-to-huge-collection-1-credential-dump-arrested https://duo.com/decipher/hacker-allegedly-connected-to-huge-collection-1-credential-dump-arrested Thu, 21 May 2020 00:00:00 -0400

Authorities in Ukraine have arrested a man they allege to be the hacker named Sanix, who is responsible for putting a massive database of 773 million email addresses and 21 million passwords up for sale last year.

The data was not the haul from a single intrusion at a huge retailer or bank, but rather a hodgepodge collected from various other breaches over the last few years and munged together. Known as Collection 1, the database was posted for sale in January 2019, and researchers quickly dug into it and discovered that much of the information came from known data breaches and was in fact legitimate. The data was soon removed from the forum where it had originally been posted, but it continued to circulate in other places.

This week, the Security Service of Ukraine said it had obtained information that Sanix was a Ukrainian citizen and that it had monitored sales of portions of the database. The service has now arrested a suspect it alleges is Sanix.

“Experts have found that the 87 gigabyte database put up for sale by the hacker is only a small part of the total amount of data he has seized. The hacker had at least seven similar databases of stolen and broken passwords, the total amount of which reached almost terabytes. These included personal, including financial, data from residents of the European Union and North America,” the service said in a statement.

“SBU cyber specialists recorded the sale of databases with logins and passwords to e-mail boxes, PIN codes for bank cards, e-wallets of cryptocurrencies, PayPal accounts, information about computers hacked for further use in botnets and for organizing DDoS attacks.”

The security service said it seized computers and other equipment with two terabytes of allegedly stolen data, along with phones and cash.

The arrest comes two weeks after authorities in Poland and Switzerland arrested a number of people who also were allegedly selling access to large collections of stolen credentials. That group, known as InfinityBlack, was also known to develop and sell hacking tools, and was involved in fraud schemes tied to loyalty programs. Five people were arrested in several locations in Poland as part of the operation.

“The hacking group’s main source of revenue came from stealing loyalty scheme login credentials and selling them on to other, less technical criminal gangs. These gangs would then exchange the loyalty points for expensive electronic devices,” the statement from Europol, which assisted in the InfinityBlack investigation, says.

“The hackers created a sophisticated script to gain access to a large number of Swiss customer accounts. Although the losses are estimated at €50 000, hackers had access to accounts with potential losses of more than €610 000. The fraudsters and hackers, among them minors and young adults, were unmasked when using the stolen data in shops in Switzerland.”

]]>
<![CDATA[Most Applications Contain Vulnerable Open Source Libraries]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/most-applications-contain-vulnerable-open-source-libraries https://duo.com/decipher/most-applications-contain-vulnerable-open-source-libraries Thu, 21 May 2020 00:00:00 -0400

Modern software development relies on open source libraries, even for applications that are sold commercially and aren’t themselves open source. However, developers may not always be aware of how these components introduce vulnerabilities into their code.

Seven in 10 applications use at least one open source library with a security flaw, which makes those applications vulnerable, Veracode said in its latest State of Software Security: Open Source Edition report. The report analyzed 351,000 unique open source libraries across Veracode’s platform database of 85,000 applications. A single flaw in one library can cascade to all applications using that component. The security debt becomes even higher when the vulnerable component is not called directly by the application, but by some other library.

“An application’s attack surface is not limited to its own code and the code of explicitly included libraries, because those libraries have their own dependencies,” said Chris Eng, Veracode’s chief research officer.

In most cases, the vulnerable libraries wind up in applications indirectly, as 47 percent of the open source libraries with at least one vulnerability were “transitive” dependencies, Veracode said. A transitive dependency arises when a library relies on code from other libraries. A developer may explicitly include only one library, but if that library includes another, and that one also pulls in code from yet another, the code the developer is writing winds up with three dependencies, not just one. As applications get more complex, the number of dependencies the developer has to manage grows quickly.

"An application that picks up most of its dependencies via second, third, or even greater degrees of separation from a developer's explicit instruction increases the difficulty of managing those dependencies," the report said.

This is why it is harder for developers to stay on top of making sure they are using the most up-to-date versions of open source libraries. They can keep track of the ones they are using directly, but they often have to trust that the upstream library maintainers are managing the other dependencies.

"All of this imported code represents functionality that your developers did not author, but becomes code you have to manage," the report said.
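The dependency expansion described above can be sketched as a simple graph walk. The package names below are hypothetical; real tools derive the graph from a lockfile or manifest.

```python
# Sketch: resolving transitive dependencies with a breadth-first walk.
from collections import deque

# Hypothetical package graph: each package maps to its direct dependencies.
DEPENDENCY_GRAPH = {
    "my-app": ["web-framework"],
    "web-framework": ["http-client", "template-engine"],
    "http-client": ["url-parser"],
    "template-engine": [],
    "url-parser": [],
}

def resolve(package):
    """Return every library pulled in by `package`, direct or transitive."""
    seen, queue = set(), deque(DEPENDENCY_GRAPH.get(package, []))
    while queue:
        dep = queue.popleft()
        if dep not in seen:
            seen.add(dep)
            queue.extend(DEPENDENCY_GRAPH.get(dep, []))
    return seen

# One explicit dependency expands to four libraries that must be managed.
print(sorted(resolve("my-app")))
```

Here a single explicit include fans out to four packages, any one of which can carry a flaw into the application.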

Vulnerable And Outdated

Veracode’s findings echo the recent Open Source Security and Risk Analysis report from Synopsys which found that 99 percent of codebases contain at least some open source code and 75 percent used at least one vulnerable open source component. Synopsys found that 49 percent of codebases it analyzed had at least one component with a high-risk vulnerability. Synopsys audited 1,253 applications and assessed open source codebases from 20,000 sources for the OSSRA.

About 90 percent of applications used at least one open source component that was out-of-date by four or more years, or was abandoned, with at least two years of no development activity, Synopsys found. Out-of-date or abandoned components are even more likely to have unfixed security vulnerabilities.

The average application included 445 open source components, Synopsys found—which was far higher than the average number of components reported by Veracode. In an average application, 70 percent of the codebase was open source. Both reports agreed that applications are using a lot of open source components.

"The 2020 OSSRA report highlights how organizations continue to struggle to effectively track and manage their open source risk,” said Tim Mackey, principal security strategist of the Synopsys Cybersecurity Research Center. “Maintaining an accurate inventory of third-party software components, including open source dependencies, and keeping it up to date is a key starting point to address application risk on multiple levels.”

The OSSRA, like Veracode’s State of Software Security report, looked at both open source and commercial applications that incorporate open source components. Synopsys also found that some open source components were widely used. For example, 55 percent of applications analyzed by Synopsys used jQuery, 40 percent used Bootstrap, 31 percent used Font Awesome, and 30 percent used Lodash.

Veracode ran a similar analysis, and found that every language had a set of libraries that were used in over 75 percent of applications. For example, the x/net networking package for Go appeared in 52 percent of the applications, and the most popular JavaScript library, inherits, was used in 92.3 percent of the applications. The four-line isarray package is the seventh most popular JavaScript library and found in 86.2 percent of the applications. This isn't necessarily a bad thing, if the libraries themselves are safe to use. However, the second and third most popular JavaScript libraries, debug and ms, used in 89 percent of the applications, both have known denial-of-service vulnerabilities.

Packages "implementing trivial functionality [ms converts time into milliseconds] can have flaws, and may exist deep in a dependency tree," Veracode said.

Even if the developer knows to use the most recent version of the libraries, if any of the other libraries included in the application pull code from older versions of these two components, the denial of service issues becomes part of the application.

Language Differences

When it comes to vulnerabilities in components, all programming languages and frameworks are not created equal. More than 80 percent of applications written in JavaScript, PHP, and Ruby import more than two-thirds of their libraries through other libraries, while less than 10 percent of .NET, Swift, and Go applications rely on transitive dependencies, Veracode found. Whether the languages and frameworks are more likely to use transitive or direct dependencies has nothing to do with which language or framework is more secure, or “better.” Developers just need to keep in mind the characteristics of the framework or language they are using.

"JavaScript, Ruby, PHP, and Java have most of their attack surface from transitive inclusions that developers need to ensure they are managing," the report said.

Swift has the highest density of flaws but an overall low percentage of flawed libraries, Veracode found. Go, on the other hand, has a high percentage of libraries with flaws but a low density, meaning an individual library has a low number of flaws overall.

The typical PHP application doesn’t import a lot of libraries, an average of 34 components, but a greater share of PHP libraries have at least one security vulnerability. PHP has more flawed libraries and a high density of flaws (though not as high as Swift), meaning an individual library also tends to have a lot of flaws. Including any given PHP library carries a greater than 50 percent chance of introducing a security flaw into the code, Veracode found.

In contrast, a relatively low number of JavaScript components have some kind of vulnerability, possibly because so many JavaScript components are very short, with a few lines of code dedicated to doing one specific thing. The flip side is that a typical JavaScript application imports an average of 377 libraries, which increases the probability of including at least one vulnerable component somewhere along the way.

Types of Vulnerabilities

Veracode mapped the vulnerabilities in the open source libraries against the Top 10 list of software vulnerabilities maintained by the Open Web Application Security Project to determine if some types of vulnerabilities were more prevalent in open source components than others. Cross-Site Scripting (XSS), insecure deserialization, and broken access control vulnerabilities made up "a substantial portion" of the flaws in open source components. Veracode found that 29.1 percent of the libraries had at least one cross-site scripting flaw, 23.5 percent had insecure deserialization issues, and 20.3 percent had problems with broken access control.

The language differences were evident here, as well, as more than 40 percent of PHP libraries had cross-site scripting issues. PHP also had more broken access control and authentication problems than in any other language.

Insecure deserialization issues were common only in PHP and Java libraries, and broken access control flaws were bigger problems for .NET and Go libraries than XSS. While XSS issues were prevalent across all languages, they posed less of a problem than insecure deserialization and broken access control because most XSS issues were not exploitable.
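As an illustration of the most prevalent flaw class, XSS arises when user-supplied input is rendered into a page without escaping. A minimal sketch using Python's standard library (the input string and template are invented):

```python
# Sketch: why unescaped user input leads to cross-site scripting, and the
# standard fix of HTML-escaping input before rendering it into a page.
import html

user_input = '<script>alert("xss")</script>'

# Interpolating raw input: the script tag would execute in the browser.
unsafe = f"<p>Hello, {user_input}</p>"

# Escaping first: the payload is rendered as inert text instead.
safe = f"<p>Hello, {html.escape(user_input)}</p>"

print(safe)
```

Template engines in most frameworks apply this kind of escaping automatically, which is one reason the report found most XSS issues in libraries were not actually exploitable.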

Developers also can’t just look at CVEs (Common Vulnerabilities and Exposures) to keep track of vulnerable libraries because not all vulnerabilities get assigned CVEs. More than 60 percent of vulnerable JavaScript libraries have security flaws without corresponding CVEs.

Synopsys reported similar findings, noting that of the ten most common vulnerabilities it identified in open source components, four did not have a CVE assigned. The top three most common vulnerabilities were found in 37 percent of the applications analyzed. The fourth most common was the first with a CVE assigned (CVE-2019-11358), a vulnerability in jQuery.

Update Those Libraries

Most, or 74 percent, of the applications with vulnerable libraries can be fixed by just updating those libraries, Veracode said. In fact, 71 percent of the applications wouldn’t even need a major update, since a minor version update would fix the issues.

Most of the libraries with vulnerabilities from the Top 10 list also have updates available, and 91 percent of flaws with public proof-of-concept exploits already have a fix available. Considering that attackers continue to target older vulnerabilities, updating these components could significantly reduce the attack surface.
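Whether a fix requires a disruptive major upgrade or only a minor/patch bump can often be read straight off the version numbers. A minimal sketch, assuming simple "MAJOR.MINOR.PATCH" semantic versions with no pre-release tags:

```python
# Sketch: classifying an available library update under semantic versioning.
def update_kind(current, latest):
    cur, new = current.split("."), latest.split(".")
    if new[0] != cur[0]:
        return "major"  # may contain breaking API changes
    if new[1] != cur[1]:
        return "minor"  # backwards-compatible features and fixes
    return "patch"      # bug and security fixes only

print(update_kind("1.4.2", "1.5.0"))  # minor
print(update_kind("1.4.2", "2.0.0"))  # major
```

Under this convention, minor and patch updates are designed to be backwards-compatible, which is why most of the vulnerable applications in the report could be fixed with little integration work.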

"Open source software gives companies tremendous advantages, but there's no free lunch here, and all code must be managed to avoid your own contributions — whether open or closed source in nature — from exposing your users to vulnerabilities," Veracode stated in the report.

]]>
<![CDATA[Decipher Podcast: Ping Look]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/decipher-podcast-ping-look https://duo.com/decipher/decipher-podcast-ping-look Wed, 20 May 2020 00:00:00 -0400

]]>
<![CDATA[Google Makes DNS Over HTTPS Default in Chrome]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/google-makes-dns-over-https-default-in-chrome https://duo.com/decipher/google-makes-dns-over-https-default-in-chrome Wed, 20 May 2020 00:00:00 -0400

With the release of Chrome 83 this week, Google has introduced a new Secure DNS feature that implements DNS over HTTPS, ensuring that users’ DNS queries are encrypted from the browser to the DNS provider.

Turning on DNS over HTTPS (DoH) in the browser gives users a key level of protection against network-level surveillance of their online activities. Under normal circumstances, the queries that an individual sends to her DNS provider are sent in plaintext and are therefore readable by the provider itself and any party that might have privileged access to the network traffic. For most individuals, their DNS provider is their ISP, so using unencrypted DNS links allows the ISP to get a very clear picture of any user’s activities. DoH sends those queries over an HTTPS connection instead, protecting them from eavesdropping.
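Mechanically, a DoH client packages an ordinary DNS query in DNS wire format and, in the GET form defined by RFC 8484, base64url-encodes it into an HTTPS URL parameter. A minimal sketch that only constructs the request (the resolver URL is a placeholder, and no network traffic is sent):

```python
# Sketch: building an RFC 8484-style DoH GET URL for an A-record lookup.
import base64
import struct

def doh_url(hostname, resolver="https://dns.example/dns-query"):
    # 12-byte DNS header: ID=0 (RFC 8484 recommends 0 for cache friendliness),
    # flags=0x0100 (recursion desired), one question, no other records.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels terminated by a zero byte.
    qname = b"".join(bytes([len(p)]) + p.encode() for p in hostname.split(".")) + b"\x00"
    question = qname + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN
    # base64url without padding, as the GET form of the spec requires.
    query = base64.urlsafe_b64encode(header + question).rstrip(b"=").decode()
    return f"{resolver}?dns={query}"

print(doh_url("example.com"))
```

Because the whole exchange rides inside an ordinary TLS connection to the resolver, an on-path observer sees only encrypted HTTPS traffic rather than the plaintext hostname being looked up.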

Google has been working on this feature in Chrome for quite a while, as has Mozilla, which began rolling out DoH in Firefox in February.

“The introduction of DNS-over-HTTPS gives the whole ecosystem a rare opportunity to start from a clean and dependable slate, making it easier to pursue further enhancements relying on DNS as a delivery mechanism. Thus far, the unencrypted nature of DNS has meant that features that extend DNS could randomly fail due to causes such as network equipment that may drop or modify newly introduced DNS fields,” Kenji Baheux, Chrome product manager, said in a post on the new feature.

“As DNS-over-HTTPS grows, it will put this concern aside because it benefits from the aforementioned HTTPS properties and sets a new reliable baseline to build upon.”

The introduction of DoH will be a big boon for most individual users, but for enterprises the situation will likely be quite different. Many enterprises that use Chrome as the default browser do so in a managed environment, meaning that administrators have control over what versions employees use and what extensions they can install, for example. But many enterprises also use security products that perform outbound traffic inspection to look for connections to malicious domains or prohibited content, something that won’t be possible with DoH. As a result, Google has added a feature that enables Chrome to disable DoH in enterprise environments.

“If you are an IT administrator, Chrome will disable Secure DNS if it detects a managed environment via the presence of one or more enterprise policies. We’ve also added new DNS-over-HTTPS enterprise policies to allow for a managed configuration of Secure DNS and encourage IT administrators to look into deploying DNS-over-HTTPS for their users,” Baheux said.

Not all DNS providers support DoH right now, but Chrome contains a list of providers that do and will automatically try to keep a user’s provider the same if the provider offers DoH.

“By keeping the user’s chosen provider, we can preserve any extra services offered by the DNS service provider, such as family-safe filtering, and therefore avoid breaking user expectations. Furthermore, if there’s any hiccup with the DNS-over-HTTPS connection, Chrome will fall back to the regular DNS service of the user’s current provider by default, in order to avoid any disruption, while periodically retrying to secure the DNS communication,” Baheux said.

]]>
<![CDATA[Attacks Based on Credential Theft On The Rise, DBIR Says]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/attacks-based-on-credential-theft-on-the-rise-dbir-says https://duo.com/decipher/attacks-based-on-credential-theft-on-the-rise-dbir-says Tue, 19 May 2020 00:00:00 -0400

There’s something for everyone in Verizon’s Data Breach Investigations Report. Hacking remains the primary attack method, and the use of malware is declining. While external attackers perpetrated the majority of the data breaches in the report, the industry breakdown shows that insider threats accounted for half of the incidents in healthcare. Money continues to be the primary motivator for these attacks, and data breaches resulting from misconfigured systems are on the rise.

Attackers are more likely to use stolen or lost credentials than malware in their attacks, Verizon said in its Data Breach Investigations Report. Malware infections dropped from almost half of all security breaches in 2016 to 22 percent, which is the lowest it has been in all the years the DBIR team has been tracking this statistic. Hacking—which includes brute-forcing passwords—remains the primary attack method at 45 percent, and social attacks—including social engineering—were found in 22 percent of the reported incidents.

Now in its 13th year, the DBIR analyzed incidents that Verizon’s incident response team investigated, as well as reports received from 81 contributing organizations in 81 countries. Out of the 157,525 reports analyzed, 32,002 met the team’s definition of an incident, and 3,950 were confirmed breaches. The report differentiates between an incident, which is a security event that “compromises the integrity, confidentiality or availability of an information asset,” and a breach, which is an incident where data is confirmed to have been exposed to an unauthorized party. An incident where data could potentially have been exposed, but exposure hasn’t been confirmed, would not be considered a breach under the DBIR’s definitions.

Of the breaches where hacking was the primary method, 80 percent involved brute forcing or the use of lost or stolen credentials, the report said. Attackers are sitting on a collection of billions of credentials amassed from data breaches and leaked password lists over the past few years.

“We think that other attack types such as hacking and social breaches benefit from the theft of credentials, which makes it no longer necessary to add malware in order to maintain persistence,” said researchers.

As far as the criminals are concerned, there is little to no risk in reusing the list of stolen credentials, and plenty to gain when one set of credentials actually works, said Bob Rudis, chief data scientist at Rapid7. This type of attack will continue to be used until organizations become more consistent about implementing multi-factor authentication across the board.

“Zombie credentials never die, they just get re-used in every gosh darn attack,” Rudis said.
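Beyond multi-factor authentication, one common defense is screening passwords against known breach corpora without ever transmitting the password itself. A minimal sketch of the k-anonymity scheme such services use (the corpus here is a made-up stand-in for real leak data, and the lookup is simulated locally):

```python
# Sketch: k-anonymity breached-password check. The client sends only the
# first five hex digits of the SHA-1 and compares returned suffixes locally.
import hashlib

BREACH_CORPUS = {"password123", "letmein", "qwerty"}  # hypothetical leak data

def sha1_range(password):
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    return digest[:5], digest[5:]  # prefix goes to the server, suffix stays local

def is_breached(password):
    prefix, suffix = sha1_range(password)
    # A real service returns every suffix sharing the prefix; simulate that here.
    suffixes = {sha1_range(p)[1] for p in BREACH_CORPUS if sha1_range(p)[0] == prefix}
    return suffix in suffixes

print(is_breached("password123"))  # True
print(is_breached("correct horse battery staple"))
```

The design choice is that the server only ever learns a five-character hash prefix shared by thousands of unrelated passwords, so checking a credential against the breach corpus doesn't itself leak the credential.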

Phishing was the top method among social attacks. The credentials may have been stolen through phishing or exposed at an earlier time in a different attack. The report also found that click rates on phishing emails are a mere 3.4 percent, proving, yet again, that criminals don’t need a lot of victims to be successful with their campaigns. They just need one person to let them into the network, and then they can use other techniques to move around and get what they need.

Lost or stolen credentials are much easier to obtain, but there is another reason they are becoming more commonly used than malware: it is easier to bypass antivirus programs and other security software when the entry point looks like a benign user login. In cases where malware is used, the most common forms are password dumpers, data capture tools, and ransomware, and Office documents and Windows applications are still the preferred delivery vehicles.

The report suggested that overall enterprise patching strategy has improved, noting that very few of the incidents in the report were due to a lack of patching or a missed patch. While lots of vulnerabilities are discovered, "a relatively small percentage of them are used in breaches," the researchers wrote. And while exploiting vulnerabilities remains the second most common method among hacking attacks (as opposed to malware or social attacks), vulnerability exploits have not been a major cause of security incidents within the last five years, the report found.

"In our security information and event management (SIEM) dataset, most organizations had 2.5 percent or less of alerts involving exploitation of a vulnerability," the report found.

More companies appear to be patching their IT assets in a timely manner, which is helping to prevent attackers from exploiting known software vulnerabilities. The researchers determined this by comparing organizations with the EternalBlue vulnerability present on their systems against those without. They found that systems vulnerable to EternalBlue were also vulnerable to everything else from the last decade or two, while organizations that patch seem to be able to "maintain a good, prioritized, patch management regime," the report said.

"Once again, no, each new vulnerability is not making you that much more vulnerable," the report said.

Industry Differences

External attackers were responsible for the majority of the data thefts the research team analyzed. In general, external attacks made up 70 percent of the data thefts, compared to 30 percent for insider threats. Over the past few years, Verizon’s researchers have noted the increase in insider threats, and in this year’s report, the team said the uptick is more likely due to increased detection and reporting, rather than any increase in the number of employees acting out of actual malice.

“It is a widely held opinion that insiders are the biggest threat to an organization’s security, but one that we believe to be erroneous.”

The industry snapshot is one of the most valuable parts of the DBIR, as the researchers map the overall trends against each industry sector. In the case of insider threats, the healthcare sector has a bigger problem than others, as the split is nearly even between external and insider attacks. Insider incidents in healthcare include simple human error and employee misuse, such as medical workers accessing patient records out of curiosity or for entertainment.

The attack methods also differ by industry. The most common attack method against hospitality and food services used to be related to point-of-sale systems. Now the sector is more likely to be targeted by malware and web application attacks. Web application attacks were also a big problem for financial and insurance industries. Credential theft was the most pervasive in retail.

Ransomware is the top threat for the education space, as 80 percent of malware infections in the sector involved some kind of ransomware. Phishing attacks were used in 28 percent of breaches in the educational services industry, and 23 percent of the breaches were the result of lost or stolen credentials.

In previous years, misusing privileges was a big problem for the healthcare industry. Last year’s report saw 23 percent of attacks involving privilege misuse, compared to this year’s mere 8.7 percent.

“This year, we saw a substantial increase in the number of breaches and incidents reported in our overall dataset, and that rise is reflected within the Healthcare vertical,” said researchers. “In fact, the number of confirmed data breaches in this sector came in at 521 versus the 304 in last year’s report.”

Exposed Servers

Over the past year, there have been numerous reports of researchers uncovering misconfigured cloud servers or other systems, exposing data to anyone who happened to look. These incidents are now showing up in the DBIR, and misconfigured or poorly configured systems make up more of the incidents than in previous years. Misconfiguration (17 percent) and data misdelivery (8 percent) combined exceeded the proportion of malware attacks in this year's data set. Most of these exposed systems were found by security researchers and unrelated third parties.

The report found that misconfigured systems are ubiquitous across industries. It isn't as if some sectors are more prone to making these mistakes than others.

“Errors definitely win the award for best supporting action this year,” the report said, noting they are “equally as common” as social breaches and more common than malware. Hacking is higher, but that is related to credential theft.

“It is no real surprise that naked S3 buckets and wide-open databases received a significant mention in the DBIR,” Rudis said, noting that the team finds “millions” of SMB servers, databases, and other inappropriately exposed services during its Project Sonar scans. The number of misconfigured systems may not necessarily be on the rise; people may simply be getting better at finding them. Either way, organizations need to change existing processes to reduce configuration errors.

“Organizations must implement stronger controls and have finely honed practices and playbooks for deploying services safely,” Rudis said.
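One such control can be as simple as auditing storage ACLs before deployment. A minimal sketch that flags a publicly readable S3-style bucket; the ACL structure mirrors what an API like boto3's `get_bucket_acl` returns, and the example data is invented:

```python
# Sketch: flagging an S3-style bucket ACL that grants read access to everyone.
# The AllUsers group URI means "anyone on the internet, no credentials needed."
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def publicly_readable(acl):
    """Return True if any grant gives the AllUsers group read access."""
    return any(
        grant["Grantee"].get("URI") == ALL_USERS
        and grant["Permission"] in ("READ", "FULL_CONTROL")
        for grant in acl["Grants"]
    )

example_acl = {"Grants": [
    {"Grantee": {"Type": "Group", "URI": ALL_USERS}, "Permission": "READ"},
]}
print(publicly_readable(example_acl))  # True: the bucket contents are exposed
```

Running a check like this in a deployment pipeline is one concrete form of the "playbooks for deploying services safely" that Rudis describes.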

Faster Detections

An interesting silver lining in the DBIR: the number of companies discovering incidents within days or hours continues to go up. Other industry reports have also highlighted this trend toward faster detection, although perhaps not to the extent shown in this report. One reason for the DBIR's more positive outlook may be that it includes reports from managed service providers. Still, a quarter of the organizations in the report took months to detect a breach.

“All in all, we do like to think that there has been an improvement in detection and response over the past year,” said researchers.

The fact that breaches are detected sooner than before “matters little to ransomware attackers,” Rudis said. The criminals behind ransomware want the victims to know right away that the system has been compromised because they want to get paid.

“We need to keep improving this statistic, but also need to work even harder on preventing phishing attacks and shoring up internal configurations,” Rudis said.

Denial-of-service attacks spiked over the past year, while cyber-espionage campaigns are trending downward. DoS attacks are increasingly showing up in the cybercriminal toolbox, as they made up 40 percent of the security incidents reported. It's not just the number of attacks that is increasing, but also their severity: bits per second indicates the size of an attack, and packets per second indicates its throughput. This is consistent with the various DoS botnets that have popped up recently, such as Kaiji and the Mirai variants.

The DBIR confirms that most successful breaches were opportunistic attacks, said Tim Mackey, principal security strategist at Synopsys CyRC. “This means that we could see a material reduction in breaches if basic principles such as securing S3 buckets, applying password security to databases, having a patch management strategy and applying reasonable malware protections were in place,” Mackey said.

]]>
<![CDATA[Supercomputer Sites Still Struggling After Attacks]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/supercomputer-sites-still-struggling-after-attacks https://duo.com/decipher/supercomputer-sites-still-struggling-after-attacks Mon, 18 May 2020 00:00:00 -0400

The string of intrusions at supercomputing sites in Europe and the United States that began last week involves attackers using compromised machines in networks in China and Poland to connect to target supercomputers, where they then use stolen credentials and SSH keys to log in and move from one instance to another.

The attacks have hit several supercomputing instances in Germany, one in Scotland, and at least one in the U.S., and most of the affected sites have been taken offline as a result. The incidents share a number of characteristics, including the presence of a pair of files on compromised machines, the use of SSH credentials to move between machines, and the involvement of IP addresses in China. But there appear to be two separate clusters of attacks, one that is focused on installing cryptomining software and another that is strictly installing a backdoor.

The cryptomining attacks are designed to harness the enormous compute power of the compromised supercomputers and add them to a Monero mining pool. The attackers are assigning specific roles to compromised machines and using some as proxies and others as simple miners, according to an analysis of the compromises by the European Grid Infrastructure’s incident response team.

“A malicious group is currently targeting academic data centers for CPU mining purposes. The attacker is hopping from one victim to another using compromised SSH credentials. Connections to the SOCKS proxy hosts are typically done via TOR or compromised hosts. The attackers uses different techniques to hide the malicious activity, including a malicious Linux Kernel Module,” the analysis says.

“It is not fully understood how SSH credentials are stolen, although some (but not all) victims have discovered compromised SSH binaries. At least in one case, the malicious XMR activity is configured (CRON) to operate only during night times to avoid detection. There are victims in China, Europe and North America.”
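The night-only scheduling the analysis describes requires nothing exotic; a pair of ordinary cron entries would do it. The fragment below is purely illustrative (the file paths and hours are invented, not taken from the actual intrusions):

```cron
# Illustrative only: paths and times are hypothetical, not from the real attacks.
# Start the miner at 11 PM and kill it at 5 AM, sidestepping daytime monitoring.
0 23 * * * /var/tmp/.cache/xmrig --config /var/tmp/.cache/config.json
0 5  * * * pkill -f xmrig
```

Usage spikes confined to off-hours are exactly why several victims only noticed the activity through accounting anomalies rather than real-time alerts.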

One of the supercomputers affected by the recent attacks, ARCHER at the University of Edinburgh, has been offline for a week now as the team has been trying to diagnose the issue and repair the damage.

“As previously mentioned, all of the existing ARCHER passwords and SSH keys will be rewritten and will no longer be valid on ARCHER,” the team said in a status message on May 15.

Researchers have been tracing the attacks, looking at patterns, and reverse engineering the malware installed by the attackers. In the attacks that are not focused on cryptomining, the intrusions typically involve the installation of two files: a loader with root privileges and a log cleaner. Both of the files typically are installed in the /etc/fonts directory on compromised machines and samples of them have been uploaded to the VirusTotal service. The EGI incident response team identified a number of individual IP addresses at Shanghai Jiaotong University in China and one in Poland likely associated with compromised computers that the attackers have been using as hosts to login to the target supercomputers over SSH.

]]>
<![CDATA[Attacks Knock Supercomputing Sites Offline]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/attacks-knock-supercomputing-sites-offline https://duo.com/decipher/attacks-knock-supercomputing-sites-offline Fri, 15 May 2020 00:00:00 -0400

On May 11, ARCHER, a supercomputing service hosted by the University of Edinburgh and used by science and medical researchers across the UK and around the world, was hit by an attack that has kept the service offline for the better part of five days. The incident is one of several intrusions that have affected high-performance computing centers across Europe in recent days, some of which share some key characteristics.

ARCHER is a service run on a Cray XC30 supercomputer and it supports resource-intensive research projects in a number of different fields, including biomedical, physics, bioscience, and climate. The service allows external connections from users in remote locations and supports research from both academic and industrial researchers.

On Monday afternoon, the ARCHER team posted a status message saying that the service had been taken offline because of a “security exploitation”. There were no more details about the nature of the incident, and the service remained unavailable for the next two days as the team continued to investigate what had happened. By Wednesday, the team was advising users to change their passwords and the SSH keys associated with their accounts, an indication that perhaps the intrusion was the result of compromised credentials.

More ominously, the ARCHER team indicated that their incident was likely part of a larger rash of intrusions at high performance computing labs.

“We now believe this to be a major issue across the academic community as several computers have been compromised in the UK and elsewhere in Europe. We have been working with the National Cyber Security Centre (NCSC) and Cray/HPE in order to better understand the position and plan effective remedies,” the ARCHER status message from Wednesday said.

Among the other high performance computing sites that have been affected by similar attacks are several in Germany and one in Switzerland, according to a report by Der Spiegel, the German magazine. Most of the affected sites have status messages telling users that the service is unavailable for the time being because of a security incident. The Leibniz Supercomputing Center, one of the top 10 largest supercomputing sites in the world, is among the installations affected.

“We can confirm a security incident that affects our high-performance computers. For safety's sake, we have therefore isolated the affected machines from the outside world. The users and the responsible authorities have been informed,” the status message for the Leibniz Supercomputing Center says.

Other sites that are unavailable at the moment include the Hawk service at the Stuttgart High Performance Computing Center, Taurus at the Technical University of Dresden, and three separate services at the Jülich Supercomputing Center.

"The ARCHER incident is part of a much broader issue involving many other sites in the UK and internationally."

The attacks on ARCHER and the other high performance computing labs around Europe come at a time when both academic and industrial research teams are working frantically to analyze and develop vaccines for COVID-19. That is resource intensive work that requires the kind of massive compute power possessed by supercomputers. Research on treatments and vaccines is ongoing in many countries and researchers are collaborating, but at the same time, attackers are targeting institutions and facilities involved in the effort. On Wednesday, the FBI and the Cybersecurity and Infrastructure Security Agency (CISA) issued a warning that adversaries affiliated with the Chinese government had been running operations against some organizations involved in COVID-19 research.

“The FBI is investigating the targeting and compromise of U.S. organizations conducting COVID-19-related research by PRC- affiliated cyber actors and non-traditional collectors. These actors have been observed attempting to identify and illicitly obtain valuable intellectual property (IP) and public health data related to vaccines, treatments, and testing from networks and personnel affiliated with COVID-19-related research,” the advisory says.

“The potential theft of this information jeopardizes the delivery of secure, effective, and efficient treatment options.”

As of Friday, the ARCHER service was still offline, and the team at the University of Edinburgh that’s responsible for its operation said that the investigation was still going on, with the assistance of the NCSC.

“As you may be aware, the ARCHER incident is part of a much broader issue involving many other sites in the UK and internationally. We are continuing to work with the National Cyber Security Centre (NCSC) and Cray/HPE and further diagnostic scans are taking place on the system,” the latest status update from Thursday says.

“We are hoping to return ARCHER back to service early next week but this will depend on the results of the diagnostic scans taking place and further discussions with NCSC.”

]]>
<![CDATA[Microsoft's RDP Patch Isn't a Complete Fix]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/microsofts-rdp-patch-isnt-a-complete-fix https://duo.com/decipher/microsofts-rdp-patch-isnt-a-complete-fix Fri, 15 May 2020 00:00:00 -0400

Companies like Microsoft have made significant investments in their ability to respond to vulnerability reports, fix the issues, and regularly release security updates. Despite their best efforts, patches are sometimes incomplete, and they have to go back and fix the same flaw again.

That appears to be the case for a security vulnerability in Remote Desktop Protocol (CVE-2020-0655) that Microsoft fixed back in February. The update fixed the problem in the built-in Windows RDP client, but not the underlying issue in the application programming interface (API), said Check Point, the company that discovered the original vulnerability (tracked as CVE-2019-0887). While the update addresses the flaw in the Windows client, third-party RDP clients that rely on the API function “PathCchCanonicalize” remain vulnerable.

Attackers can still access sensitive information, modify critical files, steal password files, expose application source code, and carry out other malicious activities, Check Point said.

The original vulnerability, which Check Point reported last summer, could be exploited to trigger a “reverse RDP attack,” in which an attacker with control over the RDP server could manipulate the RDP client. Typically, RDP is used so that someone can remotely access a Windows machine and perform actions on it. In this case, the process was reversed: if an attacker could trick a victim into connecting to a remote server over RDP, the attacker could access, read, and manipulate files they normally wouldn’t be able to reach.

If an IT staff member tried to connect to a remote corporate computer that was infected by malware, the malware would be able to follow the RDP connection back to attack the IT staff member's computer, Check Point said.

Microsoft fixed the flaw in the RDP client by adding a workaround in Windows, but left the “PathCchCanonicalize” function unchanged, Check Point said. The API function is used to sanitize file paths, to make sure that user-provided inputs are properly formatted and valid. The researchers found that the function could be bypassed if the attacker used a forward slash in the file path rather than a backslash. This meant attackers could carry out path traversal attacks, saving a file to any location on the victim machine because the program accepted the file without first validating it.

In essence, it meant that an attacker could use the shared RDP clipboard to send files to an arbitrary location on the victim machine and remotely execute those files.
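The separator mismatch is easy to reproduce in miniature. The Python sketch below uses illustrative names (it is not Check Point's or Microsoft's code) to show how a check that only accounts for backslash traversal is bypassed by forward slashes, while canonicalizing the path first catches both:

```python
import ntpath  # Windows path semantics, usable on any platform

def is_safe_naive(filename):
    """Flawed check: only looks for backslash-based traversal."""
    return "..\\" not in filename

def is_safe_normalized(base, filename):
    """Canonicalize first, then verify the result stays under base."""
    full = ntpath.normpath(ntpath.join(base, filename))
    return full.startswith(base + "\\")

# A clipboard 'file name' using forward slashes sails past the naive check...
malicious = "../../Users/victim/Startup/evil.exe"
print(is_safe_naive(malicious))                             # True (bypassed)
# ...but normalization resolves it outside the intended directory.
print(is_safe_normalized("C:\\RDP\\clipboard", malicious))  # False (blocked)
```

The bypass works because `..\` and `../` are equivalent to the Windows file system but not to a check that inspects only one separator, which is why canonicalize-then-compare is the safer pattern.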

"In CVE-2020-0655, Microsoft addressed the '\' issue independently in the RDP handling code, without fixing the PathCchCanonicalize function," Check Point said.

Check Point discovered this while testing the RDP client for macOS. Other third-party RDP clients that rely on Microsoft’s API function are vulnerable because an attacker can bypass the code that sanitizes and validates file paths.

“The simple replacement of \ to / in our malicious RDP server was enough to bypass Microsoft’s patch!” Check Point said.

The update itself is effective in addressing the vulnerability as it relates to Microsoft's built-in RDP client, but IT staff should be aware that other RDP clients may be affected. Developers of those clients should fix their applications manually, since the file paths are not currently being sanitized properly.

"We want developers to be aware of this threat, so that they could go over their programs and manually apply a patch against it," Check Point said.

The February patch is actually Microsoft’s second attempt at addressing the vulnerability. The original security update was released last July, and Microsoft followed up with another update in February after researchers found that they were able to bypass the patch. Check Point said it had notified Microsoft about the latest issues with the patch, but had not yet received a response.

“Not only can we bypass Microsoft’s patch, we can bypass any path canonicalization check performed according to Microsoft’s best practice,” Check Point said.

]]>
<![CDATA[Stuxnet's Legacy Lives on in New Windows Bug]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/stuxnets-legacy-lives-on-in-new-windows-bug https://duo.com/decipher/stuxnets-legacy-lives-on-in-new-windows-bug Wed, 13 May 2020 00:00:00 -0400

Every month Microsoft releases patches for a wide range of vulnerabilities, some that are quite serious and some that are less so. But few Patch Tuesdays bring a fix for a vulnerability that has the history, lore, and ease of exploitation of the flaw in the Windows Print Spooler that was disclosed yesterday.

The vulnerability (CVE-2020-1048) is not a super complex remote code execution bug buried deep within the guts of Windows, but is instead a humble elevation of privilege flaw sitting in a spot that has not seen too much attention from researchers over the years. At least not publicly. The bug affects many recent versions of Windows, including Windows Server 2008, 2012, 2016, and 2019, as well as Windows 7, 8.1, and 10.

“An elevation of privilege vulnerability exists when the Windows Print Spooler service improperly allows arbitrary writing to the file system. An attacker who successfully exploited this vulnerability could run arbitrary code with elevated system privileges. An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights,” the Microsoft advisory says.

The Windows print spooler is a service in the operating system that manages the printing process. The service has been in Windows for quite a long time and has not evolved much over the years. It handles the back-end functions of finding and loading the print driver, creating print jobs, and then ultimately printing them. It’s not typically the type of service that would draw much attention from researchers or attackers, but at least one team spent considerable time digging into it about a decade ago: the Stuxnet team.

The Stuxnet worm that hit several nuclear facilities in Iran in 2010 and later spread to Windows PCs in many networks around the world used an exploit for a similar vulnerability in the print spooler service. That flaw was a zero day at the time that Stuxnet was discovered and was one of at least four previously unknown vulnerabilities that the worm used during its infection routine. Stuxnet was an unprecedented discovery, containing exploits for SCADA and industrial control systems as well as Windows, and even 10 years after its emergence is considered one of the more sophisticated pieces of malware ever developed.

“There’s definitely still some dragons hiding.”

The description of the print spooler vulnerability that Stuxnet exploited (CVE-2010-2729) is eerily similar to the one Microsoft patched this week, with the notable exception that the bug from 2010 could lead to remote code execution on Windows XP machines.

“A remote code execution vulnerability exists in the Windows Print Spooler service that could allow a remote, unauthenticated attacker to execute arbitrary code on an affected Windows XP system. An attacker who successfully exploited this vulnerability could take complete control of an affected system. An attacker could then install programs; view, change, or delete data; or create new accounts,” the advisory for the older vulnerability says.

While the Stuxnet print spooler flaw was discovered only after the worm did its damage, the newer one was unearthed by researchers at SafeBreach, who reported it to Microsoft. The new bug has drawn the attention of other researchers who have found that it is not only related to the Stuxnet bug, but quite easy to exploit. One line of PowerShell is all it takes to exploit the vulnerability and install a persistent backdoor on a vulnerable system, according to a detailed analysis of the flaw done by Yarden Shafir and Alex Ionescu of Winsider, a Windows consulting and training firm.

“Ironically, the Print Spooler continues to be one of the oldest Windows components that still hasn’t gotten much scrutiny, even though it’s largely unchanged since Windows NT 4, and was even famously abused by Stuxnet,” Shafir and Ionescu said.

“This bug is probably one of our favorites in Windows history, or at least one of our Top 5, due to its simplicity and age — completely broken in original versions of Windows, hardened after Stuxnet… yet still broken.”

The pair also said that they had found and disclosed some other bugs in the same area that have not yet been patched, “so there’s definitely still some dragons hiding.”

]]>
<![CDATA[Lawmakers Ask for Cybersecurity Funding for States]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/lawmakers-ask-for-cybersecurity-funding-for-states https://duo.com/decipher/lawmakers-ask-for-cybersecurity-funding-for-states Tue, 12 May 2020 00:00:00 -0400

As Congress continues to work on the contents of the next stimulus package, a bipartisan group of lawmakers are trying to gather support for earmarking some funds to modernize state and local governments' IT infrastructure.

Rep. Michael McCaul (R-Texas), the ranking member of the House Foreign Affairs Committee, Reps. Jim Langevin (D-R.I.), Mike Gallagher (R-Wis.), and Cedric Richmond (D-La.) plan to send a “Dear Colleagues” letter to House lawmakers sometime this week, The Hill reported. The goal of the letter is to encourage more lawmakers to pressure Speaker Nancy Pelosi (D-Calif) and Minority Leader Kevin McCarthy (R-Calif) to include funds in the next funding package that state and local governments can use towards current and future IT modernization projects.

“Unfortunately, our digital infrastructure is (virtually) crumbling,” the lawmakers wrote in the not-yet-sent letter, according to The Hill. “Federal agencies often rely on IT systems that are decades old, and the problems are all the more acute at the state and local level.”

The lawmakers were concerned that state and local IT systems are not able to bear the increased load as people try to access “vital government services.” State unemployment sites have crashed in recent weeks under the weight of all the applications. Some states are trying to hire developers who know legacy programming languages such as COBOL to keep their systems running, because the systems are that old. State and local IT and cybersecurity teams are taking on increased responsibility, but are hampered in what they can do with legacy systems.

All of these challenges may result in residents not being able to access the resources they need.

This is not the first letter from House lawmakers to Pelosi and McCarthy. In mid-April, House Homeland Security Committee Chairman Bennie Thompson (D-Miss) sent a letter along with Reps. Cedric Richmond (La.), Dutch Ruppersberger (Md.), and Derek Kilmer (Wash.) to Pelosi and McCarthy requesting cybersecurity funding for states and local governments to use to make sure their networks stay up and running.

“The American public is counting on State and local jurisdictions to implement and deliver COVID-19 relief packages approved by Congress,” that earlier letter said. “Any disruption in the delivery of services would only compound the strain on State and local governments struggling to effectively serve their citizens in the midst of a global pandemic. We cannot let that happen.”

Shortly after, a coalition of technology groups—The Internet Association, BSA, CompTIA, Cyber Threat Alliance, Cybersecurity Coalition, the Global Cyber Alliance, the Alliance for Digital Innovation, and the Information Technology Industry Council—also pressed Pelosi and McCarthy to make cybersecurity funding a priority in future Congressional funding packages. The groups were particularly concerned about the number of ransomware attacks against state and local government entities over the past year, and the likelihood that attackers would target state- and locally-owned and -operated public hospitals.

“State and local entities, however, have long lacked the resources to adequately secure and maintain their digital infrastructure,” the group wrote. “The rise in malicious cyberattacks targeting state and local entities, combined with the chronic lack of workforce, patchwork legacy systems, under-resourced cybersecurity and IT services, and uneven federal assistance creates a greater risk of system failure that interrupts services on which state and local populations depend.”

The ransomware attack against Baltimore last year is expected to have cost the city $18.2 million. Atlanta spent $2.6 million within the first few months of the attack that crippled nearly all its systems. A city auditor’s report later concluded that one of the reasons the ransomware attack had been so devastating for Atlanta was the sheer number of legacy systems the city relied on. The report found nearly 100 servers running outdated versions of Windows, and many of the systems were severely behind on security updates. However, the problem of legacy systems isn't unique to Atlanta. Municipalities have long had to defer modernization plans because they didn't have the funds or the authority to embark on these kinds of IT projects.

“This was the reality before COVID-19,” the groups wrote. “Things have become considerably worse in the months since.”

State and local government operated health systems make up nearly 20 percent of the country's community hospitals, the letter from the tech coalition said. Medical facilities, research institutions, and other healthcare organizations have been targeted by ransomware and other cyberattacks over the past few weeks, "at a time when disrupted service is intolerable."

“As it stands, State and local entities are simply not resourced to effectively address these new challenges over the extended period that pandemic mitigation measures will likely need to remain in place,” the groups wrote.

It is not clear whether there is enough political will within Congress to include cybersecurity funding, despite the fact that there is some support for it. It is also unclear when the House of Representatives will begin working on the next stimulus package.

“As we consider additional legislative measures to address the urgent needs of our citizens, we encourage you to consider the digital infrastructure on which so many of our constituents rely to access vital government services,” the House members plan to write in the latest letter.

]]>
<![CDATA[US Exposes New North Korean Malware Tools]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/us-exposes-new-north-korean-malware-tools https://duo.com/decipher/us-exposes-new-north-korean-malware-tools Tue, 12 May 2020 00:00:00 -0400

As part of what has become an ongoing, calculated strategic plan, the U.S. government has published detailed reports exposing three new malware tools it says are being used by state-sponsored attackers associated with the North Korean government.

On Tuesday, the FBI, Department of Homeland Security, and Department of Defense released a series of joint malware analysis reports on individual tools the agencies refer to as Copperhedge, Taintedscribe, and Pebbledash. The three pieces of malware are part of an arsenal used by attack groups that the U.S. government refers to as Hidden Cobra, a catch-all name for North Korean actors associated with that country’s government. Although there is no shortage of state-sponsored attack groups active at any given time, many of which run offensive operations against targets in the United States, U.S. agencies have focused much of their public attention on singling out tools, tactics, and intrusions attributed to North Korean groups.

Since 2017, the Cybersecurity and Infrastructure Security Agency (CISA) arm of DHS has publicly cataloged dozens of individual pieces of malware it attributes to Hidden Cobra actors, including trojans, backdoors, and remote access tools. One of the tools the agency exposed Tuesday is a remote access tool (RAT) called Copperhedge that is targeted at Windows systems. CISA found numerous versions of the tool, which is part of the Manuscrypt family of malware. Researchers have attributed Manuscrypt malware to the North Korean APT group known as Lazarus. Manuscrypt malware variants have been used in attacks on diplomatic targets in the past.

“The Manuscrypt family of malware is used by advanced persistent threat (APT) cyber actors in the targeting of cryptocurrency exchanges and related entities. Manuscrypt is a full-featured Remote Access Tool (RAT) capable of running arbitrary commands, performing system reconnaissance, and exfiltrating data. Six distinct variants have been identified based on network and code features,” the CISA malware analysis report says.

“The variants are categorized based on common code and a common class structure. A symbol remains in some of the implants identifying a class name of ‘WinHTTP_Protocol’ and later ‘WebPacket’.”

The other two tools disclosed by CISA Tuesday, Pebbledash and Taintedscribe, are both implants that are used to maintain persistence on target machines and perform other tasks, such as network discovery.

“These samples uses FakeTLS for session authentication and for network encryption utilizing a Linear Feedback Shift Register (LFSR) algorithm. The main executable disguises itself as Microsoft’s Narrator. It downloads its command execution module from a command and control (C2) server and then has the capability to download, upload, delete, and execute files; enable Windows CLI access; create and terminate processes; and perform target system enumeration,” the description of Taintedscribe says.
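An LFSR is a shift register whose feedback bit is the XOR of a few fixed "tap" positions; clocked repeatedly, it yields a pseudorandom keystream that is XORed with the traffic, and because XOR is its own inverse the same routine both encrypts and decrypts. The register width, taps, and seed in this Python sketch are illustrative; the advisory does not publish Taintedscribe's actual parameters:

```python
def lfsr_keystream(state, nbytes, taps=(15, 13, 12, 10), width=16):
    """Fibonacci LFSR: the feedback bit is the XOR of the tapped bits.
    Width, taps, and seed are illustrative, not Taintedscribe's real values."""
    assert state != 0, "an all-zero register never changes state"
    out = bytearray()
    for _ in range(nbytes):
        byte = 0
        for _ in range(8):
            bit = state & 1                       # emit the low bit
            fb = 0
            for t in taps:
                fb ^= (state >> t) & 1            # XOR the tapped positions
            state = (state >> 1) | (fb << (width - 1))
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

def lfsr_crypt(data, seed):
    """XOR with the keystream; applying it twice restores the plaintext."""
    ks = lfsr_keystream(seed, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))
```

LFSRs are fast and tiny but cryptographically weak on their own, which fits their role here: obscuring traffic inside a TLS-looking session rather than providing strong encryption.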

Pebbledash has similar functionality, and as has been the case in the past, CISA has uploaded samples of each of the newly disclosed malware tools to the VirusTotal site for public sharing and analysis.

In April, the FBI published an advisory warning organizations to be wary of continued financially motivated attacks from North Korean actors.

“Many DPRK cyber actors are subordinate to UN- and U.S.-designated entities, such as the Reconnaissance General Bureau. DPRK state-sponsored cyber actors primarily consist of hackers, cryptologists, and software developers who conduct espionage, cyber-enabled theft targeting financial institutions and digital currency exchanges, and politically-motivated operations against foreign media companies. They develop and deploy a wide range of malware tools around the world to enable these activities and have grown increasingly sophisticated,” the advisory says.

]]>
<![CDATA[Thunderspy Attack Underscores Existing Thunderbolt Security Issues]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/thunderspy-attack-underscores-existing-thunderbolt-security-issues https://duo.com/decipher/thunderspy-attack-underscores-existing-thunderbolt-security-issues Mon, 11 May 2020 00:00:00 -0400

A researcher has developed a new attack that exploits some weaknesses in the security model of the Intel Thunderbolt specification to bypass the Thunderbolt security settings and even gain access to any of the data on the machine. While the method is new, the security issues with Thunderbolt are not, and for many people the attack does not significantly increase the risk that was already present.

The attack, known as Thunderspy, exploits vulnerabilities present in Thunderbolt 1, 2, and 3 and it works on any Windows or Linux computer with Thunderbolt ports sold before 2019. But there are some important caveats to the attack: it requires several minutes of physical access to the target computer, removing the backplate of the machine, and some specialized hardware in order to execute. If an attacker has that level of access to a computer, the machine is pretty much at his disposal, regardless of the details of the attack.

A typical attack scenario might involve a victim who leaves her laptop unattended in a hotel room or restaurant long enough for an attacker to take it, remove the back plate to access the correct port, attach the malicious peripheral, and run his code. The attack can work even if the machine is locked and in sleep mode, but the threat for most people is not much higher than with other hardware attacks that require physical access.

The new method, which was developed by Björn Ruytenberg, a master's student at Eindhoven University of Technology, is complex and relies on custom tools Ruytenberg developed to disable the Thunderbolt security settings and rewrite the target chip’s firmware.

Thunderbolt is a hardware connection that Intel developed in order to connect peripheral devices to computers over a faster interface. Ruytenberg disclosed the weaknesses to Intel several months ago and while the vendor acknowledged the issues, there are no fixes available at the moment, nor are there any simple ways to address the core problem in software.

“Despite our repeated efforts, the rationale to Intel's decision not to mitigate the Thunderspy vulnerabilities on in-market systems remains unknown. Given the nature of Thunderspy, however, we believe it would be reasonable to assume these cannot be fixed and require a silicon redesign. Indeed, for future systems implementing Thunderbolt technology, Intel has stated they will incorporate additional hardware protections,” Ruytenberg said in his explanation of the attack.

One of the key foundational problems that Ruytenberg’s attack shines a light on is that once a Thunderbolt-connected device is trusted by the computer, it then has deep access to the machine’s memory. There is some level of authentication between devices and the computer but if an attacker is able to make his own malicious device look like a trusted Thunderbolt device, as Ruytenberg has shown he can do, then he’s in business.

“There’s no real authentication going on. Intel addressed the authentication layer but that’s managed by the flash memory on the chip. They’ve chosen a method that’s cloneable. If I can get my hands on one device, I can extract anything I want to clone any other device,” said Joe FitzPatrick, a hardware security researcher and trainer.

“That could be your laptop or it could be your docking station or anything else.”

In recent years, Intel has added a couple of security features that are designed to protect against some of the weaknesses that Thunderspy exploits. The main addition is a feature called Security Levels that allows individuals to explicitly trust only specific Thunderbolt devices, but Ruytenberg is able to modify the firmware of the Thunderbolt-controlling chip in order to bypass that feature and allow other devices. Thunderbolt devices by design have direct memory access (DMA), giving them the ability to read and write system memory outside of the control of the operating system. This is a powerful function, and attackers have been able to exploit it in the past to steal data through Thunderbolt peripherals, so to defend against those attacks, Intel last year introduced a function called Kernel DMA Protection that restricts Thunderbolt devices to specific memory ranges.

“They need to change the silicon to only run signed code and that’s not a simple thing.”

That feature mitigates some of the vulnerabilities that Ruytenberg’s attack exploits, but not all of them, and it is only available on a small number of computers from 2019 forward. Other researchers have uncovered similar issues with Thunderbolt in the past, including the Thunderclap bugs disclosed in 2019.

“In an evil maid threat model and varying Security Levels, we demonstrate the ability to create arbitrary Thunderbolt device identities, clone user-authorized Thunderbolt devices, and finally obtain PCIe connectivity to perform DMA attacks. In addition, we show unauthenticated overriding of Security Level configurations, including the ability to disable Thunderbolt security entirely, and restoring Thunderbolt connectivity if the system is restricted to exclusively passing through USB and/or DisplayPort. We conclude with demonstrating the ability to permanently disable Thunderbolt security and block all future firmware updates,” the attack description says.

In order to fully address the core problems with the Thunderbolt security model, Intel would need to make changes to the chips themselves, an expensive and time-consuming process.

“They need to change the silicon to only run signed code and that’s not a simple thing. They’d have to develop it, manufacture new chips, test them, and then ship them. That could be years,” said FitzPatrick.

For owners of computers running affected chips, the most effective workarounds are to enable Kernel DMA Protection if it’s available and to connect only trusted Thunderbolt peripherals.

Intel said in a statement that machines with Kernel DMA Protection enabled are safe from this type of attack.

"This attack could not be successfully demonstrated on systems with Kernel DMA protection enabled. As always, we encourage everyone to follow good security practices, including preventing unauthorized physical access to computers," the company said.

]]>
<![CDATA[GitHub Expands Scanning to Find Security Flaws in Code]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/github-expands-scanning-to-find-security-flaws-in-code https://duo.com/decipher/github-expands-scanning-to-find-security-flaws-in-code Fri, 08 May 2020 00:00:00 -0400

The goal for secure software isn’t to never have vulnerabilities, but to find them as early as possible so that they can be fixed. GitHub has expanded its code scanning capabilities to make it easier for developers to identify flaws in projects that are managed on its platform.

The code scanning feature automatically checks the code for common errors that lead to security vulnerabilities every time the developer pushes code (git push) from the local repository to the GitHub repository. The scan results are displayed in pull requests, so that anyone using the code is warned of issues upfront. The code scanning service identifies which line of code contains a potential vulnerability, explains why it may be exploitable, and suggests how to fix it.

Code scanning should sound familiar, because GitHub has been working on various iterations of this feature over the past year. The feature is based on the CodeQL code analysis engine that came with GitHub’s Semmle acquisition last fall. CodeQL is an object-oriented query language that can identify variations of a vulnerability in the codebase. The code analysis engine was previously available to all public repositories and enterprise customers through GitHub Actions. As part of the Security Lab initiative, security researchers could use CodeQL for free to look for vulnerabilities in open source software.

Secret scanning is another beta feature GitHub announced that is a continuation of something the company has been working on for a while. Secret scanning (previously called token scanning) looks for potentially sensitive data in code, such as tokens, private encryption keys, and user credentials. That includes scanning public repositories for credentials for other service providers, such as Alibaba Cloud, Amazon Web Services, Azure, GitHub, Google Cloud, Slack, Mailgun, Twilio, and Stripe, that may have been accidentally committed to the project. Available for public repositories since 2018, the feature is now available for private repositories as well.
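Secret scanning of this kind generally works by matching known credential formats against each line of committed code. The sketch below is illustrative only: the pattern names and regular expressions here are assumptions chosen for the example, and GitHub's actual rules are maintained with its service-provider partners and are far more extensive.

```python
import re

# Illustrative patterns only; not GitHub's actual secret-scanning rules.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_for_secrets(text):
    """Return (line_number, pattern_name) for each suspected secret."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings
```

A scanner like this runs on every push, so a credential committed by mistake is flagged before it spreads to forks and clones.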

GitHub announced the Advanced Security offering at its Satellite virtual event this week. The feature will be available for free for open source projects in public repositories, and as a paid add-on for enterprise customers who want to enable it for private repositories. Advanced Security can be used by developers to help find vulnerabilities during development as well as by security researchers to find and report vulnerabilities in open source projects.

Since code scanning is still a beta feature, GitHub users need to sign up to enable it for their accounts. There is currently a multi-day wait list to have the feature enabled on the account.

On the surface, GitHub’s Advanced Security sounds like just another feature announcement, but it points to a bigger trend of integrating security tools into the developer’s workflow. There are plenty of stand-alone tools that scan code for vulnerabilities, and attempts have been made to integrate scanners with coding IDEs so that errors are displayed right in the editor. However, all of those tools expect developers to know about security and to know how to use them.

The thing about open source projects is that anyone can look over the code to find vulnerabilities, but there is no way to guarantee that someone with the right expertise is actually looking at the code. Many developers may not have the secure coding knowledge to avoid some of the more common mistakes, and unless someone sends a vulnerability report, they would never know about the flaws that need to be addressed. Project maintainers may also not know who is using the code and needs to be notified when issues are fixed.

GitHub has automated some of these tasks; for example, it automatically notifies project owners if any of the dependencies used in their code has a known vulnerability or has been updated with a new version.

And now, with scanning results showing up directly in pull requests, development teams that may not have the time or expertise to look through the codebase for vulnerabilities know what to fix. This brings secure coding to a broader group of developers without changing their workflow or requiring extra tools to install and run. They just have to enable the feature in their accounts.

It also benefits large development teams with millions of lines of code, because the analysis engine can look for similar vulnerabilities in different parts of the codebase, even if the actual code blocks are slightly different. This lets developers identify and eliminate classes of vulnerabilities from their software.
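Variant analysis of this sort treats code as a syntax tree rather than as text, so the same pattern matches even when variable names and formatting differ. A minimal sketch of the idea in Python, using the standard ast module (this illustrates the concept only, not how CodeQL itself is implemented):

```python
import ast

def find_eval_calls(source):
    """Flag every call to the built-in eval(), however the surrounding
    code is written: a toy version of matching a vulnerability pattern
    instead of an exact code block."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append(node.lineno)
    return findings
```

Two syntactically different snippets, `x = eval(user_input)` and `result = eval(fetch())`, both match, because the query targets the structure of the call rather than its exact text.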

Just because GitHub is taking on the challenge of scanning code for vulnerabilities doesn’t mean everything is set in the world of secure software. Developers still have to fix the issues that are displayed in the pull request. Project maintainers have to communicate to downstream users when issues are resolved. And even though a significant number of open source projects are maintained on GitHub, there are plenty that aren’t, and those project maintainers will still need to have a way to receive and act on vulnerability reports as well as do their own security scanning.

]]>