<![CDATA[Decipher]]> https://decipher.sc Decipher is an independent editorial site that takes a practical approach to covering information security. Through news analysis and in-depth features, Decipher explores the impact of the latest risks and provides informative and educational material for readers curious about how security affects our world. Thu, 17 Sep 2020 00:00:00 -0400 en-us info@decipher.sc (Amy Vazquez) Copyright 2020 3600 <![CDATA[Tech, Privacy Groups Urge Senators to Oppose EARN IT Act]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/tech-privacy-groups-urge-senators-to-oppose-earn-it-act https://duo.com/decipher/tech-privacy-groups-urge-senators-to-oppose-earn-it-act Thu, 17 Sep 2020 00:00:00 -0400 With the EARN IT Act still awaiting action in the Senate, more than 25 technology, civil liberty, and open society organizations have sent a letter to senators expressing strong opposition to the bill and urging them to vote against it.

The bill has come under sharp criticism from privacy and security experts for a number of reasons, most notably for the effect that it would have on the ability of platform providers to offer encrypted services. The stated purpose of the bill is to prevent the publication and spread of child exploitation material, and it would do so by setting up a commission that would create a set of voluntary best practices that platform providers would be encouraged to comply with in order to maintain their protection from prosecution under Section 230 of the Communications Act. That section is what protects providers from being held liable for the content people post on their platforms.

EARN IT would essentially discourage platform providers from offering services such as end-to-end encrypted messaging or email because they are not able to examine the contents of the messages for illegal material. Although the text of the bill does not explicitly mention encryption or secure messaging services, the effect on providers of those services would be severe. In the letter sent to senators this week, the groups say the EARN IT Act would undermine and disincentivize providers from offering those services, even after the addition of an amendment that says providers would not be liable for violating the law simply because they offer encrypted services.

“As amended, the bill invites repeated and protracted litigation about whether a provider’s decision to provide encrypted services was the entire cause for its failure to adopt certain practices to combat CSAM. For example, the amendment does not clearly protect providers against liability if they do not comply with mandates to employ certain techniques that are incompatible with secure end-to-end encryption,” the letter says.

“Techniques such as client-side scanning and sender authentication can give law enforcement access to communications content. But, each technique undermines the promise of end-to-end encryption—that only the sender and recipient will be able to understand the content of the communication. Use of such techniques would be incompatible with a secure end-to-end encrypted service.”

The letter is signed by the Electronic Frontier Foundation, Center for Democracy and Technology, Fight for the Future, Freedom of the Press Foundation, and many other groups and also raises concerns about the commission the bill would establish to set out the best practices for providers. The commission would be headed by the attorney general and include many other members from law enforcement agencies.

“The Commission is free to, and likely will, recommend against the offering of end-to-end encryption, and recommend providers adopt techniques that weaken the cybersecurity of their product. While these best practices would be voluntary, they could result in reputational harm to providers if they choose not to comply, and inform how judges evaluate a provider’s liability,” the letter says.

The EARN IT Act was introduced in March and amended in July and has been on the Senate legislative calendar since late July.

]]>
<![CDATA[US Charges Five Alleged Members of APT41 Group]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/us-charges-five-alleged-members-of-apt41-group https://duo.com/decipher/us-charges-five-alleged-members-of-apt41-group Wed, 16 Sep 2020 00:00:00 -0400

The Department of Justice on Wednesday announced charges against five Chinese nationals and the arrest of two Malaysian men it alleges are connected to the APT41 attack group that is responsible for a high volume of attacks around the world in recent years, including intrusions at software, pharmaceutical, and technology companies, as well as non-profits and universities.

The indictments allege that the five men were involved in a variety of intrusions in the U.S. and elsewhere, stole money, digital assets, and intellectual property, and helped create and maintain an extensive network of compromised servers, C2 domains, fraudulent accounts, and other assets. The men charged in two separate indictments are Zhang Haoran, Tan Dailin, Jiang Lizhi, Qian Chuan, and Fu Qiang. In addition to the indictments, the U.S. government also worked with Malaysian authorities to arrest two unnamed businessmen that the Justice Department alleges were involved with helping two of the alleged APT41 attackers in some of their operations.

“The scope and sophistication of the crimes in these unsealed indictments is unprecedented. The alleged criminal scheme used actors in China and Malaysia to illegally hack, intrude and steal information from victims worldwide,” said Michael R. Sherwin, Acting U.S. Attorney for the District of Columbia. “As set forth in the charging documents, some of these criminal actors believed their association with the PRC provided them free license to hack and steal across the globe. This scheme also contained a new and troubling cyber-criminal component – the targeting and utilization of gaming platforms to both defraud video game companies and launder illicit proceeds.”

The indictments are part of a concerted effort by the United States government to expose and deter intrusion campaigns conducted by teams associated with the Chinese government. In February, the U.S. charged four members of China’s People’s Liberation Army with the intrusion that led to the Equifax data breach and the Cybersecurity and Infrastructure Security Agency (CISA) regularly publishes details of tools and malware used by attackers affiliated with the Chinese government. In fact, this week CISA published a detailed warning about attackers working for the Ministry of State Security exploiting flaws in networking gear and VPNs.

APT41 is a prolific, well-funded, and accomplished group that has been active for nearly 10 years and has several high-profile intrusions to its name. The group has operators and developers with expertise in both Linux and Windows and has a broad toolset at its disposal, including custom exploits and malware. The group also employs public and open source tools and is known to focus on vulnerabilities that have been public for months or years but have not been patched in target organizations. One of the indictments announced Wednesday alleges that Jiang, Qian, and Fu conducted operations on behalf of a Chinese company called Chengdu 404 Network Technology that included intrusions at more than 100 companies around the world, as well as at government agencies in India and Vietnam.

“The defendants associated with Chengdu 404 employed sophisticated hacking techniques to gain and maintain access to victim computer networks. One example was the defendants’ use of ‘supply chain attacks,’ in which the hackers compromised software providers and then modified the providers’ code to facilitate further intrusions against the software providers’ customers. Another example was the hackers’ use of C2 ‘dead drops,’ which are seemingly legitimate web pages that the hackers created, but which were surreptitiously encoded instructions to their malware,” the announcement says.

The operations attributed to APT41 run the gamut and some of them are quite well known, including an intrusion in which the group was able to gain access to the source code for the CCleaner utility and insert malicious code. More than two million copies of the compromised version of the utility were downloaded, and the attackers then tried to use their position to run further attacks on a small number of the computers involved. The group is also associated with the use of the infamous Winnti malware, which has been used in several attacks, many of which targeted video game companies or players.

“Today’s announcement demonstrates the ramifications faced by the hackers in China but it is also a reminder to those who continue to deploy malicious cyber tactics that we will utilize every tool we have to administer justice,” said FBI Deputy Director David Bowdich.

]]>
<![CDATA[House Passes IoT Security Bill]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/house-passes-iot-security-bill https://duo.com/decipher/house-passes-iot-security-bill Tue, 15 Sep 2020 00:00:00 -0400

The House of Representatives has unanimously passed a bipartisan bill setting minimum security requirements for Internet of Things devices connected to federal networks. The next step: get the Senate to vote on its version of the bill.

The Internet of Things Cybersecurity Improvement Act would require the National Institute of Standards and Technology to create standards and guidelines for how federal agencies should use and manage IoT devices purchased by the government. The list includes computers, mobile devices, and pretty much anything that can be connected to the Internet.

“IoT devices are more and more common and fulfill greater and greater functions in our government,” said Rep. Robin Kelly (D-Ill.), one of the backers of the bill. “By establishing some baseline standards for the security of these devices, we will make our country and the data of American citizens more secure.”

Security Standards

There are 10 billion IoT devices already in use, and Gartner estimates more than 25 billion devices will be online by 2021. IoT is well-entrenched in the federal government, with different agencies heavily relying on the massive amount of data collected in real-time by these devices. The State Department, for example, has sensors in all its embassies around the world collecting air quality data. Internet-connected devices promise a great deal of functionality, but they are also highly vulnerable to attack. Despite the growing threat against these devices, there currently are no national standards for IoT security.

“Currently there are no national standards to ensure the security of these connected devices,” Rep. Carolyn Maloney (D-NY), chairperson of the House Oversight and Reform Committee, said during the floor vote. “Protecting our nation from cyber threats is an ongoing, interactive process that requires established, baseline standards and constant vigilance.”

The minimum security standards are for devices purchased and used by the federal government. Theoretically, manufacturers can have two versions of the device—one that meets (or exceeds) the minimum security standards as defined by NIST that the government can buy, and one that doesn’t have to worry about the government requirements and is available to anyone. In reality, it is more likely that IoT vendors will adopt the same requirements across the board instead of trying to support two versions of their products. The hope is that the minimum security requirements will become the default industry standard and also apply to commercial devices, said Sen. Mark Warner (D-Va), who introduced the Senate version of the bill.

“Frankly, manufacturers today just don’t have the appropriate market incentives to properly secure the devices they make and sell,” Warner said in a statement.

Fixing Flaws

The legislation requires the Office of Management and Budget to review existing federal government information security policies and develop guidance so that agencies can meet NIST’s recommendations. NIST and OMB would have to update IoT security standards, guidelines, and policies at least every five years.

IoT manufacturers will also have to develop basic patching and remediation capabilities for their devices so that vulnerabilities can be fixed. Vendors would have to notify agencies of any vulnerabilities that could leave the government vulnerable to attack. The Department of Homeland Security would need to publish guidance on coordinated vulnerability disclosures for contractors and vendors.

The ability to fix vulnerabilities when they are found is key for IoT security. While there are higher-end devices that can be updated (though not always easily), a large number of IoT devices do not receive security updates at all. Most of them don’t even have a mechanism that allows for updates. There should be a way for agencies to install updates, and there has to be a way for vendors to receive vulnerability disclosure reports, said Rep. Will Hurd (R-Texas), another backer of the bill. This law would help stop insecure devices from entering the federal supply chain at all, he said.

“If you’re going to introduce a widget into the digital infrastructure of the federal government and it has a known vulnerability, you either have to patch it or have some way to address it,” said Hurd.

Road to Regulation

Reps. Hurd and Kelly introduced this bill in the House Oversight Committee’s IT subcommittee back in March 2019. It passed the full committee in June, and then stayed in limbo until this month. The Senate Homeland Security and Governmental Affairs Committee passed the Senate version of the bill in June 2019. The Senate has not yet picked up the bill for a floor vote, and it isn’t clear when that may happen.

The United Kingdom has been working on similar regulations, but for consumers. Internet-connected devices will carry a label indicating whether they meet security standards, giving consumers information about how secure a device is before purchase. The minimum requirements are unique default passwords, a stated length of time the device will receive security updates, and a contact method for reporting vulnerabilities in the product. Once the program becomes mandatory, companies will not be allowed to sell their products without the labels. It isn’t clear how the UK will enforce that when so many of these devices are manufactured in and shipped from other countries.

California Senate Bill 327 attempted to fill the void left by the lack of a national standard for IoT security. SB-327, which was passed in 2018 and took effect in January 2020, requires “reasonable security feature or features that are appropriate to the nature and function of the device.”

“The Internet of Things is showing just how innovative humans can be, but like most innovations, IoT has the potential to be misused and abused by bad actors,” said Hurd. “If our security practices for using the Internet of Things does not evolve as our use of it grows, then we will find out how innovative criminals, hackers and hostile foreign governments can be.”

]]>
<![CDATA[Chinese State-Sponsored Attackers Target F5, VPN Flaws]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/chinese-state-sponsored-attackers-target-f5-vpn-flaws https://duo.com/decipher/chinese-state-sponsored-attackers-target-f5-vpn-flaws Mon, 14 Sep 2020 00:00:00 -0400

Attackers affiliated with the Chinese Ministry of State Security have been exploiting recently disclosed vulnerabilities in popular networking and VPN appliances during recent intrusion attempts against federal government agencies and enterprises, U.S. cybersecurity officials said.

The attacks are part of a long-running campaign by groups in China that U.S. officials say have been working directly for or at the direction of the MSS for more than 10 years. In an alert published Monday, the Cybersecurity and Infrastructure Security Agency (CISA) said that MSS-affiliated teams have been targeting the recent critical flaw in the F5 BIG-IP networking appliances, as well as publicly disclosed vulnerabilities in Citrix and Pulse Secure VPNs. Details of those vulnerabilities have been public for some time, and patches are available for all three, but attackers are targeting organizations that have not yet upgraded. Attacks against the F5 flaw (CVE-2020-5902) began almost immediately after the company disclosed it on June 30 and CISA said it has responded to several incidents in government agencies and enterprises involving successful exploits against the bug.

“CISA analysts consistently observe targeting, scanning, and probing of significant vulnerabilities within days of their emergence and disclosure. This targeting, scanning, and probing frequently leads to compromises at the hands of sophisticated cyber threat actors. In some cases, cyber threat actors have used the same vulnerabilities to compromise multiple organizations across many sectors. Organizations do not appear to be mitigating known vulnerabilities as quickly as cyber threat actors are exploiting them,” the CISA alert says.

The MSS-affiliated attackers have also been targeting serious flaws in VPN products from Pulse Secure and Citrix. VPNs have always been attractive targets for attackers, but the large-scale shift to remote work this year has led to a huge increase in VPN usage, which in turn draws even more attention from attackers, since VPN appliances can be ideal entry points into corporate networks.

“CISA has conducted multiple incident response engagements at Federal Government and commercial entities where the threat actors exploited CVE-2019-11510—an arbitrary file reading vulnerability affecting Pulse Secure VPN appliances—to gain access to victim networks. Although Pulse Secure released patches for CVE-2019-11510 in April 2019, CISA observed incidents where compromised Active Directory credentials were used months after the victim organization patched their VPN appliance,” the CISA advisory says.
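For defenders reviewing their own web or appliance logs, the request patterns publicly associated with these two bugs are straightforward to look for. The short sketch below is illustrative only and is not taken from the CISA advisory: the log path and the exact signature strings are assumptions based on widely published exploit write-ups, and a real hunt would use vendor and CISA guidance.

# Hedged sketch: flag access-log lines matching path patterns publicly
# associated with CVE-2019-11510 (Pulse Secure file read) and CVE-2020-5902
# (F5 BIG-IP TMUI RCE). Log path and signatures are illustrative assumptions.
import re
import sys

SIGNATURES = {
    "CVE-2019-11510": re.compile(r"/dana-na/.*\?/dana/html5acc/guacamole/"),
    "CVE-2020-5902": re.compile(r"/tmui/login\.jsp/\.\.;/"),
}

def scan(log_path: str) -> None:
    # Read the log line by line and report any line matching a known pattern.
    with open(log_path, "r", errors="replace") as log:
        for lineno, line in enumerate(log, start=1):
            for cve, pattern in SIGNATURES.items():
                if pattern.search(line):
                    print(f"{log_path}:{lineno}: possible {cve} exploit attempt")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else "access.log")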

State-sponsored attack groups are often associated with the use of advanced tools and techniques and exploits for zero days. While many groups do employ those tactics when necessary, attackers typically will take the path of least resistance, and CISA said that much of the activity it has seen from the Chinese state-sponsored groups in recent months has included the use of publicly available information, exploits, tools, and techniques. It’s not unusual for attackers to copy off others’ papers, and CISA said in its advisory that the MSS-affiliated groups have been making extensive use of tools such as Cobalt Strike and Mimikatz, and also utilize public vulnerability databases and other information sources commonly used by defenders. Databases of vulnerabilities maintained by MITRE and the National Institute of Standards and Technology are key sources of information for defenders, but attackers rely on them to help create attack plans, too.

“While using these data sources, CISA analysts have observed a correlation between the public release of a vulnerability and targeted scanning of systems identified as being vulnerable. This correlation suggests that cyber threat actors also rely on Shodan, the CVE database, the NVD, and other open-source information to identify targets of opportunity and plan cyber operations,” the CISA advisory says.

“Together, these data sources provide users with the understanding of a specific vulnerability, as well as a list of systems that may be vulnerable to attempted exploits. These information sources therefore contain invaluable information that can lead cyber threat actors to implement highly effective attacks.”

The activity that CISA describes in the new advisory is tied to groups that the U.S. government and law enforcement agencies have been tracking for some time. In July the Department of Justice disclosed that two Chinese citizens working with the MSS had been indicted by a federal grand jury for attacks against government and private organizations in the U.S.

]]>
<![CDATA[Attackers Verify O365 Credentials On Azure AD]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/attackers-verify-o365-credentials-on-azure-ad https://duo.com/decipher/attackers-verify-o365-credentials-on-azure-ad Fri, 11 Sep 2020 00:00:00 -0400

Attackers are cross-checking stolen Office 365 credentials on Azure Active Directory in real-time after victims type them into a malicious phishing page.

When users enter their Office 365 credentials into a phishing page, the malicious page makes a call to the Office 365 API to instantly verify the credentials against the organization’s Azure Active Directory infrastructure, Armorblox researchers said. Authentication APIs are commonly used by applications and servers to access certain types of user data. The attackers are cross-checking credentials in real-time and accessing the account before the victim even realizes something went wrong and takes steps to fix the situation.
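As a rough illustration of what such a real-time check looks like, the sketch below validates a username and password against Azure AD using the OAuth 2.0 resource owner password credentials (ROPC) grant, which returns an immediate pass or fail that a phishing backend can act on. This is a generic example rather than the specific code Armorblox observed, and the tenant, client ID, and account values are placeholders.

# Minimal sketch (not the attackers' actual code) of a real-time credential
# check against Azure AD via the OAuth 2.0 ROPC grant. Tenant, client_id,
# username, and password are placeholders.
import requests

def credentials_are_valid(tenant: str, client_id: str,
                          username: str, password: str) -> bool:
    token_url = f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"
    resp = requests.post(token_url, data={
        "grant_type": "password",          # ROPC flow
        "client_id": client_id,
        "scope": "https://graph.microsoft.com/.default",
        "username": username,
        "password": password,
    })
    # A 200 response carrying an access_token means Azure AD accepted the
    # credentials; otherwise the JSON body carries an error such as
    # "invalid_grant".
    return resp.status_code == 200 and "access_token" in resp.json()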

"This immediate feedback allows the attacker to respond intelligently during the attack," Armorblox wrote.

If the verification is successful, the user is redirected to zoom.com, the videoconferencing site. If the entered credentials are incorrect, the victim is redirected to login.microsoftonline.com to potentially hide the attempt to steal credentials. If the credentials are wrong, the user would not be alarmed or suspect a phishing attack. If the entered password text is empty or too short, the user is forced to reenter the values.

“Our threat researchers verified the real-time nature of the site by updating the script with a test login and a dummy password and saw a failed login attempt from Provo, Utah in the Azure Active Directory Sign-In portal,” the researchers said.

The phishing scams are likely targeted and not spray and pray

There is no special vulnerability being exploited here—the attackers are just being very creative about how they are using the APIs.

Armorblox analyzed a campaign in which the victim, a senior executive at a large enterprise company, received a message containing a file designed to look like a payment remittance report. When the victim tried to open the file attachment, they saw a page resembling the organization’s Office 365 sign-in page with a message, “Because you’re accessing sensitive info, you need to verify your password.” The phishing messages were sent using Amazon Simple Email Service to bypass DKIM (DomainKeys Identified Mail) and SPF (Sender Policy Framework) checks on the mail server.

Armorblox researchers concluded this was part of a very targeted spear-phishing campaign, as the phishing page used the correct domain name. The enterprise had recently changed domains so that the email address and Active Directory used different domain names. The attackers were aware of the change, leading researchers to believe the attackers had put in some effort researching the organization and the executive. The attack page also appears to not have been used all that often, suggesting that attackers are very careful about which individuals they are targeting.

“Our estimates show there have been 120 odd visits to this website globally since the beginning of June. The sparse number shows that the phishing scams are likely targeted and not spray and pray,” Armorblox said.

This was not a fly-by-night, amateur operation. The phishing email was generated via a customizable toolkit. The kit itself appeared to be well-written, with thorough code comments and instructions on how to customize the kit to point to a specific target, Armorblox said. The operation was also global in scope.

Remediation will need to be "thorough."

The attacker “customized a Malay language toolkit to attack an executive based in southwest United States using a domain registered in Singapore that’s hosted in the northwest United States by a hosting company based out of India,” Armorblox said.

Attackers typically make the effort to steal Office 365 credentials because those usernames and passwords may be protecting more than just documents and other files. The organization may be relying on those usernames and passwords to handle authentication for its network environment. If attackers get their hands on legitimate Office 365 credentials, they also have access to all the sites and applications federated with the organization’s Azure Active Directory.

“The attacker is also immediately aware of a live compromised credential and allows him to potentially ingratiate himself into the compromised account before any remediation,” Armorblox said.

Remediation in this case will need to be "thorough," Armorblox said. Administrators will need to look at all outbound emails that have been sent, check to see what kind of changes have been made to accounts (such as auto-forwarding messages to an external mailbox), and review any third-party apps that have been granted access to Office 365. Administrators will also need to go over all activity across all Office 365 properties, such as Word, Excel, and OneDrive.
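One of those checks, reviewing mailbox rules for unauthorized forwarding, can be scripted against the Microsoft Graph API. The sketch below is a hedged example rather than a prescribed remediation procedure: the user identifier is a placeholder, and it assumes an access token with mail-read permissions has already been obtained.

# Hedged sketch of one remediation step: list a user's inbox rules via
# Microsoft Graph and flag any that forward or redirect mail. The access
# token and user_id are placeholders.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def find_forwarding_rules(access_token: str, user_id: str):
    url = f"{GRAPH}/users/{user_id}/mailFolders/inbox/messageRules"
    resp = requests.get(url, headers={"Authorization": f"Bearer {access_token}"})
    resp.raise_for_status()
    suspicious = []
    for rule in resp.json().get("value", []):
        actions = rule.get("actions", {})
        # forwardTo and redirectTo are the rule actions attackers typically
        # add to siphon mail out of a compromised mailbox.
        if actions.get("forwardTo") or actions.get("redirectTo"):
            suspicious.append(rule.get("displayName", "<unnamed rule>"))
    return suspicious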

Organizations need to think about how they protect Office 365 users, since they are highly attractive targets and vulnerable to attack. Compromising Office 365 credentials isn’t an attack technique exclusive to phishing groups. Microsoft researchers believe Russia-linked threat group APT28 is using password-spraying and brute-force to harvest Office 365 credentials belonging to organizations in the United States and United Kingdom directly involved in elections.

APT28 is likely targeting Office 365 in order to be able to move laterally through organization networks or mount espionage campaigns. Microsoft said APT28 unsuccessfully targeted nearly 7,000 Office 365 accounts across 28 organizations between Aug. 18 and Sept. 3.

]]>
<![CDATA[Raccoon Attack Can Compromise Some TLS Connections]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/raccoon-attack-can-compromise-some-tls-connections https://duo.com/decipher/raccoon-attack-can-compromise-some-tls-connections Thu, 10 Sep 2020 00:00:00 -0400

A team of academic researchers has discovered a timing vulnerability in some versions of the TLS specification that can allow an attacker to decrypt encrypted connections if some highly specific conditions are met. Although the flaw is potentially quite dangerous, the researchers said it is very difficult to exploit and is likely not a method a real world attacker would use.

The attack that the team developed is highly complex, and the flaw only exists in some specific implementations of TLS that use static cipher suites for Diffie-Hellman key exchange or reuse the ephemeral keys in TLS-DHE cipher suites. In order to be vulnerable to the attack, a server must meet one of those conditions and an attacker must also be able to run highly precise timing measurements against the server. That timing measurement is essentially a side channel that allows the attacker to determine whether the first byte of the shared secret of the DH key exchange between the server and the client is zero.

The attack, which is known as Raccoon, affects TLS 1.2 and previous versions, which specify that any leading bytes beginning with zero in the premaster secret are stripped out. The premaster secret is the shared key used by the client and server to compute the subsequent TLS keys for each session.
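The leading-zero behavior is easy to see in a few lines of code. The sketch below uses a toy prime rather than a real TLS group (an assumption made purely for readability) to show how the fixed-length encoding of a Diffie-Hellman shared secret and the zero-stripped encoding that TLS 1.2 and earlier feed into key derivation can differ in length.

# Toy illustration of the leading-zero stripping described above. The prime
# is a small Mersenne prime chosen for readability; real TLS DH groups are
# 2048 bits or larger.
import secrets

p = (1 << 61) - 1                      # toy prime, not a real TLS DH group
g = 2
byte_len = (p.bit_length() + 7) // 8   # fixed encoding length for this group

client_priv = secrets.randbelow(p - 2) + 1
server_priv = secrets.randbelow(p - 2) + 1
client_pub = pow(g, client_priv, p)

# Server-side computation of the DH shared secret (the premaster secret).
shared = pow(client_pub, server_priv, p)

fixed = shared.to_bytes(byte_len, "big")   # length never varies
stripped = fixed.lstrip(b"\x00")           # what TLS 1.2 and earlier hash

# When the secret happens to start with one or more zero bytes, the stripped
# encoding is shorter and the key derivation hashes less data, which is the
# timing difference the Raccoon attack measures.
print(len(fixed), len(stripped))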

“Since the resulting premaster secret is used as an input into the key derivation function, which is based on hash functions with different timing profiles, precise timing measurements may enable an attacker to construct an oracle from a TLS server. This oracle tells the attacker whether a computed premaster secret starts with zero or not,” the description of the attack says.

“Based on the server timing behavior, the attacker can find values leading to premaster secrets starting with zero. In the end, this helps the attacker to construct a set of equations and use a solver for the Hidden Number Problem (HNP) to compute the original premaster secret established between the client and the server.”

A successful attack using this technique could allow the adversary to get access to any of the encrypted data sent between the client and server, including sensitive information such as account passwords, financial data, or corporate documents. However, not only does the attacker need to find a server that uses DH(E) and reuses keys across sessions, he also needs to be able to observe the first connection between the client and server. The attacker also needs to be close enough to the server to conduct the precise timing measurements required. That’s a long list of requirements, and although it’s not out of reach for some classes of attackers, the researchers said adversaries are more likely to use other, simpler methods.

"My colleagues then implemented math. And we got the attack working.”

However, despite all of the complex math that the research team from Ruhr University Bochum, Tel Aviv University, Paderborn University, and the Bundesamt für Sicherheit in der Informationstechnik performed to perfect the technique, a real world attacker would not necessarily need to understand all of the details in order to execute the attack.

“I don't think the attack requires an advanced understanding to pull off. If one would write a tool to do the math everyone could use it. The attack's difficulty rises from the rare circumstances in which the attack works and the complexity of performing high precision timing measurements. The side channel is only a few thousand CPU cycles such that noise on the network or on the victim server makes the measurements noisy,” Robert Merget, the lead author of the paper, said in an email.

Merget maintains the TLS-Attacker framework, which analyzes TLS libraries, and he came upon the seed idea for the Raccoon attack while researching a different type of attack.

“I looked at the key derivation in SSL 3 looking for another attack and then noticed that this could not be implemented in constant time. After a quick thought, I knew that this would also mean that the newer TLS PRF's could also not be implemented in constant time for DHE. The attack is much easier to see when looking at SSL 3, as the hash function call, which creates the side channel is in plain sight,” Merget said.

“In the newer standards its implicit through the HMAC call, which is really a detail of HMAC. I didn't know if or how this could be exploited though. I consulted my colleagues and analyzed the problem further, as this looked very similar to a Bleichenbacher attack (it is a little different as you cannot really choose the number you multiply with). We then noticed that the Hidden Number Problem from Dan Boneh et al. was exactly what you needed to reconstruct the shared secret. My colleagues then implemented math. And we got the attack working.”

The researchers disclosed the details of the attack to browser vendors, large server operators, and vendors that were affected. Some F5 BIG-IP appliances are affected, and the company has published guidance on mitigating the vulnerability. Mozilla has disabled the affected cipher suites in Firefox, a move the company was already planning. Some versions of OpenSSL are affected, as well, and the maintainers have moved all of the remaining Diffie-Hellman cipher suites to the “weak-ssl-ciphers” list as a result.
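Administrators who want a quick read on their own exposure can test whether a server will still negotiate a finite-field DHE cipher suite over TLS 1.2, which is one of the preconditions for the attack (key reuse, the other precondition, cannot be detected this way). The sketch below is a rough check, not a full vulnerability test, and the hostname is a placeholder.

# Hedged check: does this server negotiate a finite-field DHE suite over
# TLS 1.2? Offering only DHE suites forces the server to either pick one or
# refuse the handshake. Hostname is a placeholder.
import socket
import ssl

def negotiates_dhe(host: str, port: int = 443) -> bool:
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.maximum_version = ssl.TLSVersion.TLSv1_2
    ctx.set_ciphers("DHE")          # offer only finite-field ephemeral DH suites
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.cipher()[0].startswith(("DHE", "EDH"))
    except (ssl.SSLError, OSError):
        return False   # handshake refused: server would not pick a DHE suite

print(negotiates_dhe("example.com"))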

]]>
<![CDATA[Attackers Use Cloud Tool to Target Docker, Kubernetes]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/attackers-use-cloud-tool-to-target-docker-kubernetes https://duo.com/decipher/attackers-use-cloud-tool-to-target-docker-kubernetes Wed, 09 Sep 2020 00:00:00 -0400

An attack group is using an open source visualization and monitoring tool for cloud environments to compromise Docker and Kubernetes systems, researchers warned.

The TeamTNT attack group uses Weave Scope, a monitoring, visualization, and control software for Docker, Kubernetes, Distributed Cloud Operating System (DC/OS), and Amazon Web Services Elastic Compute Cloud (EC2), as a backdoor to map the targeted environment and execute commands, security company Intezer and Microsoft said in separate blog posts. A legitimate monitoring tool integrated into cloud platforms, Weave Scope lets users watch a container’s running processes and network connections, as well as run shells in clusters as root. By default, Weave Scope does not require authentication.

“When abused, Weave Scope gives the attacker full visibility and control over all assets in the victim’s cloud environment, essentially functioning as a backdoor,” wrote Intezer security researcher Nicole Fishbein.

TeamTNT is taking advantage of a misconfiguration that exposes a Docker API port to deploy the software as a type of backdoor to the cloud environment. Microsoft said it has seen cluster administrators enable public access to this interface and other similar services.

Attackers “take advantage of this misconfiguration and use the public access to compromise Kubernetes clusters,” Microsoft said.

The group creates a new privileged container and then uses the exposed port to mount the container’s file system on to the targeted server. The container both loads and executes cryptocurrency miners. The second phase of the attack involves setting up a local privileged user on the host server and connecting to it over SSH.

Once the attackers are connected to the host server, they can use Weave Scope to map the infrastructure, monitor individual systems, install applications, consume computing resources, and execute shell commands in containers.

“Not only is this scenario incredibly rare, to our knowledge this is the first time an attacker has downloaded legitimate software to use as an admin tool on the Linux operating system,” Fishbein said.

TeamTNT previously employed a worm against Docker and Kubernetes systems. The group has been linked to a cryptocurrency-mining botnet which steals AWS credentials from servers. The attackers have also relied on malicious Docker images uploaded to Docker Hub to compromise cloud environments. The switch to Weave Scope meant TeamTNT no longer needed to rely on malware on compromised machines.

Attackers abusing built-in administrator capabilities and tools have an easier time hiding their activities from security teams and network administrators. Security tools can look for signs of malware, but in this scenario, there is no malware. The defenders have to try to identify when a legitimate tool is being used in an unauthorized manner.

Organizations should close the exposed Docker API port and block incoming connections to port 4040, Microsoft recommended. Weave Works, the company behind Weave Scope, has also released an advisory on how administrators can protect the tool from abuse. The company said Scope should not be run as a public server, and it should run in read-only mode, not with administrator privileges. Weave Works also recommended deploying the tool with an authentication service so that only authorized users can access the tool.
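A minimal first check along those lines is confirming from the network that neither interface is reachable. The sketch below assumes the default unencrypted Docker API port (2375), which the advisories reference only as "the exposed Docker API port," plus the Weave Scope port 4040 mentioned above; the target address is a placeholder.

# Hedged sketch: check whether a host exposes the unauthenticated Docker API
# (default 2375, assumed) or the Weave Scope UI (4040) to the network.
import socket

HOST = "10.0.0.5"   # placeholder address of the host being audited

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    # A successful TCP connection means the port is reachable from here.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port, service in [(2375, "Docker API"), (4040, "Weave Scope")]:
    state = "EXPOSED" if port_open(HOST, port) else "closed"
    print(f"{service} (port {port}): {state}")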

"Misconfigured services seem to be among the most popular and dangerous access vectors when it comes to attacks against Kubernetes clusters," said Microsoft.

]]>
<![CDATA[Traditional is Best When Converting Stolen Money to Clean Cash]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/traditional-is-best-when-converting-stolen-money-to-clean-cash https://duo.com/decipher/traditional-is-best-when-converting-stolen-money-to-clean-cash Wed, 09 Sep 2020 00:00:00 -0400

When talking about cyberattacks, the theft of money is often treated as the end of the story. However, the moment the money leaves the bank account is actually the beginning of a new story, one that involves a shadowy web of money mules, intermediate accounts, and front businesses.

An analysis of the money’s winding journey from victim accounts to criminal hands found that criminals rely on traditional money laundering methods of money mules, front companies, cash businesses, and investments in high-end items, according to a joint Follow The Money report from BAE Systems and SWIFT [the Society for Worldwide Interbank Financial Telecommunication]. While some groups prefer one method over another, many combine different techniques in their quest for converting the money they stole into “clean” cash that can’t be traced by law enforcement, the report said.

The stolen money in this analysis typically came from attacks on a bank’s money transferring system (such as SWIFT’s messaging system), or attacks against banking infrastructure such as ATMs and personal accounts. The goal is to make the money’s origin appear legitimate, such as employee payroll or the proceeds of a property sale.

“This report focuses on money laundering related activities necessary for cyber attackers to conduct and ‘cash out’ a successful attack and avoid the money subsequently being traced,” said Simon Viney, cybersecurity financial services sector lead at BAE Systems Applied Intelligence.

Recruiting Helpers

Criminals heavily rely on money mules to move the stolen funds around until they can be safely cashed out. The most common method remains withdrawing money from ATMs and spending it in cash businesses controlled by the criminals, or buying and selling expensive items.

These money mules take on other roles as well, such as using their own personal accounts to receive money and transfer it to other accounts; opening new accounts using fake IDs; and re-shipping expensive items purchased with stolen money to someone else.

Criminals often use legitimate-sounding job advertisements to recruit unsuspecting job seekers into the money mule operation, and these individuals generally are unaware they are being used to transform the stolen money into legitimate income streams. Recruitment efforts increasingly target young adults looking for a way to pay for higher education (college, graduate school, etc.), and adults who have recently lost their jobs.

In cases where banks verify new accounts with know-your-customer checks, criminal groups may recruit insiders at the financial institution to help avoid or undermine the process.

Some gangs plan ahead, setting up the bank accounts that will be used to transfer money months in advance so that they seem more legitimate. It’s also possible to buy compromised bank accounts, which criminals use to transfer money in and out without the original owner noticing.

Cleaning Money

ATM cashouts remain common, but some criminal gangs set up front companies to pass the money through the business. Tracing the origin of funds in a business with multiple revenue streams and outgoing expenses is downright difficult. The report noted that cybercriminals seem to prefer setting up textile, garment, fishery, and seafood businesses as their fronts, especially in parts of East Asia.

Casinos are also popular, as criminals use the stolen money to buy chips for gambling. When the chips are returned to the casino’s cashier to receive the money—the gambling winnings—the illegal money is transformed into money obtained legitimately from a legal source.

Stolen funds can also be converted into assets other than cash that retain their value and are less likely to raise red flags with law enforcement. The report identified high-end items such as expensive watches and jewelry, gold bars, fine art, luxury penthouses, and even tropical islands.

How cybercriminals cash out and spend stolen funds says a lot about the gang’s level of professionalism and experience, SWIFT and BAE Systems said in the report. Inexperienced criminals often make extravagant purchases, which law enforcement authorities are more likely to notice.

Cryptocurrency Connection

Cryptocurrencies may be hot in certain criminal circles, but when it comes to money laundering, traditional methods are still preferred.

"Identified cases of laundering through cryptocurrencies remain relatively small compared to the volumes of cash laundered through traditional methods," SWIFT said in the report.

However, digital transactions have their own allure because it is easier in some cases to open up new accounts, especially since most exchanges don’t bother with the know-your-customer checks that banks perform during account creation. It becomes harder to track the origin of the transactions on a high-activity account, especially after the money has moved around multiple times. Criminals are increasingly using services such as mixers and tumblers, which obscure the source of cryptocurrency transactions by blending stolen money with large amounts of legitimate transactions. There are also many ways to convert cryptocurrency to fiat currency other than linking a bank account to the exchange account. One example is buying debit cards loaded with cryptocurrency, which can be used at special ATMs to withdraw cash or in regular card transactions.

SWIFT said there are online marketplaces where users with nothing but an email address can use cryptocurrency to buy high-end products, land, and real estate.

A criminal gang adapted the traditional ATM cashout attack to buy cryptocurrency with the withdrawn money, rather than buying something with the stolen cash. An Eastern European gang used the stolen money to set up its own Bitcoin farm in East Asia and generate Bitcoins. The newly-minted Bitcoins were spent in Western Europe. When the gang was arrested, authorities found 15,000 Bitcoins valued at over $109 million, two sports cars, and jewelry worth $557,000 in the group leader’s house, SWIFT said.

Cryptocurrency appears to be the laundry method of choice for the Lazarus Group, a well-known attack group believed to be sponsored by the North Korean government. Lazarus typically passes cryptocurrency through accounts on different exchanges multiple times to “obfuscate the origin of the funds.” The money—in cryptocurrency form—is eventually converted to cash via the bank account linked to the exchange account, or used to purchase gift cards, which are then used at other exchanges to buy more cryptocurrency. Eventually, once it is harder to trace all the transactions, the money is converted back to regular currency and transferred to North Korea. The Lazarus Group has been linked to 2016’s massive heist against Bangladesh Bank, although the report doesn’t explicitly say this method was used for any of that stolen money.

Traditional is still best, but SWIFT said it expects to see more examples of cryptocurrency being used for money laundering.

]]>
<![CDATA[Attacks Target Critical Flaw in WordPress File Manager Plugin]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/attacks-target-critical-flaw-in-wordpress-file-manager-plugin https://duo.com/decipher/attacks-target-critical-flaw-in-wordpress-file-manager-plugin Tue, 08 Sep 2020 00:00:00 -0400

Attackers are exploiting a critical vulnerability in a popular WordPress plugin that enables an adversary to run arbitrary commands and upload files to a target WordPress site.

The flaw is in the File Manager plugin, which has more than 700,000 active users and is designed to help administrators manage files on their WordPress sites. The plugin includes a third-party library called elFinder and the vulnerability results from the way that File Manager renamed an extension in elFinder.

“The core of the issue began with the File Manager plugin renaming the extension on the elFinder library’s connector.minimal.php.dist file to .php so it could be executed directly, even though the connector file was not used by the File Manager itself. Such libraries often include example files that are not intended to be used “as-is” without adding access controls, and this file had no direct access restrictions, meaning the file could be accessed by anyone. This file could be used to initiate an elFinder command and was hooked to the elFinderConnector.class.php file,” Chloe Chamberland of Wordfence, a WordPress security firm, said in a post on the vulnerability and attacks exploiting it.
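Site owners who want to confirm their own exposure can check whether the renamed connector file is directly reachable on their site. The sketch below is a hedged example: it assumes the plugin path most commonly reported for wp-file-manager, which may differ between installations, and the site URL is a placeholder.

# Hedged sketch: check whether the elFinder connector file shipped by the
# File Manager plugin is directly reachable. The path is the commonly
# reported one and may vary between plugin versions.
import requests

CONNECTOR = "wp-content/plugins/wp-file-manager/lib/php/connector.minimal.php"

def connector_exposed(site: str) -> bool:
    resp = requests.get(f"{site.rstrip('/')}/{CONNECTOR}", timeout=10)
    # A 404 suggests the file is absent or blocked; anything else means the
    # connector can be invoked directly and the plugin should be updated.
    return resp.status_code != 404

print(connector_exposed("https://example.com"))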

The vulnerability was introduced in version 6.4 of File Manager, which was released in May. But it wasn’t until late August that researchers first saw exploit attempts against the bug. An exploit for the vulnerability was posted on GitHub in the last week of August, and it wasn’t until several days later, on Sept. 1, that the maintainers of File Manager released an updated version that fixed the bug. Although the fixed version has been available for a week, researchers say few of the WordPress sites running the plugin have updated.

“Sites not using this plugin are still being probed by bots looking to identify and exploit vulnerable versions of the File Manager plugin, and we have recorded attacks against 1.7 million sites since the vulnerability was first exploited. Although Wordfence protects well over 3 million WordPress sites, this is still only a portion of the WordPress ecosystem. As such, the true scale of these attacks is larger than what we were able to record,” Ram Gall of Wordfence said in a post on Sept. 4.

The severity of the vulnerability makes the need to update quite urgent, especially with automated scans for the bug ongoing. Identifying vulnerable sites is a trivial task and with an exploit publicly available, time is of the essence, particularly given the fact that an attacker would be able to upload arbitrary files to the site after a successful exploit.

“This exploit quickly gained popularity due to its very high impact and low requirements, where we have currently seen hundreds of thousands of requests from malicious actors attempting to exploit it,” Antony Garand of Sucuri said in a post about the flaw.

“The first attack we noticed was on August 31st, one day before the plugin was updated, with an average of 1.5k attacks per hour. On September 1st, we had an average of 2.5k attacks per hour, and on September 2nd we had peaks of over 10k attacks per hour.”

]]>
<![CDATA[CISA Issues Final Order on Federal Vulnerability Disclosure, But Questions Remain]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/cisa-issues-final-order-on-federal-vulnerability-disclosure-but-questions-remain https://duo.com/decipher/cisa-issues-final-order-on-federal-vulnerability-disclosure-but-questions-remain Fri, 04 Sep 2020 00:00:00 -0400

Federal executive branch agencies are officially on the clock and now have six months to develop and publish a vulnerability disclosure policy after the Cybersecurity and Infrastructure Security Agency (CISA) published the final version of a binding operational directive (BOD) requiring VDPs.

The new directive is the result of a long process that included input from researchers, academics, and other private-sector experts, a first for this kind of directive. BOD 20-01 spells out exactly what the agencies’ policies must include and how the agencies will be required to work with vulnerability reporters. While the directive provides a detailed timeline for when agencies must hit specific milestones (VDPs must be published by March 1, 2021) and includes guidance on how to work with researchers and how to track metrics associated with bug reports and remediation, it does not give agencies any direction on how to do the prep work needed to be ready for the influx of reports that will happen once these policies are public. For many organizations, that step can be the most difficult and painful one in the entire process, and an organization that is not properly prepared may find itself buried with vulnerability reports that it does not have the capacity or capability to handle.

“There’s no provision whatsoever to assess your capabilities before you publish your policy. Would you advertise a 911 service if you not only didn’t have any operators standing by to take the calls, but no firefighters and no hydrants? No, you wouldn’t. But that’s what this is,” said Katie Moussouris, CEO and founder of Luta Security, a firm that specializes in helping organizations create and sustain VDPs.

Moussouris led the development of Microsoft’s first bug bounty program and helped the Department of Defense create its Hack the Pentagon bug bounty contest, which later expanded to other parts of the department. While she’s encouraged by the general idea and direction of the directive, she’s concerned that federal agencies will not be properly prepared to deal successfully with a high volume of vulnerability reports if they do not lay the proper groundwork. In fact, the first entry in the directive’s FAQ section contemplates this possibility: “My agency has published a security contact but we don’t yet have a VDP. What should we do with the reports we receive?”

“That’s completely backward. I’m seriously concerned. Bug bounties have captured the hearts and minds of our government. But running a bug bounty program or a VDP has nothing to do with true security maturity. Just because you have a mechanism to hear from people now doesn’t mean you’re ready to deal with them,” she said.

“It may just mean you’re hearing about more security problems. The problem is you’re conflating a VDP with security maturity. You have to do a maturity assessment, find process gaps and address those first.”

The directive requires that VDPs specify which systems are in scope and how to report a vulnerability, and include a statement making it clear that bug reports can be anonymous. And perhaps most importantly for researchers, each policy must include a “commitment to not recommend or pursue legal action against anyone for security research activities that the agency concludes represents a good faith effort to follow the policy, and deem that activity authorized.”

"There’s no truth to the idea that you can build this process in parallel while you’re developing a VDP. It’s not achievable."

That requirement is vital given that some software companies and site owners have shown a propensity for suing or threatening to sue researchers who discover vulnerabilities in their products or sites, even when the researchers report them privately to the company.

“It warms my hacker heart to see that there’s official government writing saying that it’s no longer acceptable for federal agencies to close their ears to vulnerability reports and that researchers are your friends,” Moussouris said.

The first milestone in the directive is that every agency must have a security contact for each .gov domain it owns by Oct. 2. The agencies then have until March 1 to create, refine, and publish their VDPs. From then on, the scope of the VDP must increase by at least one new Internet-accessible system every 90 days for two years, at which point all such systems have to be in scope. While some federal agencies have been using private bug bounty platforms to handle vulnerability reports, any agency starting from scratch will have an uphill climb to develop a policy while also creating internal processes and gathering resources to handle the vulnerability reports.

“An agency is going to have limited people, processes, and technology to deal with this. It’s all coming from a finite pool. It’s an isolated thing in the larger area of security. This could take away resources from higher priority things. I would not want an election authority that is facing nation-state actors dealing with this now. This is not the time,” Moussouris said.

"There’s no truth to the idea that you can build this process in parallel while you’re developing a VDP. It’s not achievable. You have to build it to be efficient and sustainable.”

]]>
<![CDATA[UK Says Children's Apps Must Have Built-in Privacy]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/uk-says-childrens-apps-must-have-built-in-privacy https://duo.com/decipher/uk-says-childrens-apps-must-have-built-in-privacy Thu, 03 Sep 2020 00:00:00 -0400

A new statutory code limiting the amount of data online services can collect from children went into effect in the United Kingdom on Sept. 2. Developers have to make sure data protections are available by default in online applications and services used by children or face potentially high fines.

The Age Appropriate Design Code applies to any businesses providing “online services and products” likely to be used by people in the United Kingdom under 18 years of age. That includes educational websites, messaging services, community forums, social media platforms, streaming services with large audiences of children, makers of connected toys (Internet of Things toys), and game companies and platforms. The code outlines 15 standards for developers to follow so that users—children—have a certain level of privacy by default when visiting a website or opening an app.

“[K]ids are not like adults online, and their data needs greater protection," Information Commissioner Elizabeth Denham told the BBC.

The Information Commissioner’s Office will have the power to fine violators up to 4 percent of their global revenues. Online service providers, app developers, and other relevant businesses have one year to make sure their services and applications are complying with the rules, as enforcement will begin Sept. 2, 2021. The ICO has said it has the power to take more severe actions if necessary.

"The best interests of the child should be a primary consideration when you design and develop online services likely to be accessed by a child," according to the code. Even if the service or device is not explicitly targeted for children, the code’s requirements apply if children are likely to use the service. This expands the type of businesses impacted by the code. For example, streaming services such as Netflix aren’t specifically for children, but provide children’s programming, making the company subject to the rules.

Another thing to consider is the fact that the Children’s Code (as it is also called) defines children as under the age of 18, not 13. This means makers of connected devices such as fitness trackers have to make sure their data policies are compliant if they want to continue selling wearables to teenagers in the UK.

Similar to Europe’s GDPR, the Age Appropriate Design Code will affect businesses outside of the United Kingdom. The code is very clear that it applies to any business with users who are children in the United Kingdom—and in this interconnected world that means any company with any kind of presence in the UK.

Concerns about children’s privacy aren’t limited to that side of the Atlantic Ocean. Last fall, the United States Federal Trade Commission fined YouTube $170 million for collecting data on children under the age of 13 without the consent of their parents.

The Children’s Code requires developers to take into consideration children’s best interests when designing and developing services, to refrain from using children’s data in ways that are detrimental to their well-being, and to ensure that settings default to high levels of privacy. There are a few specific requirements, such as the fact that geolocation must be switched off by default and children’s data cannot be shared unless there is a compelling reason to do so. Dark patterns in user interfaces—methods designed to trick users into making decisions they otherwise would not have (such as making the opt-out link very small and faint to see on a page)—are not allowed.

“Nudge techniques” should not be used to “lead or encourage children to provide unnecessary personal data or weaken or turn off their privacy protections,” according to the code.

The ICO has said it will provide support to businesses trying to make the necessary changes to comply.

“We want children to be online, learning and playing and experiencing the world, but with the right protections in place,” Denham said in a statement. “A generation from now we will all be astonished that there was ever a time when there wasn’t specific regulation to protect kids online. It will be as normal as putting on a seatbelt.”

]]>
<![CDATA[Gartner Warns CEOs Will be Personally Liable for Breaches by 2024]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/gartner-warns-ceos-will-be-personally-liable-for-breaches-by-2024 https://duo.com/decipher/gartner-warns-ceos-will-be-personally-liable-for-breaches-by-2024 Wed, 02 Sep 2020 00:00:00 -0400

Cyberattacks against connected devices that have an impact on the physical world are not yet commonplace, but they are very much in the realm of possibility. Hijacked medical devices may be unable to dispense life-saving drugs, or a connected car could receive instructions to crash itself and potentially injure the human passenger inside.

Changes in the regulatory climate could leave CEOs and other senior executives personally liable for failing to adequately secure connected systems, Gartner said. By 2024, as many as 75 percent of CEOs could be held liable for data breaches if an incident occurred because the organization did not prioritize cybersecurity or invest sufficiently in security, and the breach led to actual physical consequences, the research firm said in a research note.

“Soon, CEOs won’t be able to plead ignorance or retreat behind insurance policies.”

“Regulators and governments will react promptly to an increase in serious incidents resulting from failure to secure CPSs (cyber-physical systems), drastically increasing rules and regulations governing them,” said Katell Thielemann, a research vice president at Gartner.

Gartner defined cyber-physical systems (CPSs), such as the Internet of Things and operational technology, as systems “engineered to orchestrate sensing, computation, control, networking, and analytics to interact with the physical world, including humans.” The list of CPSs includes manufacturing equipment, power grid infrastructure, smart buildings and cities, and connected and autonomous vehicles.

Shift to Physical

Gartner's note focused on how cyberattacks on CPSs could eventually go beyond online disruption and lead to human fatalities and property damage. Ransomware, for example, has evolved from a digital annoyance that cost consumers personal documents and photographs into a life-threatening attack capable of crippling hospital operations and disrupting patient care.

The financial impact of CPS attacks resulting in fatalities will exceed $50 billion by 2023, Gartner predicted.

“The more connected CPSs are, the higher the likelihood of an incident occurring,” Thielemann said.

Attacks against CPSs will increase drastically over the next few years, but many enterprises are not even aware of the CPSs deployed within their networks, Thielemann said. There are many possible reasons: legacy systems may be connected to the network but managed by non-IT teams, or a system may have been set up as part of a business-driven initiative without IT involvement. Technology leaders need to help CEOs understand the risks posed by CPSs, as well as the need to step up investments to secure these systems.

“Soon, CEOs won’t be able to plead ignorance or retreat behind insurance policies,” Thielemann said.

Regulatory Change

For Gartner's prediction to come about, there would need to be significant changes to current laws to define criminal penalties and give regulators sufficient enforcement powers. No one, to date, has gone to jail for security failures. Equifax’s massive data breach affected 143 million consumers, and the company faced lawsuits, regulatory fines, and Congressional scrutiny. The only person who went to jail was the chief information officer, and that was for insider trading that occurred after the breach. No one else in senior management was held liable for the mistakes that led to the breach, or for how it was handled afterwards. All the major data breaches of the past few years—Equifax, Target, Marriott, to name a few—led to regulatory fines and costly changes to the organizations' security programs, but not much else.

"If history is any guide, @Marriott’s mega data breach will be treated like all the others: the company will apologize & offer useless credit monitoring to the victims impacted. The status quo isn’t working," Sen. Ron Wyden (D-Oregon) wrote on Twitter back in 2018 after Marriott announced its data breach.

Wyden sought to impose real punishments on companies and their executives with the Consumer Data Protection Act of 2018, which would have sent senior executives—chief executive officers, chief privacy officers, chief information security officers—to jail for security and privacy failures. The proposed legislation called for establishing minimum privacy and security standards for organizations to follow; creating a national Do Not Track system to let consumers opt out of online data collection and sharing; and giving consumers a way to review the data companies hold about them and correct inaccuracies. Organizations would have faced fines of up to 4 percent of annual revenue for a first offense, and senior executives could have faced 10-to-20-year criminal penalties for failing to safeguard privacy or provide adequate security.

"Corporations don't make decisions, people do, but for far too long, CEOs of giant corporations that break the law have been able to walk away, while consumers who are harmed are left picking up the pieces," Warren said when she announced the bill.

Consumers can sue organizations for failing to safeguard personal information, but consumers are also the ones left grappling with the long-term impact of having their personal and healthcare data stolen. Organizations move on with their reputations a little dented, but still intact. In fact, CEOs "were more likely to receive an increase in total and incentive pay several years after a security breach," Warwick Business School found in a 2019 study that examined data breaches in the United States between 2004 and 2016.

Warren's Corporate Executive Accountability Act sought "criminal liability for negligent executive officers" of companies whose actions led to a data breach affecting the "personal data of 1 percent of the U.S. population or 1 percent of the population of any state.”

There are clear signs that regulators are paying attention. Last year, the Government Accountability Office said the Federal Trade Commission and Consumer Financial Protection Bureau should be given authority to improve oversight of companies like Equifax and punish them when they violate the public trust.

Just last month, federal prosecutors in San Francisco announced criminal charges against Joe Sullivan, the former CISO of Uber, for covering up the 2016 data breach at the ride-sharing company, which affected 57 million drivers and users. The charges are the first against an executive for actions related to a company's security incident.

Companies are unlikely to hold executives accountable for data breaches on their own. There have been shareholder proposals at Disney and Verizon to tie CEO pay to cybersecurity; the companies recommended voting against the proposals, and that was that. But there are signs things may be changing, and Gartner seems to think the changes aren't all that far off.

]]>
<![CDATA[Notarized Malware Slips Into Mac App Store]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/notarized-malware-slips-into-mac-app-store https://duo.com/decipher/notarized-malware-slips-into-mac-app-store Tue, 01 Sep 2020 00:00:00 -0400

One of the key components of Apple’s product security strategy is a requirement that developers sign their apps and submit them to Apple for approval and automated code-scanning before they can appear in the iOS or macOS app stores or, in the case of Mac apps distributed outside the store, be notarized and allowed to run without warnings. The idea is to prevent people from mistakenly installing malicious or dodgy apps, but sometimes things still slip through, and recently an app carrying a notorious piece of malware was notarized by Apple.

The malware, known as OSX.Shlayer, was delivered as the payload of an adware campaign run through a site masquerading as the project page for the open source Homebrew project. Visitors to the fake site were sent through a series of redirects and eventually shown a popup saying that their version of Flash was out of date and that they needed to download the new version to proceed. It’s an old tactic used by malicious site operators and exploit kits to trick people into installing malware, and it has been effective for many years. It is also one of the attack vectors that Apple’s notarization system is designed to cut off, by preventing unsigned and un-notarized apps from running.

But in this case the app that downloads is notarized by Apple, meaning victims’ machines will trust it and allow it to run. A visitor to the fake site, Peter Dantini, noticed what was going on and sent the details to Patrick Wardle, a prolific Apple security researcher and principal security researcher at Jamf, who dug in to see what was happening with the downloaded app. Wardle found that the adware downloads and installs four separate packages, comprising the Shlayer malware that targets Macs. Shlayer has been circulating for a while and is known to masquerade as Adobe Flash Player updates. It’s mainly used to serve unwanted ads to victims but can also steal information.
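
For readers who want to see how a Mac judges a given app, Gatekeeper's assessment can be queried from the command line with the spctl utility. The short Python sketch below simply wraps that tool and prints the verdict and assessment source; the app path is a placeholder, and the script is an illustration rather than anything from Wardle's analysis.

    import subprocess
    import sys

    def assess_app(app_path: str) -> None:
        # Ask Gatekeeper to assess the app; spctl reports whether it is accepted
        # and the assessment source (e.g., a notarized Developer ID) on stderr.
        result = subprocess.run(
            ["spctl", "--assess", "--verbose", app_path],
            capture_output=True, text=True,
        )
        verdict = "accepted" if result.returncode == 0 else "rejected"
        print(f"{app_path}: {verdict}")
        print(result.stderr.strip())

    if __name__ == "__main__":
        # Placeholder path; point it at the app you want to check.
        assess_app(sys.argv[1] if len(sys.argv) > 1 else "/Applications/Example.app")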

“As far as I know, this is a first: malicious code gaining Apple’s notarization ‘stamp of approval’,” Wardle said in his analysis of the incident.

“In Apple’s own words, notarization was supposed to ‘give users more confidence that [software] …has been checked by Apple for malicious components.’ Unfortunately a system that promises trust, yet fails to deliver, may ultimately put users at more risk. How so? If Mac users buy into Apple’s claims, they are likely to fully trust any and all notarized software. This is extremely problematic as known malicious software (such as OSX.Shlayer) is already (trivially?) gaining such notarization!”

Wardle reported the issue to Apple on Aug. 28, and the company revoked the code-signing certificate for the developer. However, later that day the developer used a new certificate to sign new payloads.

“Both the old and ‘new’ payload(s) appears to be nearly identical, containing OSX.Shlayer packaged with the Bundlore adware. However the attackers’ ability to agilely continue their attack (with other notarized payloads) is noteworthy,” Wardle said.

]]>
<![CDATA[Cisco Warns of Exploits Against IOS XR Flaws]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/cisco-warns-of-exploits-against-ios-xr-flaw https://duo.com/decipher/cisco-warns-of-exploits-against-ios-xr-flaw Mon, 31 Aug 2020 00:00:00 -0400

UPDATE--Cisco is warning customers that attackers are actively targeting two serious, unpatched vulnerabilities in the IOS XR software that runs on many of its routers. The flaws do not allow remote code execution or control of a vulnerable device, but an attacker could use either of the bugs to exhaust the process memory of the device.

The IOS XR operating system runs on a wide range of Cisco routers, including network and edge routers used in enterprises and by service providers. The specific vulnerabilities (CVE-2020-3566 and CVE-2020-3569) Cisco is warning about are in the Distance Vector Multicast Routing Protocol that’s part of IOS XR. Cisco originally issued the advisory for one of the flaws on Aug. 29 and on Monday the company updated it to reflect the discovery of a second bug in the DVMRP implementation in IOS XR.

“These vulnerabilities are due to insufficient queue management for Internet Group Management Protocol (IGMP) packets. An attacker could exploit these vulnerabilities by sending crafted IGMP traffic to an affected device. A successful exploit could allow the attacker to cause memory exhaustion, resulting in instability of other processes. These processes may include, but are not limited to, interior and exterior routing protocols,” the Cisco advisory says.

“These vulnerabilities affect any Cisco device that is running any release of Cisco IOS XR Software if an active interface is configured under multicast routing.”

Cisco does not yet have patched versions of IOS XR available and there are no workarounds for the bugs, but there are a number of mitigations customers can implement to lower the risk of exploitation. The baseline mitigation is to implement a rate limit on the volume of IGMP traffic coming into an affected router. Rate limiting doesn’t prevent exploitation of the vulnerability, but it increases the amount of time it takes an attacker to exhaust the target device’s memory.

“As a second line of defense, a customer may implement an access control entry (ACE) to an existing interface access control list (ACL). Alternatively, the customer can create a new ACL for a specific interface that denies DVMRP traffic inbound on that interface,” the Cisco advisory says.

Also, Cisco recommends that customers disable IGMP routing on interfaces where IGMP processing isn’t needed. The company discovered the vulnerabilities during a customer support engagement.
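
The exact commands are platform-specific and spelled out in Cisco's advisory, but the first-line mitigation described above amounts to a token-bucket policer on inbound IGMP packets: traffic above the configured rate is simply dropped, which slows an attacker without removing the underlying flaw. The generic Python sketch below illustrates that mechanism only; it is not Cisco configuration.

    import time

    class TokenBucket:
        """Generic token-bucket rate limiter: allows roughly `rate` packets per
        second with bursts up to `burst`; packets arriving with no tokens are dropped."""

        def __init__(self, rate: float, burst: float):
            self.rate = rate
            self.capacity = burst
            self.tokens = burst
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

    # Example: cap inbound IGMP at 10 packets per second.
    limiter = TokenBucket(rate=10, burst=10)
    accepted = sum(limiter.allow() for _ in range(100))
    print(f"{accepted} of 100 back-to-back packets accepted")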

"On August 28, 2020, the Cisco Product Security Incident Response Team (PSIRT) became aware of attempted exploitation of these vulnerabilities in the wild," Cisco said in the advisory.

_This article was updated on Sept. 1 to include information on the second vulnerability._

]]>
<![CDATA[Bug Allows Theft of Local Files Via Safari]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/bug-allows-theft-of-local-files-via-safari https://duo.com/decipher/bug-allows-theft-of-local-files-via-safari Wed, 26 Aug 2020 00:00:00 -0400

After being frustrated with Apple’s long timeline for issuing a patch, a security researcher has released details of a bug in Safari on both iOS and macOS that allows an attacker to extract sensitive information from a victim’s machine through the web share API in the browser.

The API is designed to allow individuals to share content from their browsers through other apps, such as email or messaging apps. Security researcher Pawel Wylecial discovered that the API has some odd behavior that enables an attacker to hide some functionality from the victim, specifically the ability to share a file without the victim’s knowledge.

“The problem is that file: scheme is allowed and when a website points to such URL unexpected behavior occurs. In case such a link is passed to the navigator.share function an actual file from the user file system is included in the shared message which leads to local file disclosure when a user is sharing it unknowingly,” Wylecial said in a post on the vulnerability.

After looking into the behavior, Wylecial found that by creating a specially designed website with the web share API enabled, he could extract a file such as the password file from the victim’s machine and share it if the victim clicked on the share link. For example, if the victim chooses to share the link via the Messages app on macOS, the attachment in the window that opens has no file name, so the victim would not immediately realize what content was being shared.

He also found that he could grab a victim’s browsing history from Safari on iOS using the same vulnerability.

“I thought about a more useful scenario on how this bug could be used to extract sensitive information as a passwd file is only good for demonstration. It had to be something accessible from Safari app so browser history seemed like a good candidate to exfiltrate. In order to achieve that we only needed to change the url value to the following: file:///private/var/mobile/Library/Safari/History.db,” he said.

Wylecial discovered the issue in April and reported it to Apple on April 17. The company acknowledged the report a few days later and said it would investigate. But after a few weeks of communication, Wylecial said, Apple stopped replying to his requests for status updates. In early August Wylecial informed Apple that he planned to disclose the bug on Aug. 24, and a few days later Apple asked him to delay the disclosure because the company planned to fix the issue in its spring 2021 security update. Wylecial replied that “waiting with the disclosure for almost an additional year, while 4 months already have passed since reporting the issue is not reasonable”.

He disclosed the vulnerability on Monday and on Tuesday an Apple engineer committed a patch for the issue to the WebKit project, the framework on which Safari is built. Wylecial said he has not had a chance to analyze the patch yet and has not heard anything more from Apple since disclosing the flaw.

]]>
<![CDATA[CISA Releases 5G Security Strategy]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/cisa-releases-5g-security-strategy https://duo.com/decipher/cisa-releases-5g-security-strategy Wed, 26 Aug 2020 00:00:00 -0400

The United States Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) has released its strategy for securely deploying 5G in the United States, in support of the broader National Strategy to Secure 5G.

5G is making a lot of promises. The latest generation of cellular mobile communications is expected to provide near-instantaneous connectivity at higher data rates and low latency necessary to support technologies such as virtual and augmented reality, autonomous vehicles, and smart cities.

“5G networks and future communications technologies (e.g., SDN, network slicing, edge computing) will transform the way we communicate, introducing a vast array of new connections, capabilities, and services. However, these developments introduce significant risks that threaten national security, economic security, and impact other national and global interests,” CISA said.

Telecommunications providers around the world are gearing up for the change, but there are concerns 5G would create new security threats and exacerbate existing ones. CISA’s strategy guide is intended to provide recommendations on deploying secure and resilient 5G networks.

The National Strategy to Secure 5G defined four lines of effort, and CISA's strategy lays out five strategic initiatives to implement them. The lines of effort are: Facilitate Domestic 5G Rollout; Assess Risks to and Identify Core Security Principles of 5G Infrastructure; Address Risks to United States Economic and National Security During Development and Deployment of 5G Infrastructure Worldwide; and Promote Responsible Global Development and Deployment of 5G.

The five strategic initiatives center on developing 5G policy and standards with an emphasis on security and resilience; increasing awareness of 5G supply chain risks and promoting security measures; securing existing infrastructure to support future 5G deployments; encouraging innovation to foster trusted 5G vendors; and analyzing use cases and sharing risk management strategies. Each initiative has its own set of objectives.

“Each of the strategic initiatives address critical risks to secure 5G deployment, such as physical security concerns, attempts by threat actors to influence the design and architecture of the network, vulnerabilities within the 5G supply chain, and an increased attack surface for malicious actors to exploit weaknesses,” CISA said.

The goal is to deploy 5G networks that are secure and resilient so that threat actors won’t be able to attack the network architecture. The problem is that, in the short term, 5G will be rolled out on non-standalone networks and will coexist with older communications technologies. This means the legacy vulnerabilities associated with 4G LTE can still affect 5G networks, even though 5G was designed with some security defenses. The transition to standalone 5G networks is expected to take several years.

The supply chain remains a big issue, since adversaries can weaken the network by injecting compromised components such as counterfeit parts and malicious software and hardware. Supply chain issues can also arise from poor designs, manufacturing processes, and maintenance procedures.

Whoever controls the equipment controls the networks, which is why encouraging innovation and ensuring there are enough vendors in the marketplace for healthy competition is an integral part of the national strategy.

“This defensive strategy is about the ‘nodes,’ the devices and their applications, in the network rather than merely the ‘links,’" William Hugh Murray, a member of the SANS Institute editorial board, said in the institute’s news summary. Much of the previous discussion of 5G networks was driven by the carriers, so the questions were always about connectivity. The strategy shifts responsibility to the developers and managers, since the issues are about applications and devices. “These are the responsibility of the developers and managers of the applications, not the carriers.”

]]>
<![CDATA[Medical Data Leaks Linked to Hardcoded Credentials in Code]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/medical-data-leaks-linked-to-hardcoded-credentials-in-code https://duo.com/decipher/medical-data-leaks-linked-to-hardcoded-credentials-in-code Mon, 24 Aug 2020 00:00:00 -0400

Data belonging to an estimated 150,000 to 200,000 patients were exposed in at least nine GitHub repositories—the result of improper access controls and hardcoded credentials in source code, according to a report from DataBreaches.net.

Jelle Ursem, a security researcher from the Netherlands, worked with DataBreaches.net after finding credentials to databases and services containing healthcare data in public GitHub repositories. The impacted organizations included medical clinics and hospitals, billing services companies, and other service providers.

Ursem searched GitHub and found repositories with hardcoded credentials for systems such as databases, Microsoft Office 365, and Secure File Transfer Protocol (SFTP) hosts. Ursem was able to use those credentials to directly access the systems and view patient data.

“Once logged in to a Microsoft Office365 or Google G Suite environment, Ursem is often able to see everything an employee sees: contracts, user data, internal agendas, internal documents, emails, address books, team chats, and more,” the report said.

The title of the report is an apt summary of what Ursem did: No Need to Hack When It’s Leaking. Ursem used valid credentials to log in to the services. A handful of common mistakes gave him access to the medical data. Developers had embedded hard-coded login credentials in code rather than using a separate configuration file on the server. For email accounts and other online services, two-factor authentication was not enabled. At least one case involved an abandoned repository: the organization no longer needed the data but had kept the repository around instead of deleting it.
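
The fix for the most common of those mistakes is mundane: keep secrets out of source entirely and load them at runtime from the environment, a configuration file outside the repository, or a secrets manager. A minimal Python sketch of that pattern follows; the variable names and connection string are purely illustrative.

    import os

    # The anti-pattern the report describes: a credential committed to a public repo.
    # DB_PASSWORD = "hunter2"   # never do this

    # Instead, read secrets from the environment (or a vault) at runtime, so the
    # repository contains no credentials and access can be rotated independently.
    DB_HOST = os.environ.get("DB_HOST", "localhost")
    DB_USER = os.environ["DB_USER"]          # fails loudly if the secret is missing
    DB_PASSWORD = os.environ["DB_PASSWORD"]

    def connection_string() -> str:
        return f"postgresql://{DB_USER}:{DB_PASSWORD}@{DB_HOST}/patients"

    if __name__ == "__main__":
        print("Connection configured for host", DB_HOST)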

One database from a major regional clinic contained 1.3 million records. That database was exposed because Ursem was able to find the URL to the admin console of the electronic health record system being used.

For a software and services consulting company, a developer had included system credentials in code committed to a public repository. Ursem was able to eventually gain access to the vendor’s billing back offices, including data associated with nearly 7,000 patients and over 11,000 health insurance claims. It is unclear whether this data leak was ever reported to the Department of Health and Human Services.

“[Hackers] can find a large number of records in just a few hours of work, and this data can be used to make money in a variety of ways,” the report said.

Developers need to be reminded—and trained—not to embed credentials or access tokens in code that gets pushed to public repositories. GitHub and other code-sharing services have introduced built-in scans and checks to help detect when credentials are being committed in code, but it is an ongoing problem. There should be regular security audits of all code to catch the instances when mistakes are made. Organizations also don’t have to default to public repositories—where the code can be viewed by anyone—if there is no business need for publicly accessible code.

The report has more details about the kind of mistakes the developers made as well as gaps in the security audits. It is a starting point for healthcare organizations to understand what processes they need to change and the types of mistakes to look for.

According to the report, Ursem struggled to notify the nine impacted organizations because there was no way to contact them or because they didn’t respond. Organizations need to make sure there is a clear reporting path so that they can find out when their code is improperly exposed. That could mean posting a public email address that is monitored regularly, or giving customer support teams a clear escalation path for when these reports come in. Organizations also need to make sure their partners and contractors know how to handle such reports.

“[At] least three of the nine entities intentionally did not respond to early notification attempts and would later claim that they had been fearful the notifications were a social engineering attack. Their failure to respond left PHI exposed even longer,” the report said.

Data leaks caused by information being stored online with insufficient controls are increasingly common. In some cases the organization did not configure its cloud servers properly or did not realize that access controls were missing. In many others, the leaks were the result of organizations not knowing how their third-party suppliers and contractors handled their data. Organizations should ask their suppliers and contractors for audits to make sure those partners are also properly locking down how they use code repositories.

Administrators should routinely search GitHub for their firm’s name and domain names to see what comes up.
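
That search can be partly automated through GitHub's code-search API. The hedged Python sketch below queries the API for a single search term; the token, the domain being searched for, and the result handling are all placeholders, and an authenticated token is required for code search.

    import os
    import requests

    GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]   # a personal access token (placeholder)
    QUERY = "example-clinic.com"                # your company name or domain

    def search_github_code(query: str) -> None:
        # Query GitHub's code-search API; each hit is public code mentioning the term.
        resp = requests.get(
            "https://api.github.com/search/code",
            params={"q": query, "per_page": 20},
            headers={
                "Accept": "application/vnd.github+json",
                "Authorization": f"Bearer {GITHUB_TOKEN}",
            },
            timeout=30,
        )
        resp.raise_for_status()
        for item in resp.json().get("items", []):
            print(f"{item['repository']['full_name']}: {item['path']}")

    if __name__ == "__main__":
        search_github_code(QUERY)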

“[Even] if you do not use a developer, one of your business associates or vendors might,” the report said.

]]>
<![CDATA[Serious DoS Bug Patched in BIND 9]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/serious-dos-bug-patched-in-bind-9 https://duo.com/decipher/serious-dos-bug-patched-in-bind-9 Mon, 24 Aug 2020 00:00:00 -0400

Several recent versions of the BIND name server are vulnerable to a remotely exploitable flaw that can cause the server to crash repeatedly, resulting in a denial of service. The vulnerability, along with several other less-serious ones, has been fixed in updated versions of BIND.

The DoS vulnerability (CVE-2020-8620) affects BIND 9.16.1 through 9.17.1 and it’s easily exploitable without any authentication. The bug doesn’t allow remote code execution or any privileged access to the BIND server, but could be used to knock the target server offline.

“An assertion failure exists within the Internet Systems Consortium's BIND server, versions 9.16.1 through 9.17.1 when processing TCP traffic via the libuv library. Due to a length specified in a callback for the library, flooding the server's TCP port used for larger DNS requests (AXFR) can cause the libuv library to pass a length to the server which will violate an assertion check in the server's verifications,” says an advisory by Emanuel Almeida of Cisco Systems, who discovered the flaw.

“This assertion check will terminate the service resulting in a denial of service condition. An attacker can flood the port with unauthenticated packets in order to trigger this vulnerability.”

BIND is the most widely deployed DNS name server on the Internet and is used in a huge variety of organizations, including enterprises, government agencies, and others. The Internet Systems Consortium, which maintains BIND, has released versions 9.16.6 and 9.17.4 to fix this issue and said it is not aware of any active exploits against the vulnerability.

Those new versions of BIND also contain patches for several other vulnerabilities, two of which can be used to crash a target server. Those two vulnerabilities are similar, but have different attack vectors. One of the flaws (CVE-2020-8622) can be exploited in two different ways, but with the end result in both cases being that the server exits.

“An attacker on the network path for a TSIG-signed request, or operating the server receiving the TSIG-signed request, could send a truncated response to that request, triggering an assertion failure, causing the server to exit,” the BIND advisory says.

“Alternately, an off-path attacker would have to correctly guess when a TSIG-signed request was sent, along with other characteristics of the packet and message, and spoof a truncated response to trigger an assertion failure, causing the server to exit.”

The other vulnerability only affects BIND servers that are configured with both the “forward first” and QNAME minimization options enabled.

“While query forwarding and QNAME minimization are mutually incompatible, BIND did sometimes allow QNAME minimization when continuing with recursion after 'forward first' did not result in an answer. In these cases the data used by QNAME minimization might be inconsistent, leading to an assertion failure, causing the server to exit,” the advisory says.

]]>
<![CDATA[EU Delays GDPR Decision in Twitter Case]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/eu-delays-gdpr-decision-in-twitter-case https://duo.com/decipher/eu-delays-gdpr-decision-in-twitter-case Fri, 21 Aug 2020 00:00:00 -0400

Irish privacy regulators are still trying to finalize the decision over how Twitter handled a security incident in 2018. The delay is because the final details still have to be hammered out with privacy regulators from other European Union countries.

The Irish Data Privacy Commission has been investigating the security incident in which a bug in the Twitter Android app made some users’ protected tweets public. The case against Twitter alleged the company did not report the breach within 72 hours, in violation of the EU’s General Data Protection Regulation. The investigation was completed earlier this year, and the Irish regulators submitted a draft decision to other EU data protection authorities in May.

Under the EU’s General Data Protection Regulation, a regulator from one country takes the lead role in privacy cases that span across borders. However, before issuing a final decision, the main regulator has to share its draft decision with other EU regulators that could claim jurisdiction and take their feedback into consideration.

Twitter, like many other tech companies, has its European headquarters in Dublin, which is why the Irish Data Privacy Commission is its lead privacy regulator in the EU. Since the security incident involved citizens of other European countries, it was a cross-border case.

“A number of objections were raised,” the Irish regulator said in a brief statement. “However, following consultation a number of objections were maintained and the (Irish Data Privacy Commission) has now referred the matter to the European Data Protection Board.”

The European Data Protection Board is an independent body representing the bloc’s privacy regulators. The EDPB has one month to broker a two-thirds majority among member states, and one month after that to reach an absolute majority. If all that fails, the chair of the board will cast the deciding vote. It may be November before Twitter learns its fate.

GDPR gives regulators broad authority over penalties and enforcement actions, and it significantly raises the maximum monetary fines. Companies that don’t disclose breaches and incidents in a timely manner can be fined up to 10 million euros ($12 million) or 2 percent of annual revenue, whichever is higher.

Twitter reported revenues of $3.46 billion in 2019, which means a potential fine could be as high as $69 million.
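
The arithmetic behind that estimate is simple: the higher of the two caps applies, and 2 percent of Twitter's reported revenue comfortably exceeds the 10 million euro floor. A quick Python check, using the figures cited above:

    # Figures from the article: 2019 revenue of $3.46 billion, and a floor of
    # 10 million euros (roughly $12 million at the time).
    annual_revenue_usd = 3.46e9
    eur_floor_usd = 12e6

    max_fine = max(eur_floor_usd, 0.02 * annual_revenue_usd)
    print(f"Potential maximum fine: ${max_fine / 1e6:.1f} million")  # about $69.2 million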

The ruling will be the first involving a U.S. technology company since GDPR took effect in 2018. Ireland is currently working through complaints against Apple, Facebook, Google, and LinkedIn. WhatsApp sharing user data with Facebook is one of the cases the Irish regulators have been investigating.

The statement did not specify what kind of objections the regulators in other countries raised. The question at the heart of the case was not about specific business practices such as data mining and storage, but specifically over breach reporting.

The disagreement and wrangling over Twitter’s case may be a hint of what will happen with two dozen or so other investigations the Irish watchdog is trying to wrap up involving U.S. technology companies. GDPR’s effectiveness can be weakened if all cases take this long to work through the system.

There has been some discussion about whether GDPR needs to be fixed or modified, and the slow pace of enforcement is one of the biggest questions. While it is important for privacy regulators to be deliberate in their rulings, so that cases don't get tied up in appeals for years, it is also problematic if companies never actually see enforcement actions under the law. There are also concerns that having a single regulator take the lead on cross-border cases may not be the ideal arrangement, especially when a single country is carrying a large number of cases.

]]>
<![CDATA[GDPR Lawsuit Targets Oracle, Salesforce Use of AdTech Cookies]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/gdpr-lawsuit-targets-oracle-salesforce-use-of-adtech-cookies https://duo.com/decipher/gdpr-lawsuit-targets-oracle-salesforce-use-of-adtech-cookies Thu, 20 Aug 2020 00:00:00 -0400

A consumer privacy campaign group has filed a lawsuit against Salesforce and Oracle for allegedly violating the European Union’s General Data Protection Regulation over the companies' use of data collected by third-party cookies.

The lawsuit from the Privacy Collective claimed that Oracle and Salesforce were collecting personal data without proactive user consent and then selling the information to other companies via an auction without users’ knowledge. The Privacy Collective filed the class-action lawsuit in Amsterdam and plans to file another in London later this month.

The crux of the lawsuit focuses on how companies share information about internet users through “real-time bidding,” an auction process used in online advertising to dynamically determine which ads get displayed to users. When a user visits a page, the publisher offers the advertising space to advertisers in an auction, and provides information about the user to help advertisers decide how much to bid. Hundreds of advertisers take part in the auction and can access that information. The personal data—frequently collected via third-party cookies and other tracking technologies—may include location, device identifiers, and general demographics such as the gender and age of the potential viewer. Only the auction winner’s ad is displayed to the user, but all the advertisers taking part can see the data. The auction and bidding happen in milliseconds, hence the name “real-time” bidding.
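
To make the mechanism concrete, here is a deliberately simplified, hypothetical sketch of an RTB auction in Python: every participating bidder receives the user profile before bidding, and only the highest bid wins the impression, which is exactly the data-exposure pattern the lawsuit targets. The bidder names, profile fields, and pricing are invented for illustration.

    import random

    def run_rtb_auction(user_profile: dict, bidders: list[str]) -> tuple[str, float]:
        """Toy auction: every bidder sees the user profile, whether or not
        its ad is ultimately the one shown."""
        bids = {}
        for bidder in bidders:
            # In real RTB, the bid request carrying the profile is broadcast to
            # hundreds of demand-side platforms within milliseconds.
            bids[bidder] = round(random.uniform(0.10, 2.50), 2)
        winner = max(bids, key=bids.get)
        return winner, bids[winner]

    profile = {"location": "Amsterdam", "device_id": "abc-123", "age_range": "30-39"}
    winner, price = run_rtb_auction(profile, ["dsp_a", "dsp_b", "dsp_c"])
    print(f"{winner} wins the impression at ${price:.2f}; every bidder saw the profile")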

“There’s a lot of conduct going on behind the scenes that the average internet user has no knowledge of,” Christiaan Alberdingk Thijm, co-founder of Bureau Brandeis, the firm representing the claimants, told DutchNews.nl. “They have all this information and you are put into a certain ‘audience’. On the basis of this shadow identity they will ensure you see, read, listen to and buy for a certain price what they think is fit for you.”

The Privacy Collective alleged that Oracle and Salesforce used the BlueKai and Krux third-party cookies to misuse consumers’ personal data. The cookies are used for dynamic ad pricing services and can be found on a range of websites including Amazon, Booking.com, Dropbox, Ikea, Reddit, and Spotify. Oracle acquired BlueKai in 2014 and Salesforce acquired Krux in 2016.

Under GDPR, organizations must obtain explicit—freely given, specific, informed, and unambiguous—consent to place cookies on user devices. Privacy groups claim that advertisers do not properly obtain consent to place the cookies and other tracking technologies that collect personal data for use in RTB. The mere fact that the information is broadcast to so many other advertisers is the opposite of privacy by default, and the opposite of what is meant when users are asked to give informed consent on how their information is used.

Many companies use the information to build a profile of the user (a male in his 30s living in an urban area who is married and likes hiking), which goes beyond individual data points because it reveals preferences and online activities. All of that information can eventually be linked into a universal profile for each consumer. The Privacy Collective claimed that the use of third-party cookies and RTB results in unlawful processing of users’ personal data without proper consent. Users cannot avoid having their information compiled into a profile, and cannot control how the personal details are used.

"This global ID practice is illegal under GDPR, and this lawsuit is bringing it to light," Dutch data management company Relay42 wrote. "It's about more than just operating outside of consumer expectations or understanding—consumers flat-out have not given permission for their data to be used in this way."

“Salesforce disagrees with the allegations and intends to demonstrate they are without merit. Our comprehensive privacy program provides tools to help our customers preserve the privacy rights of their own customers,” Salesforce told the Dutch news site.

“As Oracle previously informed the Privacy Collective, Oracle has no direct role in the real-time bidding process, has a minimal data footprint in the EU, and has a comprehensive GDPR compliance program,” Dorian Daley, Oracle executive vice president and general counsel, said in a statement.

Earlier this month, a group of Congressional lawmakers urged the Federal Trade Commission to look into whether real-time bidding violated federal laws barring unfair and deceptive business practices.

“The significance of the ruling, when it comes, cannot be overstated,” Elizabeth Kilburn, an associate at the law firm Wedlake Bell, wrote for Computer Business Review. Enterprises need to consider the role they play in adtech and their use of RTB. Companies should “review processes, systems and documentation” relating to adtech and assess what special categories of personal data are being processed in connection with RTB, she recommended.

]]>