After the Internal Revenue Service (IRS) halted a plan to verify taxpayer identities using a third-party facial recognition platform called ID.me, senators and privacy experts alike are raising concerns about how government agencies are using biometric technology.
The IRS previously took the brunt of criticism from lawmakers like Reps. Ted Lieu and Yvette Clarke for a lack of transparency around its partnership with ID.me, which provides identity proofing, authentication, and group affiliation verification for government agencies and businesses, according to its website. While the IRS has since reversed its use of ID.me's facial recognition software for verification purposes, the Department of Labor, the Social Security Administration and up to 27 states reportedly still have contracts in place with the company for facial verification purposes.
In a letter to Department of Labor Secretary Martin Walsh this week, Sen. Ron Wyden (D-Ore.) pushed back against the department's use of ID.me, citing privacy and civil liberties concerns as well as the outsourcing of core technology infrastructure to the private sector. Meanwhile, a letter from 45 privacy groups, ranging from the American Civil Liberties Union (ACLU) to the Electronic Frontier Foundation (EFF), called on federal and state agencies to halt the use of facial recognition identity verification services, stressing that the “third-party technology should not be forced upon individuals by government agencies.”
Caitlin Seeley George, campaign director with Fight for the Future, said she is seeing an ongoing swell of pushback against facial recognition technology being used in both the public and private sectors.
“This is part of a larger trend we’ve seen over the past few years as facial recognition has spread,” said George. “This technology has popped up in people’s daily lives at restaurants, stores, banks and workplaces. As the uses have been spreading, people have been more aware and concerned about the impact it has on the privacy of their sensitive biometric data.”
Biometric identification platforms are widely used, in scenarios ranging from storefront surveillance for advertising purposes to Facebook's (discontinued) feature that identified people in photo albums and suggested users “tag” them. Law enforcement organizations and federal agencies have also used the technology to assist in criminal investigations. A Government Accountability Office (GAO) report from July found that of 42 federal agencies surveyed that employ law enforcement officers, six reported using facial recognition technology to help identify people suspected of violating the law during the civil unrest following the death of George Floyd in 2020, while three acknowledged using it on images of the U.S. Capitol attack on Jan. 6, 2021.
Unlike credit card numbers or passwords, the biometric data that facial recognition systems rely on can't be replaced if compromised, leaving privacy advocates unsettled that the technology is being adopted so rapidly, and with so little oversight, by both government agencies and private companies.
“Right now, facial recognition is the wild west,” said Albert Fox Cahn, founder and executive director with the Surveillance Technology Oversight Project (S.T.O.P.). “We are seeing growing reports of the technology being used, but there’s no comprehensive directory of what law enforcement agencies use it. Even more alarmingly, when police do use the technology, they are free to make it up as they go along, writing their own rules about how to use facial recognition, with no oversight by elected officials, the courts, or the public.”
End-user consent for data collection, a core piece of major data privacy regulations like GDPR, poses an array of unique challenges for facial recognition. While storefronts can post a notice alerting consumers to the use of facial recognition services when they enter the premises, George said “it’s an unfair expectation that people would just be able to opt out of being in a situation where their biometric information is scanned," especially as facial recognition spreads into settings like hospitals and airlines.
“This was a critical piece of the IRS situation with ID.me, that people didn’t have another option,” said George. “When it comes to consent, we’re concerned about the onus that it puts on the individual to understand what they’re consenting to, and the realities of people actually giving consent.”
Another issue is how this data, once collected, is securely maintained and whether it is shared with other organizations. In its July report, the GAO found that while some agencies (like the Federal Bureau of Prisons and the Department of Veterans Affairs Police Service) owned their own facial recognition systems, 14 agencies (including U.S. Immigration and Customs Enforcement, the Internal Revenue Service’s Criminal Investigation Division, the U.S. Capitol Police and the Drug Enforcement Administration) used another entity’s systems, opening them up to third-party privacy and accuracy risks.
“It is concerning that so many state and federal government agencies have outsourced their core technology infrastructure to the private sector,” said Wyden in his letter to Walsh this week. “Quite simply, the infrastructure that powers digital identity, particularly when used to access government websites, should be run by the government, and certainly not a company with a track record of misleading the public.”
At the heart of privacy concerns is how facial recognition technology is being used, including for surveillance and law enforcement purposes. Part of this concern is the potential racial bias of such systems; inaccuracies in facial recognition algorithms can lead to disturbing scenarios, including mistaken arrests, privacy experts have pointed out.
Beyond misidentifications and inaccuracies, “facial recognition is biased, broken, and when it does work, it’s a threat to democracy,” said Cahn. “When facial recognition is accurate, it becomes the perfect tool of authoritarianism, a way to track tens of thousands of people through a single image. A way to cheaply and easily track our movements, whether it be where we protest, where we pray, or pursue nearly any other aspect of life.”
Overall, privacy experts say they are seeing a broader wave of pushback against invasive uses of biometrics beyond this week’s efforts against ID.me, including a group of senators writing to Secretary of Homeland Security Alejandro Mayorkas asking for support in stopping federal agencies from using facial recognition tools like Clearview AI, and the Texas attorney general this week filing a lawsuit against Facebook parent Meta over its (now discontinued) use of facial recognition technology.
An earlier wave of pushback against facial recognition in 2020 had a significant impact on several big tech companies: Amazon placed a one-year moratorium on police use of its facial recognition tool, Microsoft said it would stop selling its facial recognition technology to police departments until federal regulations exist, and IBM discontinued its facial recognition system entirely.
“The IRS-ID.me incident was major, and it helped prompt one of the strongest reactions we’ve seen yet on a federal level, which explains why the IRS quickly realized they made the wrong decision,” said Sean Vitka, senior policy counsel for Demand Progress. "Facial recognition platforms are simply not consistent with the public good. These are not tools that have been used for good in history."
While no comprehensive federal regulation of facial recognition technology yet exists, lawmakers have made previous efforts to curb invasive biometrics. The Biometric Information Privacy Act (BIPA), passed by the state of Illinois in 2008, regulates private organizations’ collection, storage and use of biometric information. The Facial Recognition and Biometric Technology Moratorium Act of 2021, proposed in June, aims to go a step further by giving Congress direct oversight of facial recognition technology: under the proposed bill, federal agencies could use facial recognition technologies only after Congress gives explicit authorization, and individuals “aggrieved by a violation of these restrictions” would have the right to sue.
While efforts like BIPA have helped call attention to privacy issues and put a price on the misuse of facial recognition, privacy advocates like Fight for the Future’s George stressed that outright bans by cities and states will have the biggest impact. So far, more than 20 such bans have been passed by cities and states across the country, including in Vermont, Virginia and Maine. And while many of these local bans target government or law enforcement use, Portland, Oregon, in 2020 prohibited the use of facial recognition by private actors in places of public accommodation.
“Bans and moratoria are the cleanest way to protect people and their rights,” said George. “Especially through the pandemic, we’ve seen companies pushing facial recognition as a tool of safety, and convenience was part of that messaging. Now, people are just realizing that this isn’t about convenience. It feels uncomfortable. People are pushing back, and there’s a growing opportunity to stop this technology from spreading.”