Facial recognition technology is still in its infancy, but it is beginning to experience the problems, awkwardness, and growing pains normally associated with adolescence.
Recently, Microsoft President Brad Smith made a public appeal for the federal government to establish some regulation of facial recognition systems, a technology that Microsoft itself sells. Smith said that while the technology is sophisticated, it can be put to many different uses, and when it makes mistakes, those errors can be catastrophic.
“It seems especially important to pursue thoughtful government regulation of facial recognition technology, given its broad societal ramifications and potential for abuse. Without a thoughtful approach, public authorities may rely on flawed or biased technological approaches to decide who to track, investigate or even arrest for a crime,” Smith said.
Smith’s comments now seem prophetic in light of a new study conducted by the American Civil Liberties Union, which tested the accuracy of Amazon’s Rekognition facial recognition system. The ACLU used Rekognition to see how accurately it could identify the photos of every current member of Congress when compared against a database of 25,000 photos of people who have been arrested, using the default search settings in Amazon’s software. The system misidentified 28 members of Congress as people in the mugshot dataset, an error rate of about five percent.
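The reported rate is easy to sanity-check: a back-of-the-envelope sketch, assuming all 535 sitting members of Congress (435 House, 100 Senate) were included in the test.

```python
# Sanity check of the error rate from the ACLU's Rekognition test:
# 28 false matches against the mugshot database, out of 535 members tested.
members_tested = 535   # 435 House + 100 Senate (assumed full membership)
false_matches = 28

error_rate = false_matches / members_tested
print(f"{error_rate:.1%}")  # → 5.2%, i.e. "about five percent"
```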
“Matching people against arrest photos is not a hypothetical exercise. Amazon is aggressively marketing its face surveillance technology to police, boasting that its service can identify up to 100 faces in a single image, track people in real time through surveillance cameras, and scan footage from body cameras,” Jacob Snow, an attorney at the ACLU, wrote in a post on the study’s results.
"People aren’t properly skeptical of reports they get from these systems and that can end in real tragedy if the tech doesn’t work.”
One of the main criticisms of facial recognition systems is that they’re just not accurate enough to be used in critical applications, such as identifying criminal suspects. While these systems are adept at identifying white people, especially men, they’re much less accurate in identifying people of color. A study published earlier this year by researchers from MIT and Microsoft found that Microsoft’s facial recognition system had an error rate of 21 percent in identifying women of color and another vendor’s system failed nearly 35 percent of the time at that task.
False positives are a problem in any security system, but in facial recognition a misidentification carries distinct stakes: it can mean the wrong person is tracked, investigated, or arrested.
“Can law enforcement just do a fishing expedition on facial data now? It doesn’t work very well, especially for people of color and women. We know that societal biases are already being replicated in the way these systems work,” EFF Executive Director Cindy Cohn said in an interview with Decipher last week.
“People over-rely on information they get from computers. It’s also true for law enforcement. People aren’t properly skeptical of reports they get from these systems and that can end in real tragedy if the tech doesn’t work.”
Cohn said she doesn’t envision any government regulation of these systems in the near future, but the ACLU’s Snow said Congress should consider halting the use of facial recognition for the time being.
“This technology shouldn’t be used until the harms are fully considered and all necessary steps are taken to prevent them from harming vulnerable communities,” Snow said.