Amazon is under fire after the American Civil Liberties Union ran an experiment using Amazon’s facial recognition software. The ACLU found that the software incorrectly matched 28 members of Congress with mugshots of people who had previously been arrested.
When we reached out to Amazon for a response to the ACLU’s Thursday report, a spokesperson provided a statement saying that its Rekognition software is designed to be used in conjunction with human review, and that the ACLU used the wrong confidence settings in its test.
Amazon’s full statement has been reproduced below:
“We have seen customers use the image and video analysis capabilities of Amazon Rekognition in ways that materially benefit both society (e.g. preventing human trafficking, inhibiting child exploitation, reuniting missing children with their families, and building educational apps for children), and organisations (enhancing security through multi-factor authentication, finding images more easily, or preventing package theft). We remain excited about how image and video analysis can be a driver for good in the world, including in the public sector and law enforcement.”
“With regard to this recent test of Amazon Rekognition by the ACLU, we think that the results could probably be improved by following best practices around setting the confidence thresholds (this is the percentage likelihood that Rekognition found a match) used in the test. While 80% confidence is an acceptable threshold for photos of hot dogs, chairs, animals, or other social media use cases, it wouldn’t be appropriate for identifying individuals with a reasonable level of certainty. When using facial recognition for law enforcement activities, we guide customers to set a threshold of at least 95% or higher.”
“Finally, it is worth noting that in real world scenarios, Amazon Rekognition is almost exclusively used to help narrow the field and allow humans to expeditiously review and consider options using their judgement (and not to make fully autonomous decisions), where it can help find lost children, restrict human trafficking, or prevent crimes.”
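The dispute between Amazon and the ACLU turns on a single parameter: the confidence (or similarity) threshold, a 0–100 score below which a candidate match is discarded. A minimal sketch of how such a filter behaves at the two thresholds in question — note that the scores below are invented for illustration, and `matches_above` is a hypothetical helper, not Amazon's code or the Rekognition API:

```python
# Hypothetical similarity scores (0-100) that a face-matching service might
# return for one probe photo compared against a mugshot database.
# These numbers are made up for illustration only.
candidate_scores = [99.1, 96.4, 91.7, 88.2, 83.5, 80.9, 72.3]

def matches_above(scores, threshold):
    """Keep only candidates whose similarity meets the confidence threshold."""
    return [s for s in scores if s >= threshold]

# At the 80% default the ACLU used, six of the seven candidates
# count as "matches"...
print(len(matches_above(candidate_scores, 80)))  # 6

# ...while at the 95% threshold Amazon recommends for law enforcement,
# only two survive.
print(len(matches_above(candidate_scores, 95)))  # 2
```

The same candidate list yields very different "match" counts depending on the threshold, which is why both sides treat the default setting as central to the argument.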
ACLU attorney Jacob Snow responded in a statement:
“Amazon seems to have missed, or refuses to acknowledge, the broader point: facial recognition technology in the hands of government is primed for abuse and raises significant civil rights concerns. It could allow – and in some cases has already enabled – police to determine who attends protests, ICE to continuously monitor immigrants, and cities to routinely track their own residents, whether they have reason to suspect criminal activity or not. Changing the threshold from 80 to 95 per cent doesn’t change that. In fact, it could exacerbate it.
“In addition to remaining silent on these very real concerns that members of Congress, community groups, and their own employees, shareholders, and consumers have raised repeatedly, Amazon is acknowledging that Rekognition – a product Amazon markets to law enforcement – can and will misidentify people by default. That’s downright dangerous, and there’s more:
• Amazon makes no effort to ask users what they are using Rekognition for. Instead, the tool sets one default: the same 80% we used in running our test.
• Amazon’s website, right now, recommends an 80% confidence for recognising human faces. The attached image shows that Amazon is recommending an 80% confidence score for ‘Face-Based User Verification.’ If an 80% threshold is not ‘appropriate for identifying individuals with a reasonable level of certainty,’ why is Amazon encouraging customers to use that confidence level for recognising human faces?
“This just reinforces that Amazon needs to take greater responsibility for Rekognition. And Congress needs to put on the brakes and enact a moratorium on law enforcement use of face surveillance.”