Brief Published: 19 Jun 2020

Tech Companies Pause Facial Recognition Development

Black Lives Matter demonstrations have forced tech companies to reassess facial recognition tools

Leading tech companies are ceasing development of facial recognition tools and blocking police forces from implementing them. It’s a necessary move – but experts warn these measures aren’t drastic enough. As we’ve covered before on Stylus, current systems perpetuate systemic racism against Black people and ignore transgender individuals.

Facial recognition has become a key issue during recent Black Lives Matter demonstrations, as police forces deploy these tools for mass surveillance and to identify (or misidentify) “unruly” protestors. In response, IBM is ending the manufacture and use of its facial recognition tech, while Microsoft has paused selling its own to police forces until federal oversight of the tools improves. And Amazon announced a yearlong hiatus on police deployment of its controversial (and notoriously inaccurate) Rekognition application.

While halting facial recognition development signals a small step in the right direction, it also illustrates how Black Lives Matter protests have pushed large corporations to address how they perpetuate the status quo – an attitude that’s likely to become more relevant, as 68% of Americans now say they expect chief executives to address racial inequality (Morning Consult, 2020).

Companies that have benefitted from racially biased facial recognition can prove their reformation by funding efforts that advance racial justice in the tech industry, as MIT researcher and ethical technology advocate Joy Buolamwini urged in a Medium post responding to IBM’s move.

In the meantime, the onus falls on local and federal government to implement legislation that regulates facial recognition. As illustrated in the influential Gender Shades project by Buolamwini and Timnit Gebru, a researcher on Google’s ethical AI team, the tech consistently fails to identify Black individuals – and when it does identify them, it’s more likely to misgender or misattribute them. In legal situations, this can result in police charging the wrong people with crimes – see the recent case of a Black New Yorker who was misidentified as a robber by facial recognition tools at multiple Apple stores across the US.

Advocacy groups like the American Civil Liberties Union and the US division of Amnesty International have argued for the removal of facial recognition tech from areas like airport security and public surveillance. These demands directly target gaps in Amazon and Microsoft’s announcements, which do not restrict use by the US Department of Defense or ICE (Immigration and Customs Enforcement).

Look out for our upcoming episode of Stylus’ Future Thinking podcast, which will further explore algorithmic bias with artificial intelligence policy adviser Mutale Nkonde.
