
In pursuit of justice and racial equity, IBM no longer offers facial recognition technology.

Machines reflect the “coded gaze” — priorities, preferences, and prejudices — of those who train them.

“We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”

— Arvind Krishna (IBM CEO)

This Orwellian technology needs to be reformed. AI-fueled automation helps determine who is hired, fired, promoted, or granted a loan or insurance, and even how long someone spends in prison. State and local police departments operate their own advanced facial recognition systems, which are largely unregulated and have been shown to disproportionately harm racial and ethnic minorities.

As of 2016, one in two American adults (117 million people) is in a law enforcement face recognition network. At least one in four state or local police departments can run face recognition searches through its own or another agency’s system. These systems are not inherently neutral: they reflect the “priorities, preferences, and prejudices — the coded gaze — of those who have the power to mold artificial intelligence.”

Machine vision inherits human bias. Algorithms perform worse on Black and Brown faces than on others and are up to 100 times more likely to misidentify people of color. According to a study by MIT and Stanford researchers, these biases trace back to skewed training data: one widely used face data set was estimated to be more than 77% male and more than 83% white. When facial recognition programs search mugshot databases, people of color (particularly young Black men) are overrepresented in the possible matches. Even if the algorithms were fed equitable data to correct for biased training, the use of facial recognition within the existing criminal justice system could simply replicate the over-policing of Black and Brown communities.
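
To make that mechanism concrete, here is a minimal sketch of auditing a data set’s demographic balance before training on it. It is written in Python with synthetic labels; the column names are assumptions for illustration, not the schema of any real benchmark.

```python
import pandas as pd

# Synthetic metadata mirroring the skew reported in the MIT study:
# 77% male and 83% white faces. Column names are illustrative only.
metadata = pd.DataFrame({
    "gender":    ["male"] * 77 + ["female"] * 23,
    "skin_type": ["lighter"] * 83 + ["darker"] * 17,
})

# Report each attribute's share of the data set. A split this lopsided
# predicts degraded performance on the underrepresented groups.
for column in metadata.columns:
    print(metadata[column].value_counts(normalize=True).round(2), "\n")
```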

In early 2018, IBM’s software came under heavy scrutiny for bias in analyzing gender and skin tone. The authors of Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification found the highest error rates for images of dark-skinned women and the most accurate results for light-skinned men. After years of trying to improve accuracy and reduce bias with marginal results, IBM decided to sunset the software so that state and federal agencies can no longer misuse it. Private companies are finally taking steps to prioritize people over profits. But is that enough?
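
Before answering, it is worth seeing what Gender Shades actually measured: disaggregated evaluation, where accuracy is reported per intersectional subgroup rather than as one aggregate number. The toy sketch below uses made-up predictions and illustrative column names; it is not the study’s code.

```python
import pandas as pd

# Made-up gender-classifier results, one row per test image.
# The pattern of errors loosely echoes the Gender Shades finding.
results = pd.DataFrame({
    "skin":    ["lighter", "lighter", "darker", "darker"] * 3,
    "gender":  ["male", "female"] * 6,
    "correct": [True, True, True, False,
                True, True, False, False,
                True, False, True, False],
})

# One aggregate number hides where the errors fall...
print(f"Overall accuracy: {results.correct.mean():.0%}")

# ...while the intersectional breakdown exposes the disparity:
# best for lighter-skinned men, worst for darker-skinned women.
table = results.groupby(["skin", "gender"])["correct"].mean().unstack()
print(table)
```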

High-level scorecards inform casual consumers about the performance of specific state and federal agencies, but they neglect metrics on racial bias. We need to better understand how AI contributes to cycles of systemic racism and bias before distributing it and profiting from it. Before you release an algorithm or software system, consider how this burgeoning technology reflects the coded gaze.
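
As a sketch of what such a metric could look like, a scorecard might report the false-match-rate gap between demographic groups, since false matches are what drive misidentification. The data, group labels, and column names below are invented for illustration.

```python
import pandas as pd

# Hypothetical verification trials: impostor pairs only, with whether
# the system wrongly declared a match. Names and values are invented.
trials = pd.DataFrame({
    "group":    ["group_a"] * 4 + ["group_b"] * 4,
    "declared": [False, False, False, True,   # 1/4 false matches
                 True,  True,  False, True],  # 3/4 false matches
})

# False-match rate per group, plus the gap a scorecard could publish.
fmr = trials.groupby("group")["declared"].mean()
print(fmr)
print(f"FMR gap: {fmr.max() / fmr.min():.1f}x")
```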

“We risk losing the gains made with the civil rights movement and women’s movement under the false assumption of machine neutrality. We must demand increased transparency and accountability.”

— Gender Shades

What can you do to stay informed?

  • Follow Mimi Onuoha, a Brooklyn-based artist, researcher, and technologist investigating the social implications of data collection. Her work uses text, code, performance, and objects to explore missing data and the ways in which people are abstracted, represented, and classified.
  • Stay up-to-date with Mother Cyborg (also known as Diana Nucera). She’s an organizer and artist whose work focuses on developing popular education materials that empower communities to use media and technology to investigate, illuminate, and develop visionary solutions to challenges. Her music and laser performances create opportunities to connect stories, invigorate the soul, and elevate our collective consciousness of technology.
  • Check out this zine about AI.
  • Consider contributing to Fight for the Future, a nonprofit grassroots advocacy group that organizes efforts to ban unjust facial recognition technology.

Read some technical literature:

Thanks for reading! I’m really interested in hearing your feedback 😃.
