Artificial Intelligence: The Next Front of the Fight Against Institutional Racism

Evelyn Johnson
Illustration: © IoT For All

It’s been three months since the world was shaken by the brutal murder of George Floyd. The image of a white police officer kneeling on a Black citizen’s neck for 8 minutes and 46 seconds is still fresh in America’s collective memory.

This wasn’t the first case of racially-charged police brutality in the US. And unfortunately, it won’t be the last one either.

Racism in this country has deep roots. It is a festering wound that’s either ignored or treated with an ineffective medicine. There’s no end in sight to institutional racism in the country, and to make matters worse, this disease is finding new ways to spread.

Even Artificial Intelligence, which is said to be one of the biggest technological breakthroughs in modern history, has inherited some of the prejudices that sadly prevail in our society.

Can AI Be Biased?

A few years ago, it would’ve been ridiculous to suggest that computer programs could be biased. After all, why would any software care about someone’s race, gender, or color? But that was before machine learning and big data empowered computers to make their own decisions.

Algorithms are now enhancing customer support, reshaping contemporary fashion, and paving the way for a future where everything from law and order to city management can be automated.

“There’s an extremely realistic chance we are headed towards an AI-enabled dystopia,” explains Michael Reynolds of Namobot, a website that generates blog names with the help of big data and algorithms. “Erroneous datasets that contain human interpretations and cognitive assessments can make machine-learning models transfer human biases into algorithms.”

This isn’t something far off in the future; it’s already happening.

Unfortunate Examples of Algorithm Bias

Risk assessment tools are often used in the criminal justice system to predict the likelihood that a defendant will commit another crime. In theory, this Minority Report-style technology is meant to deter future crimes. However, critics believe these programs harm minorities.

ProPublica put this to the test in 2016 when it examined the risk scores of over 7,000 people arrested in Broward County, Florida, over a two-year period, to see who was charged with new crimes in the following couple of years.

The results showed what many had already feared: the algorithm rated Black defendants as nearly twice as likely to reoffend as white defendants. Yet only 20% of those predicted to engage in criminal activity actually did so.
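Audits of this kind hinge on comparing error rates across groups, not just overall accuracy: a tool can look accurate on average while flagging one group’s non-reoffenders as “high risk” far more often. A minimal sketch of that computation, using entirely hypothetical records rather than the real ProPublica data:

```python
# Sketch of a ProPublica-style audit: compare false positive rates
# (people labeled "high risk" who never reoffended) across groups.
# All records below are hypothetical, for illustration only.

def false_positive_rate(records):
    """Share of non-reoffenders who were labeled high risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["high_risk"]]
    return len(flagged) / len(non_reoffenders)

# Hypothetical defendant records: group, risk label, and actual outcome
records = [
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": True},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": True,  "reoffended": True},
    {"group": "B", "high_risk": False, "reoffended": False},
]

by_group = {
    g: false_positive_rate([r for r in records if r["group"] == g])
    for g in ("A", "B")
}
print(by_group)  # a large gap between groups signals disparate error rates
```

In this toy data, half of group A’s non-reoffenders are flagged versus none of group B’s, which is exactly the kind of asymmetry the ProPublica analysis surfaced at scale.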

Similarly, facial recognition software used by police could end up disproportionately affecting African Americans. According to a study co-authored by the FBI, the face recognition systems used in cities such as Seattle may be less accurate on Black people, leading to misidentification and false arrests.

Algorithmic bias isn’t limited to the justice system, either. Black Americans are routinely denied access to programs designed to improve care for patients with complex medical conditions; for the same ailments, these programs are less likely to refer Black patients than white patients.

To put it simply, tech companies are feeding their own biases into these systems, the very systems designed to make fair, data-driven decisions.

So what’s being done to fix this situation?

Transparency is Key

Algorithmic bias is a complex issue, mostly because it’s hard to observe. Programmers are often baffled to discover that their algorithm discriminates against people on the basis of gender or color. Last year, Steve Wozniak revealed that Apple gave him a credit limit ten times higher than his wife’s, even though she had a better credit score.

It is rare for consumers to spot such disparities on their own, and studies that examine discrimination on the part of AI take considerable time and resources. That’s why advocates are demanding more transparency around how these systems operate.

The problem merits an industry-wide solution, but there are hurdles along the way. Even when algorithms are revealed to be biased, companies rarely allow outsiders to analyze the data and aren’t thorough with their own investigations. Apple said it would look into the Wozniak issue, but so far, nothing has come of it.

Bringing about transparency would require companies to reveal their training data to observers or open themselves up to third-party audits. Programmers can also take the initiative and run tests to determine how their systems fare when applied to individuals from different backgrounds.
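One simple self-test of the kind described above is to compare a model’s favorable-outcome rates across groups, for instance against the “four-fifths” rule of thumb used in US employment-discrimination guidance. A minimal sketch, where the decisions and groups are hypothetical placeholders rather than any real system’s output:

```python
# Sketch of a simple self-audit: compare a model's approval rates across
# demographic groups using the "four-fifths" (80%) rule of thumb.
# The decision lists below are hypothetical placeholders.

def approval_rate(decisions):
    """Fraction of applicants approved (True = approved)."""
    return sum(decisions) / len(decisions)

def passes_four_fifths(rate_a, rate_b):
    """True if the lower approval rate is at least 80% of the higher one."""
    low, high = sorted([rate_a, rate_b])
    return low / high >= 0.8

# Hypothetical model decisions for applicants from two groups
group_a = [True, True, True, False]    # 75% approved
group_b = [True, False, False, False]  # 25% approved

rate_a = approval_rate(group_a)
rate_b = approval_rate(group_b)
print(rate_a, rate_b, passes_four_fifths(rate_a, rate_b))
```

A failing check like this one doesn’t prove discrimination on its own, but it flags exactly the kind of disparity that should trigger a closer look before a system is deployed.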

To ensure a baseline of transparency, both the data used to train an AI system and the data used to evaluate it should be made public. This should be easier to achieve in government settings; the corporate world is likely to resist such ideas.

Diversifying the Pool

According to a paper published by a New York University research center, the lack of diversity in AI has reached “a moment of reckoning.” The research indicates that the AI field is overwhelmingly white and male and, as a result, risks reasserting historical power imbalances and biases.

“The industry has to acknowledge the gravity of the situation and admit that its existing methods have failed to address these problems,” explained Kate Crawford, an author of the report.

With Black employees making up only about 4% of the workforce at both Facebook and Microsoft, it’s quite clear that minorities are not fairly represented in the AI field. Researchers and programmers remain a largely homogeneous population, often drawn from positions of relative privilege.

If the pool were diversified, the data would be much more representative of the world we inhabit, algorithms would gain perspectives that are currently ignored, and AI programs would be far less biased.


Is it possible to create an algorithm that’s completely free of bias? Probably not.

Artificial Intelligence is designed by humans, and people are never truly unbiased. But programs created exclusively by individuals from dominant groups will only perpetuate injustices against minorities.

To make sure that algorithms don’t become a tool of oppression against Black and Hispanic communities, public and private institutions should be pushed to maintain a level of transparency.

It’s also imperative that big tech embraces diversity and elevates programmers belonging to ethnic minorities. Moves like these can save our society from becoming an AI dystopia.

Evelyn Johnson

Guest Writer
Guest writers are IoT experts and enthusiasts interested in sharing their insights with the IoT industry through IoT For All.