Racial Bias in AI: Dissecting the Issue and How to Combat It
Artificial intelligence (AI) is a useful tool for many different functions. AI can automate tasks such as data entry, provide personalized recommendations such as restaurant suggestions, answer health questions, write essays, and much more by drawing on the training data it has been exposed to. While AI is efficient and beneficial for some, it is not inclusive of all people, which is a growing concern as AI becomes more and more incorporated into people's lives. AI is racially biased, and it lacks an understanding of the prejudices present in our society because its input is not diverse enough.

AI has many definitions, and they are ever-evolving. According to "What is AI, Anyway?" from The Age of Intelligent Machines by Ray Kurzweil, "Artificial Intelligence is the art of creating machines that perform functions that require intelligence when performed by people" (Kurzweil 14). In other words, it is the human mind as a machine. In one of my previous media classes at NYU Gallatin, we watched a documentary called Coded Bias. In it, Joy Buolamwini, an MIT student and computer scientist, discovered a significant error in facial recognition software: as a Black woman, her face was not detected by the system, but when she put on a white mask, her face was recognized (Coded Bias). This discovery surprised me and influenced me to look more closely into AI's racial bias.
So, how does racial bias occur? In Algorithms of Oppression, Safiya Umoja Noble, a UCLA professor who studies racist and sexist biases in algorithms, writes that "Search is a symbiotic process that both informs and is informed in part by users" (Noble 25). In other words, AI systems work with the information that the computer engineers who build them feed in, along with the information that everyday users input, and much of that information comes predominantly from and for white people. Much of the data is also gathered from more powerful countries with stronger programs and resources, such as the US, so it lacks global and diverse perspectives. As Noble puts it, "Despite the widespread beliefs in the Internet as a democratic space where people have the power to dynamically participate as equals, the Internet is in fact organized to the benefit of powerful elites" (Noble 48). Examples of AI's racial bias include misrepresentations of Black people on the internet, racial stereotyping in generative AI models, predictive policing in which police predict future criminals from zip codes, locations, and demographic data, biases in job hiring, and facial recognition systems failing to recognize the faces of people of color, as in Buolamwini's discovery. According to Forbes, "A report published in the Journal of Biometrics and Biostatistics found Black women between the ages of 18 and 30 are the demographic with the poorest facial recognition accuracy" (Forbes). This is a significant problem.
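The feedback loop Noble describes, in which skewed inputs produce skewed outputs, can be made concrete with a short simulation. The sketch below is my own illustration with invented numbers, not data from any of the sources above: it calibrates a hypothetical "face detector" threshold on a training set that is 90% group A and only 10% group B, then measures how often each group's faces pass the threshold.

```python
import random

random.seed(0)

def sample(group, n):
    # Hypothetical "detectability" scores; the gap between the two
    # group means is an invented assumption for illustration only.
    mu = 0.7 if group == "A" else 0.4
    return [random.gauss(mu, 0.1) for _ in range(n)]

# Training data skewed 90/10 toward group A, mirroring how datasets
# from well-resourced regions and populations dominate.
train = sample("A", 900) + sample("B", 100)
train.sort()
# Pick the threshold that accepts 95% of the (mostly group A) training set.
threshold = train[int(0.05 * len(train))]

def detection_rate(scores):
    return sum(s >= threshold for s in scores) / len(scores)

test_a = sample("A", 1000)
test_b = sample("B", 1000)
print(f"group A detection rate: {detection_rate(test_a):.2f}")
print(f"group B detection rate: {detection_rate(test_b):.2f}")
```

Because the threshold is calibrated almost entirely on group A, group B's detection rate falls far below group A's, echoing on a toy scale the accuracy gap that the report cited by Forbes found for Black women.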
AI is not only racially biased; it also reflects other stereotypes embedded in society toward certain groups. According to the USC Viterbi School of Engineering, a study found that 38.6% of the information used by AI was biased. "The results showed that women are seen more negatively than men, and even described with qualifiers that can't be said on broadcast television before 10 p.m., like the 'B' word. Muslims are associated with words like terrorism, Mexicans with poverty, policemen with death, priests with pedophilia, and lawyers with dishonesty" (USC Viterbi School of Engineering). This is a significant issue because AI continues to perpetuate stereotypes that exist outside of the internet and to instill them in the information we consume, harming people of color as well as many other marginalized groups. Users are not the only ones responsible for AI's bias; so are computer engineers, since those who work on AI systems are predominantly white. With little input from people of color in creating AI systems, the internet remains biased: its origins are white-centered, and the computer engineering field continues to be white-dominated, allowing this cycle to continue. Noble writes, "The political nature of search demonstrates how algorithms are a fundamental invention of computer scientists who are human beings-and code is a language full of meaning and applied in varying ways to different types of information. Certainly, women and people of color could benefit tremendously from becoming programmers and building alternative search engines that are less disturbing and that reflect and prioritize a wider range of informational needs and perspectives" (Noble 26).
One step to combat this issue, then, is to have more people of color working on AI systems and to put more effort into gathering input from diverse perspectives. If more people of color work on these systems and give their input, racial bias is more likely to be addressed before AI software is released into the world. This would be helpful; however, it is simply not enough. Regulations need to be put in place to restrict bias so that it does not continue to harm people and reinforce stereotypes. According to an article by the United Nations Human Rights Office of the High Commissioner, Ashwini K.P. said, "the government needs to develop AI regulatory frameworks based on an understanding of systemic racism and Human Rights Law; enshrine a legally binding obligation to conduct comprehensive human rights due diligence assessments, including explicit criteria to assess racial and ethnic bias, in the development and deployment of all AI technologies; and consider prohibiting the use of AI systems that have been shown to have unacceptable human rights risks, including those that foster racial discrimination" (United Nations Human Rights Office of the High Commissioner). A mix of input from more diverse perspectives and government regulation would be a great start toward creating change.