The Roots of Biased AI
Human prejudice stretches back millennia, and the seeds of racism and bias sown long ago have now taken root and flourished within artificial intelligence. Bias existed long before machine learning algorithms emerged; whenever society invents a new technology, that technology inherits the prejudices and discrimination of earlier eras. In the 1930s, redlining maps dictated who could receive loans, systematically denying Black Americans access to mortgages, insurance, and other essential financial services. Today’s credit-scoring algorithms still mirror those exclusions. As AI extends into recruitment, administration, medicine, and the media, alarm bells are sounding: if we do not imbue our machines with ethical values, they will merely magnify our deepest biases.
Just a few days ago, I encountered an article generated by AI—yet its prose unmistakably reflected human prejudice. While biases introduced via “prompt framing” are easy to detect, the subtler distortions in AI run far deeper, rooted in history. Jim Crow laws once codified racial segregation across the American South; decades later, the Home Owners’ Loan Corporation produced “redlining maps” that labeled predominantly Black neighborhoods as “hazardous,” denying residents access to loans to buy or improve homes. Those bureaucratic red lines manufactured inequality—and their legacy persists in today’s data. Similar forms of institutional discrimination have appeared around the world: Canada’s “Chinese Head Tax” targeted Chinese immigrants; during World War II, the United States forcibly interned Japanese Americans; and for over forty years, under the guise of “scientific research,” the Tuskegee syphilis study denied Black men treatment. These racist policies were enshrined in law and practice, normalizing prejudice.
Why do these historical injustices matter now? Modern AI relies on vast repositories of documents—laws, court records, medical files, employment histories—that often carry those same discriminatory patterns. When we train AI models on data imbued with old institutional inequalities, and fail to correct for them, we risk recreating those injustices at digital scale and speed.
We have seen alarming digital echoes of this history. In 2015, Google Photos infamously labeled images of dark-skinned individuals as “gorillas,” reviving dehumanizing comparisons once leveled against Black people. COMPAS, a software tool used to predict recidivism, was found to misclassify Black defendants who did not reoffend as high risk at nearly twice the rate of white defendants, a pattern rooted in historically biased policing and arrest records. Amazon’s experimental résumé-screening tool penalized applications containing the word “women’s,” as in “women’s chess club captain,” revealing entrenched gender bias in its training data. In Detroit, a facial-recognition error led to the wrongful arrest of Robert Williams, a Black man who had committed no crime. Each example underscores that AI systems mirror the biases in their training data. If history is skewed, AI will be, too.
So why isn’t AI neutral? One major culprit is biased training data. As Brian Christian describes in “The Alignment Problem,” early facial-recognition datasets underrepresented darker-skinned faces, making the resulting models markedly less accurate for those groups. A model’s only objective is to maximize the performance metric it is trained on, the “points” it scores, without regard for human values. Compounding this, deep neural networks contain millions of hidden parameters, rendering their decision-making processes largely opaque, even to their creators. This mismatch between AI’s optimization targets and human ethical standards is known as the alignment problem: an AI will pursue its programmed goals relentlessly, even when they conflict with our values, because it lacks an emotional or moral compass. Nick Bostrom’s “paperclip maximizer” thought experiment dramatizes the risk: an AI instructed solely to produce paperclips might convert the entire planet into a paperclip factory to meet its quota. Though hypothetical, it vividly illustrates the stakes.
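To make that mechanism concrete, here is a minimal illustrative sketch, not drawn from Christian’s book or any deployed system: it builds a synthetic dataset (using numpy and scikit-learn, with made-up group sizes and noise levels) in which one group is underrepresented and described by noisier features, then trains a classifier that optimizes a single aggregate accuracy score.

```python
# Illustrative sketch only: synthetic data, invented proportions and noise levels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 10_000

# 90% of examples come from group 0, 10% from group 1 (the underrepresented group).
group = rng.choice([0, 1], size=n, p=[0.9, 0.1])
label = rng.integers(0, 2, size=n)  # the outcome the model must predict

# The feature tracks the label closely for the majority group but is far
# noisier for the minority group, mimicking poorer data coverage.
noise_scale = np.where(group == 0, 0.5, 2.5)
feature = label + rng.normal(0.0, noise_scale, size=n)
X = feature.reshape(-1, 1)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, label, group, test_size=0.3, random_state=0
)

# The model optimizes one aggregate objective; subgroup performance never enters it.
model = LogisticRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

print("overall accuracy :", accuracy_score(y_te, pred))
print("majority accuracy:", accuracy_score(y_te[g_te == 0], pred[g_te == 0]))
print("minority accuracy:", accuracy_score(y_te[g_te == 1], pred[g_te == 1]))
```

Run as written, the overall accuracy tends to sit close to the majority group’s accuracy while the minority group’s accuracy lags far behind, and nothing in the training objective ever registers the gap.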
To guide AI toward fairness and equity, experts have proposed the RICE framework: Robustness, Interpretability, Controllability, and Ethicality. Under RICE, AI systems must operate reliably across diverse cultural and national contexts; users should understand the rationale behind AI decisions; humans must maintain ultimate control; and AI must embody moral values such as justice and equality. But can we fix bias simply by refining data and algorithms? Unless we address the broader societal inequalities, discrimination, and structural imbalances that underlie our data, AI will continue to absorb and reproduce these injustices—perhaps even accelerating them. Human values evolve over time, and AI must evolve in step. Artificial intelligence lacks a conscience of its own; it reflects only the information we provide. If that information is flawed, biased, or racist, AI will keep repeating history’s darkest chapters.
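Oversight, in turn, has to be operational rather than aspirational. The sketch below is one hedged illustration of what a basic audit could look like; it is a generic example, not a procedure prescribed by the RICE framework, and the labels, predictions, and group names are invented for the demonstration. It compares a model’s false positive rates across groups, the same kind of error-rate disparity ProPublica documented for COMPAS.

```python
# Illustrative audit sketch: toy data, not any real system's decisions.
import numpy as np

def false_positive_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Share of truly negative cases the model wrongly flags as positive."""
    negatives = y_true == 0
    if not negatives.any():
        return float("nan")
    return float(np.mean(y_pred[negatives] == 1))

def audit_by_group(y_true, y_pred, group):
    """Report the false positive rate separately for each group."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    return {str(g): false_positive_rate(y_true[group == g], y_pred[group == g])
            for g in np.unique(group)}

# Made-up example: true outcomes, model decisions, and a protected attribute.
y_true = [0, 0, 1, 0, 1, 0, 0, 1]
y_pred = [1, 0, 1, 0, 1, 1, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

# A large gap between groups signals unequal error burdens.
print(audit_by_group(y_true, y_pred, group))
```

Measuring the gap does not repair it, but it makes the harm visible and quantifiable, which is a precondition for the interpretability and human control that RICE demands.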
We still have time to change this trajectory. By designing AI systems grounded in human values, curating fair and representative datasets, and instituting robust oversight, we can create a future in which technology uplifts our highest ideals rather than perpetuating our deepest prejudices.