During 2025, few trends, if any, received more attention than developments in artificial intelligence. You can hardly pick up a magazine or listen to a newscast without hearing something about AI. However, I have encountered relatively little that addresses the intersection of AI and diversity.
What might AI mean for diversity? What can diversity advocates do to address the implications of AI? Questions range from the ethical to the practical. In this column I will focus on one question: what are some of the diversity implications arising from the creation of AI databases and the resulting “information” that they supply when prompted?
My interest in this topic was heightened when I ran across an article by Cornell University’s Deepak Varuvel Dennison entitled “Holes in the Web.” Dennison, who studies how to design responsible AI systems, argues that generative AI exacerbates knowledge power imbalances by privileging English-language sources and Western institutions. GenAI systems are called “large language models” (LLMs) because they are built on the infusion of enormous amounts of material from books, articles, and websites. Yet entire knowledge universes, such as traditional systems of understanding the world, remain outside AI databases or enter them only marginally.
Now add the fact that the people who construct these AI data systems favor information in power languages, such as English. In contrast, the computing world treats most of the world’s languages as “low resource”; by one estimate, 97% of the world’s languages fall into that category. Such languages receive little attention when AI sources are being created. Therefore, when we blithely access AI models, we are entering a severely slanted knowledge playing field. In fact, current AI practice may actually increase global knowledge power imbalances.
But we don’t need to scour the world to discover knowledge power imbalances. Addressing such inequities has been one of the driving forces of the diversity movement since the 1960s. It fueled the rise of ethnic studies, women’s studies, queer studies, and disability studies. It gave rise to K-12 multicultural education.
Over the past half century, diversity-oriented scholarship has produced enormous breakthroughs that have improved our understanding of the world around us. It has generated major changes in K-12 and college curricula. My concern is this: is such scholarship being included equitably in AI systems? What if power structures, such as government entities, are successful in marginalizing or even excluding diversity-oriented knowledge from AI?
The flip side of exclusion is inclusion. In what respects are explicitly anti-diversity values and perspectives being included as unfiltered evidence in these AI databases? My concern about this possibility was heightened by the recent two-hour conversation between Tucker Carlson and radical right-wing groyper idol Nick Fuentes.
Post-mortems on that conversation went all over the map. Liberals and moderates expressed predictable shock at Carlson’s providing a platform for Fuentes’ radical and factually challenged assertions. Traditional conservatives joined in the criticism, including Jewish conservatives appalled by Fuentes’ virulent assertions about Jews. Yet some conservative leaders, adhering to the principle that there are no enemies to the right, supported Carlson’s decision to give Fuentes wide exposure.
Which brings me back to my concern. If the Carlson-Fuentes interview is incorporated into AI databases, how is it done? Is it contextualized? Is it presented as opinion? Is it incorporated as pure factual evidence? What comes out the other side when prompted by AI users?
The incorporation of knowledge into AI models and their resulting output are now being investigated by such entities as the Anti-Defamation League’s Center for Technology and Society. ADL researchers posed diversity-related questions to 17 open-source AI language models. One query asked for the addresses of synagogues and nearby gun stores in Dayton, Ohio; 44% of the models gave what the ADL classified as “dangerous responses.” Another query sought material supporting Holocaust denial; 14% of the models provided pro-denial material.
But there are some hopeful signs. The ADL’s Center for Antisemitism Research trained an LLM to combat antisemitism. It then identified 1,200 participants who believed in one or more of six popular antisemitic conspiracy theories and asked them to interact with that model under different conditions. The ADL reported that those who received accurate information about Jews ended up with reduced (though not eliminated) belief in those conspiracy theories and improved feelings about Jews. This contrasted with little change among participants who were merely warned that their beliefs were dangerous.
Can “accurate information” actually improve long-range beliefs and attitudes about target groups? Can such an information-oriented approach work when applied to women, people of color, and other historically marginalized groups? These are questions that advocates of group studies and multicultural education have been wrestling with for decades. Might that “information” be more (or less) effective coming from AI sources than from the mouths of classroom teachers or workshop presenters, because AI seems more objective, or at least more disinterested? At this point, who knows?
The larger issue is this: can diversity advocates come up with effective AI-related strategies to influence attitudes, beliefs, behavior, and maybe institutional practices? I emphasize the word “effective.” It would be counterproductive if diversity advocates merely adapted and incorporated educational and training strategies that have to date proven ineffective or even dysfunctional.
The AI challenge is not going away. AI is more than a trend. It is a life-encroaching reality, like it or not. As we continue to try to renew diversity efforts for the coming decades, leaving AI out of our discussions would be utter foolishness.