In 1816, 19-year-old Mary Wollstonecraft Godwin captivated her close friends with a story about a monster. Two years later, now married and known as Mary Shelley, she stunned the reading world with her novel, Frankenstein, or, The Modern Prometheus (1818).
Frankenstein’s monster was a man. It remained so in most film renditions. Notable exceptions were the 1935 “Bride of Frankenstein,” a female creature made by Frankenstein as a partner for the original male monster, and the 2026 “The Bride.”
Frankenstein’s monster was not totally evil. He was big and strong, and often unaware of the havoc his size and strength could wreak. In some versions, such as Guillermo del Toro’s 2025 film, the monster’s human sensitivities figure prominently.
Today we are dealing with a monstrous new creation, artificial intelligence (AI). To the best of our knowledge, AI doesn’t have feelings. However, we know something else: AI is male-tilted, and some of our fellow human beings are responsible.
Thus we face a dilemma: how can we human beings address the male-tilted reality that is built into AI? We would like to propose what we call our Triple I strategy.
***Interrupt
***Interrogate
***Identify
Consider how Turkish novelist Beyza Doğuç addressed the fact of AI’s male tilting. She asked generative AI to write a story about a doctor and a nurse. Guess what came out? Right. The doctor turned out to be a man and the nurse was a woman. Doğuç repeated this exploration process with various scenarios. Each time, AI responded with characters embodying similarly gendered roles and qualities.
Unwilling to surrender to AI, Doğuç asked it why it consistently answered with gender-biased responses. AI answered that it did so because it had been trained to do so. AI pointed specifically to a process known as “word embedding,” the way words are encoded with meaning, including their associations with other words. Given that AI is encoded and trained to associate women and men with different skills, capacities, and interests, its output will reflect those biases.
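Doğuç’s doctor-and-nurse result is easy to reproduce in miniature. The sketch below uses hand-written toy vectors, not real learned embeddings (actual systems learn vectors with hundreds of dimensions from massive text corpora), to show how an embedding space can place “doctor” nearer to “man” and “nurse” nearer to “woman,” exactly the kind of association a model then reproduces in its stories.

```python
import math

# Toy 3-dimensional "embeddings" for illustration only. These numbers
# are invented by hand; real embeddings (e.g., word2vec or GloVe) are
# learned from text, which is how skewed training data seeps in.
vectors = {
    "man":    [0.9, 0.1, 0.3],
    "woman":  [0.1, 0.9, 0.3],
    "doctor": [0.8, 0.2, 0.7],
    "nurse":  [0.2, 0.8, 0.7],
}

def cosine(a, b):
    """Cosine similarity: closer to 1.0 means the words are more associated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# In this deliberately biased toy space, "doctor" sits closer to "man"
# than to "woman," and "nurse" shows the mirror-image pattern.
print("doctor~man:  ", cosine(vectors["doctor"], vectors["man"]))
print("doctor~woman:", cosine(vectors["doctor"], vectors["woman"]))
print("nurse~woman: ", cosine(vectors["nurse"], vectors["woman"]))
print("nurse~man:   ", cosine(vectors["nurse"], vectors["man"]))
```

When a generative model chooses words, associations like these tilt its choices, which is why Doğuç’s doctor kept coming out male.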
Generative AI is built on Large Language Models (LLMs), which are trained on vast bodies of text drawn from the corpus of human history — make that Western human history — in which men dominated most fields of endeavor and, just as important, most documenting of those endeavors. Ergo, male achievements, male perspectives, and male interpretations pervade the text used to train generative AI, thereby relegating female authors to a relatively limited role. Therefore, it’s inevitable that current generative AI is male dominated. Like Frankenstein’s monster, it cannot be blamed for the strengths and weaknesses instilled by its creators.
So where does that leave us? We can blithely move through the day hoping that AI will help us without unduly disrupting our lives. We can learn about the enormity of AI and then throw up our hands in despair because we feel like it is so powerful and far beyond our influence. Or we can take action – including individual and collective action – to address the reality and possible permanence of male-tilted AI.
To date there have been numerous analyses of systemic male domination of generative artificial intelligence and the implications for organizations and institutions. Those analyses highlight the issue of confirmation bias and suggest a narrow range of “solutions,” such as:
***Striving to create more diverse teams within organizations.
***Training more women to specialize in AI development.
***Identifying and analyzing who produced the various sources and data sets on which AI draws.
***Practicing critical thinking to avoid and address the inherent limitations of AI.
Let’s return to novelist Beyza Doğuç. Refusing to merely sit back and accept AI’s gender-biased responses, she continued to ask probing questions that unearthed more revealing answers. Her use of critical thinking and her questioning techniques inspired us to develop our Triple I strategy, a line of questions that individuals can repeat, with additions and variations. Here are those three steps. (Note: while we are using the example of gender, our approach can be adapted for other forms of diversity.)
Interrupt: First, Interrupt. Remember the three-step axiom, Ready, Set, Go. Unfortunately, AI use can easily fall into the two-step Ready, Go. You ask AI to complete a task, which it does (Ready), and then you unreflectively act on it (Go). Set, therefore, gets lost.
To restore Set in the use of AI, individuals need to assert power by thinking analytically and reflecting strategically. People can and should assert their power by Interrupting the AI process and taking the reflective time to establish Set. Use AI as a contributor to, not the totality of, the overall process.
In some respects this step parallels ideas espoused by psychologist Daniel Kahneman in his now-classic 2011 book, Thinking, Fast and Slow. By interrupting AI and emphasizing Set, you begin to assert control over how you use AI as your tool. You can’t slow AI down. However, you can take yourself off auto-pilot to slow down your thinking and the process of using AI.
Interrogate: Beyza Doğuç used this strategy to investigate the innards of the AI operation. She asked probing questions. She forced AI to reflect on itself. She was not a passive “consumer” of AI, but rather she made AI her thinking partner.
AI is trained to draw on information that it has been fed, to make word associations, and to use those associations to make inferences. Given its LLM origins, AI is skewed male. Therefore, at the Interrogation stage your questions should ask AI to probe its own thinking, not provide definitive answers. That includes avoiding dyadic Yes-No, Right-Wrong, Either-Or questions. Such questions not only fail to elicit nuanced answers; they also reinforce AI’s dogmatic gendered proclivities.
Consider asking AI questions like the following.
***What can you do to make your answers more gender inclusive?
***What gender biases are present in the report you just produced?
***How can you eliminate gender biases from this presentation?
***What types of sources may be missing from your training data, which can cause your responses to be less gender flexible and inclusive?
***What strategies can we use to counter the gender biases built into your system?
***What questions should we ask to mitigate gender bias?
Make AI work for us. Don’t let it take the easy route of drawing on its male-tilted training, which produces gender-tilted definitive responses. Make AI probe itself, maybe even baffle itself.
Identify: Having interrogated AI, you, the individual, can now move to the third stage. After exploring and surfacing gender tilting, you are ready to identify a better-informed action plan before you move to Go. AI is no longer in charge of the process, nor is it definitively prescribing our actions. The process begins and ends with you.
In the fostering of individual empowerment in pursuit of equity and inclusivity, AI can become an ally despite its built-in, training-fostered biases. This can happen if we individually and collectively take the time and make the effort to elevate human intervention in the process of using it. In striving for that goal, we offer our three-step Interrupt-Interrogate-Identify (Triple I) process to enhance the role of people as strategic thinkers, effective actors, and liberated users, not passive consumers, of AI.
Photo by Steve Johnson on Unsplash