Women in AI

Image credit: Midjourney / Cori Rowley, McDougald Research

We spiraled. It all started with an innocent attempt at creating an AI-generated image for a case study intro slide, and it spurred a day-long exploration of the dangerous, racially biased outputs that AI image generators like Midjourney feed to users.

As an experienced qualitative research firm, we are often asked to share our work with colleges and potential clients, and at industry events. The ability to use AI to re-imagine a scene from research, or to visualize a story a participant told us, is intriguing to us.

Last year, we led an in-home interview in which a participant told us a story about her banking experience. We were sitting in the apartment she shared with her husband, young son, and mother. They had just moved to Central Ohio from another city to escape high crime rates and an employment desert. When we asked if she had a shared checking account with her husband, she said, “No. I can’t figure out how to set it up.” She went on to explain that her husband would get his check weekly, take it to a check-cashing location (where he would pay a fee), and then they would deposit that cash at an ATM for the bank where she holds her account. Her husband would then borrow her debit card whenever they needed to make a non-cash purchase. Her irritation with the situation was palpable. She didn’t have time to figure it out. She didn’t feel safe or financially savvy in the pattern they were in. But it was the only solution they could see.

We wanted to bring this story to life visually. Enter Midjourney. We have been using AI image-generation software to recreate scenarios from the field that could not be photographed, or that cannot be shared for privacy reasons. We, of course, have noticed that it isn’t always straightforward. We rarely get what we are looking for on the first try – or the second – or the third. Usability is only part of the problem.

The initial Midjourney bot entry: “/imagine: a Hispanic woman and her husband going to a bank to use an ATM in an urban area.” The outcome? A sexualized representation of a couple who look like the subjects of a romance novel. This is where the spiraling began. What would it take for Midjourney to produce an image that wasn’t sexualized? What words did we use to trigger this image generation?

Next, we added words like “lower income” and “desolate.” But all that did was give us a less glitzy, still sexualized main character, and a man who was no longer in a suit but in a white shirt with rolled-up sleeves. In the next round we changed “Hispanic” to “Latina,” and things got more sexually provocative, even though we asked the generator to put a plaid flannel shirt on her. In one of the images fed back to us, the male character had his hand on her chest (surprisingly vulgar, considering we just wanted an image of two people at an ATM). We continued slightly tweaking the /imagine request, documenting each step, each prompt, and our thought process in an Excel sheet.
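
For readers who want to run the same kind of experiment, a structured log makes the drift between prompts and outputs much easier to see afterward. The sketch below is a minimal, hypothetical version of our spreadsheet in Python; the file name, column names, and example entries are illustrative assumptions, not our actual tooling:

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("prompt_audit_log.csv")  # hypothetical file name
FIELDS = ["timestamp", "prompt", "change_from_previous", "observed_output"]

def log_prompt(prompt: str, change: str, observation: str) -> None:
    """Append one prompt iteration to a running CSV audit trail."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()  # write the header only once
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prompt": prompt,
            "change_from_previous": change,
            "observed_output": observation,
        })

# Example entries mirroring the iterations described above
# (the second prompt's exact wording is an illustrative assumption)
log_prompt(
    "a Hispanic woman and her husband going to a bank to use an ATM in an urban area",
    "initial prompt",
    "sexualized couple, romance-novel styling",
)
log_prompt(
    "a lower income Hispanic woman and her husband at an ATM in a desolate urban area",
    "added 'lower income' and 'desolate'",
    "less glitzy, still sexualized; suit replaced by rolled-up sleeves",
)
```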

So what does this all mean? It means that the world of AI imagery can be very useful, but it can also perpetuate dangerous stereotypes. A quick scroll through any social media platform today (Instagram and Pinterest are huge offenders) will surface some amount of AI imagery. Without any description or credit to the AI-generator software, viewers are consuming these images without any knowledge that what they are viewing isn’t real. A false reality, built from a conglomeration of data points from around the internet, expands, often without the viewer’s knowledge.

In a GeekWire article, Aylin Caliskan of the UW Information School explains that part of the reason AI imagery is rife with discriminatory representations is that the technology is moving too fast for its problems to be corrected: models learn from biased historical inputs, not the real-life experiences of Latina women.

“AI presents many opportunities, but it is moving so fast that we are not able to fix the problems in time and they keep growing rapidly and exponentially.”

– Aylin Caliskan, UW Information School

Of course, the hypersexualization of Latina women and women of Hispanic backgrounds isn’t a new concern. The AI boom has simply brought a spotlight to a longstanding, toxic problem. “Societal bias – the attribution of individuals or groups with distinct traits without any data to back it up – is a stubborn problem that has stymied humans since the dawn of civilization. Our introduction of synthetic intelligence may be making it worse.”

Continued hypersexualization of Latina women by artificial intelligence perpetuates a sense of permission to treat all Latina women as sexual objects. Just as a lack of representation can be damaging, systemic misrepresentation can cause significant harm. A falsely established perception of an entire culture tells Latina women how they are expected to show up in the world, which can leave them feeling unwelcome, insufficient, and intimidated. Racial battle fatigue can take a toll on a person who is constantly attempting to push past the racial stressors society places on them, while just trying to live their life. (See the paper on exposure to negative stereotypes in Social Cognitive and Affective Neuroscience, a journal published by Oxford University Press.)

So, how can we support Latina women in removing stereotypes from AI-generated images? We need to elevate their voices and share examples of how real Latina women act and what they contribute. In other words, celebrate the Latina women in your community for what they are doing, not what they look like.

In the Refinery29 article “I’m Latinx — & I’m Fed Up With Being Called ‘Exotic’,” author Kelsey Castañon shares the stories of eight multi-ethnic women who all identify as Latinx in some way, breaking through the dangerously fetishized depictions that have haunted them and sharing who they truly are.

For starters, we want to lift up ELLA (Empowering Latinas Leadership Academy), a nonprofit organization in Columbus, Ohio, that works to generate a positive future for Latinas. Its mission is to increase the representation of Latinas in decision-making through mentorship and networking opportunities. https://ellacolumbus.org/

It is crucial to recognize that other racial and ethnic groups, notably Asian women, experience similar misrepresentation and sexualization in AI-generated imagery. By acknowledging and confronting these issues head-on, we can work toward more equitable, inclusive, and respectful representations of women from all racial and ethnic backgrounds, in the digital landscape and beyond.

Further Reading:

LatinX in AI (LXAI) bridges communities, academics, industry, and politicians working to further AI innovation and resources for LatinX individuals globally. 

“Artificial Intelligence has the potential to displace workers of marginalized populations including those of Latinx origin. AI is already perpetuating social bias and prejudice because it lacks representation of LatinX professionals in the AI industry.”  – Latinx in AI

A recent article from NBC News on changes to Google’s generative AI model Gemini following complaints of racial bias.

“When these images are disseminated on the internet, without blurring or marking that they are synthetic images, they end up in the training data sets of future AI models,” Caliskan said. “It contributes to this entire problematic cycle.”

– University of Washington, https://www.washington.edu/news/2023/11/29/ai-image-generator-stable-diffusion-perpetuates-racial-and-gendered-stereotypes-bias/

A Call to Action for our AI-using peers:

We need you to advocate for proper representation. If you are creating AI images and publishing them on the internet, label them clearly as AI-generated (see the labeling sketch after the list below).

Share REAL voices.

Advocate for AI transparency.
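
There is no universal standard for labeling synthetic images yet, but even embedded metadata is better than nothing. As one illustrative sketch (the file names, field names, and disclosure text here are our assumptions, not an established convention), the Python imaging library Pillow can write a disclosure into a PNG’s text chunks so the label travels with the file:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Hypothetical example: embed an AI-disclosure note in a PNG's text chunks
# so the label stays attached to the image file itself.
img = Image.open("generated_scene.png")  # assumed input file

meta = PngInfo()
meta.add_text("Source", "AI-generated (Midjourney)")  # generator disclosure
meta.add_text("Disclosure", "Synthetic image; not a photograph of real people")

img.save("generated_scene_labeled.png", pnginfo=meta)
```

Note that many platforms strip metadata on upload, so a visible caption or credit line remains the more reliable label; the embedded note is a backstop, not a substitute.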

What did we read while writing this? Here is our breadcrumb trail. >>>

Who is Hispanic? Pew Research Article

https://www.pewresearch.org/short-reads/2023/09/05/who-is-hispanic/

Refinery29 article:

https://www.refinery29.com/en-us/2018/05/197463/latina-hispanic-stereotypes-culture-fetishization


GeekWire Article: https://www.geekwire.com/2023/ai-imaging-software-generates-a-gallery-of-stereotypes-say-univ-of-washington-researchers/


Biden’s Executive Order for AI Protections - Vox article:
https://www.vox.com/technology/2023/10/31/23939157/biden-ai-executive-order


Digital Violence Against Latinas – https://boldlatina.com/how-ai-generated-images-are-threatening-latinas-rights/


https://pressbooks.claremont.edu/las180genderanddevelopmentinlatinamerica/chapter/chloe-gill/


https://yale-herald.com/2022/10/16/speak-spanish-for-me-the-fetishization-of-latina-women-at-yale-and-beyond/

