Timon Harz
December 16, 2024
The viral AI avatar app Lensa undressed me—without my consent
My avatars were cartoonishly pornified, while my male colleagues got to be astronauts, explorers, and inventors.

When I jumped into the buzz around Lensa, the viral AI avatar app, I was excited to see what it could do. The app took off after adding its AI-powered "Magic Avatars" feature, which turns selfies into stylized digital portraits. My colleagues had gotten stunning results: astronauts, fierce warriors, and album cover-worthy images. I was expecting the same.
But when I tried it myself? The results were anything but flattering. Out of 100 avatars, 16 were topless. And another 14 showed me in ridiculously revealing outfits and highly sexualized poses.
I’m of Asian descent, and the AI seemed to fixate on that, producing avatars that looked like they were ripped straight out of an anime or video game. Most shockingly, a disturbing number of my avatars were nude or near-naked. A few even depicted me crying. Meanwhile, my white female colleague got far fewer sexualized images, with just a couple showing cleavage. Another colleague of Chinese descent? They got results shockingly similar to mine—endless sexualized, pornographic avatars.
Is this the future of AI? Let’s just say, I wasn’t expecting this kind of "personalization."

Lensa’s obsession with Asian women was so intense that even when I instructed the app to create male avatars, it still produced female nudes and sexualized poses.

The hypersexualized results I received aren’t surprising, says Aylin Caliskan, an assistant professor at the University of Washington who researches biases and representation in AI.
Lensa’s avatars are generated using Stable Diffusion, an open-source AI model that creates images based on text prompts. Stable Diffusion is trained on LAION-5B, a massive dataset compiled by scraping images from the internet.
Because the internet is flooded with images of naked or nearly nude women, along with pictures that reinforce sexist and racist stereotypes, this dataset is heavily biased toward those kinds of images.
The result, says Caliskan, is AI models that sexualize women regardless of whether they want to be depicted that way, and especially women from historically marginalized groups.
In fact, researchers Abeba Birhane, Vinay Uday Prabhu, and Emmanuel Kahembwe discovered that AI training datasets like the one used for Stable Diffusion are riddled with racist stereotypes, pornography, and explicit images of rape. Their findings were made possible because the LAION dataset is open-source. In contrast, other popular image-generating AIs like Google’s Imagen and OpenAI’s DALL-E are not open, but are built on similar datasets, suggesting this is a problem across the entire industry.
As I noted in a September report on Stable Diffusion’s launch, searching the model’s dataset for terms like “Asian” overwhelmingly brought up pornographic content.
Stability.AI, the company behind Stable Diffusion, released a new version of the model in late November. A spokesperson said the original model shipped with a safety filter that would have blocked these explicit outputs, but Lensa does not appear to have used it. Stable Diffusion 2.0 goes further and filters its training data by removing images that recur frequently, because the more often something is repeated, like sexualized images of Asian women, the stronger the association the model learns.
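Because Stable Diffusion is open source, it's easy to see how thin the layer between an app like Lensa and the underlying model can be. Below is a minimal sketch of calling the model through the Hugging Face diffusers library; the model ID and prompt are my own illustrative choices, not Lensa's actual setup, and the point is simply that the safety checker is a single optional argument an app can keep or drop.

```python
# Minimal sketch (not Lensa's code): generating an avatar-style image with
# Stable Diffusion via the open-source Hugging Face "diffusers" library.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative v1.x weights trained on LAION data
    torch_dtype=torch.float16,
).to("cuda")

# The pipeline ships with a built-in safety checker that blacks out images it
# flags as NSFW. Loading the pipeline with safety_checker=None disables that guardrail.
image = pipe("portrait of a person as an astronaut, digital art").images[0]
image.save("avatar.png")
```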
Caliskan has also studied CLIP (Contrastive Language-Image Pretraining), the system Stable Diffusion relies on to connect text prompts to images, and found it riddled with gender and racial biases. CLIP learns to match images to descriptive text captions, and Caliskan found that it associates women with sexual content while linking men to professional, career-related fields like medicine, science, and business.
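A rough sense of what such an audit looks like can be sketched with the open-source CLIP weights and the Hugging Face transformers library. The image path and candidate captions below are my own illustrative choices, not Caliskan's actual protocol:

```python
# Minimal sketch of probing CLIP's image-text associations.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("portrait.jpg")  # illustrative path: any portrait photo
prompts = [
    "a photo of a doctor",
    "a photo of a scientist",
    "a photo of a swimsuit model",
]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image scores how well each caption matches the image; softmax
# turns the scores into relative preferences among the candidate captions.
probs = outputs.logits_per_image.softmax(dim=1)
for prompt, p in zip(prompts, probs[0].tolist()):
    print(f"{prompt}: {p:.3f}")
```

Run over many portraits, systematic skews in which captions the model prefers for which faces are one way these associations become measurable.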
Interestingly, my Lensa avatars looked more realistic when I ran my photos through the app's filters for male subjects. I got avatars of myself dressed in clothes (!) and posed neutrally. In some images, I even wore a white coat that could have belonged to a chef or a doctor.
But it’s not just the training data that’s at fault. Companies behind these models and apps make active decisions about how to use the data, says Ryan Steed, a PhD student at Carnegie Mellon University who studies biases in image-generation algorithms.
“Someone has to choose the training data, decide to build the model, and determine whether or not to take steps to mitigate those biases,” he explains.
The developers behind the app have made a clear choice: male avatars get to wear space suits, while female avatars get cosmic G-strings and fairy wings.
A spokesperson for Prisma Labs, the company behind Lensa, acknowledges that “sporadic sexualization” of photos happens to people of all genders, but in different ways.
The company explains that because Stable Diffusion is trained on unfiltered data scraped from the internet, neither Prisma Labs nor Stability.AI, the creators of Stable Diffusion, “could consciously apply any representation biases or intentionally integrate conventional beauty standards.”
“The unfiltered, human-created online data introduced the model to existing biases,” the spokesperson says.
Despite this, the company claims it is actively working to address the issue.
In a blog post, Prisma Labs says it has adjusted the relationship between certain words and images to reduce biases, though the spokesperson didn't provide further details. Stability.AI has also made it more difficult to generate explicit content with Stable Diffusion, and the creators of the LAION dataset have added NSFW filters.
Lensa, the first app to achieve massive popularity using Stable Diffusion, is unlikely to be the last. While it may seem fun and harmless, the app can be misused to generate nonconsensual nude images of women based on their social media photos or even create explicit images of children. The harmful stereotypes and biases it perpetuates could have a profound impact on how women and girls view themselves and how they are perceived by others, Caliskan warns.
“In 1,000 years, when we look back, as we’re shaping the thumbprint of our society and culture through these images, is this how we want to represent women?” she asks.