A new form of art has been on the rise in recent years: artificial intelligence (AI) image generation is reaching all corners of the internet. Users can find these images all over Reddit, X (formerly Twitter), Facebook, and other social media. While the technology is most likely here to stay, it can create problems for unwilling participants.
AI image generation is the process of using vast collections of existing images to train a model that produces new ones. The most common programs are Midjourney and Stable Diffusion. The user “prompts” the program with a string of keywords, and the AI generates an image in return.
One example is typing the words “tall woman walking a large dog and holding a large newspaper in the style of analog film”; the result is quite realistic.
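For readers curious how little effort this takes, below is a minimal sketch of that kind of prompt using the open-source Hugging Face diffusers library. The specific model checkpoint and settings here are illustrative assumptions, not the method any particular site or program uses.

# Minimal sketch: turning a keyword prompt into an image with Stable Diffusion.
# The checkpoint name below is one commonly used option and is an assumption.
import torch
from diffusers import StableDiffusionPipeline

# Download the model weights and load them in half precision to save memory.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # run on the GPU

prompt = ("tall woman walking a large dog and holding a large newspaper "
          "in the style of analog film")

# A single call converts the keyword string into a finished image.
image = pipe(prompt).images[0]
image.save("generated.png")

A script like this can run on an ordinary consumer graphics card, which is part of why the technology has spread so quickly.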
AI generation is not limited to still images, either. A “deepfake” is a specific form of generation that uses AI to swap one person’s face or likeness into existing video, so the edited footage appears to be a single, genuine recording.
The Issue
While this technology allows non-artists to explore a creative world they never had the chance to before, it can easily be used for nefarious purposes.
When browsing online forums, you can find countless stories of horrified people who discovered that others had created deepfake adult content of them. The victims are almost exclusively women, and the perpetrators are most often men they know. Unfortunately, this is the reality that many women face.
Anyone with a decent computer or phone can easily create these videos and photos, and they can become a form of violence against women through exploitation and blackmail.
Tom Scott is a content creator who specializes in educational and informational videos. In a 2018 video, he discusses the technology used to create these videos and images and the ethics involved. With a background in technology, he was able to create a face swap of himself and a friend.
What about Men?
Of course, this technology can be used to make deepfakes of men as well. One popular example emerged a few years ago, when users began creating deepfakes of Nicolas Cage in various movie scenes as a joke, with humorous results. While these tools can be used to harm men, they disproportionately affect women.
There are entire forums and sites dedicated solely to making deepfake sexual content of women. These mainly involve female celebrities, but everyday women are increasingly being targeted in the same manner.
Shaky Ethics
Some argue that when these women publicly share ordinary photos of themselves, they are consenting to any use of their images. Many, however, disagree. Kristen Zaleski, Director of Forensic Mental Health at the Keck Human Rights Clinic at the University of Southern California, has dealt with multiple such cases, investigating men who deepfake women’s faces onto pornographic images or videos, costing the women their jobs.
One example she mentioned was a public school teacher whose Facebook photos were used to generate adult content, causing her to be fired after parents found the images. After the news became public, the website hosting the images nearly crashed from increased traffic.
Unfortunately, adults are not the only targets of this form of violence. In another recent case, a child psychiatrist was arrested last year for secretly recording his patients and creating explicit images of them.
Zaleski also says, “People viewing explicit images of you without your consent — whether those images are real or fake — is a form of sexual violence.”
What Now?
To prevent further gross misuse, regulators need to step in and set rules for the companies that host these services.
While regulation is a difficult task, some laws are already in place to help protect people against digital sexual extortion.
In March 2022, the Violence Against Women Act Reauthorization Act of 2022 banned non-consensual “revenge porn” at the federal level, and similar protections exist in over 40 states. With amendments, this could extend to non-consensual image generation as well.
Image Spotting 101
Because this technology is so new, positive changes in the law won’t arrive anytime soon. In the meantime, the best way to protect yourself is to learn to spot fakes.
The biggest giveaways are small details such as text, fingers, and hair. Text in an AI-generated image is almost never legible and usually resembles gibberish.
When looking at hands, check the number and placement of fingers. AI-generated content may feature too many or too few fingers on each hand, or fingers in awkward positions.
Spotting AI-generated content is a skill worth honing to protect yourself and others from becoming victims.