As many of you know, I’m a huge proponent of AI. I’ve used it to create AI-generated fact websites, jingles for video intros, real estate listing videos, recipe websites, and even a video game (that’ll be out soon). It’s become a core part of my toolkit, streamlining my work cycles and increasing my productivity. AI is like an assistant who works 24/7 and is both smarter and dumber than me.
However, I’ve noticed the broad abuse of AI, particularly with AI-generated images shared in Facebook groups. These images might show a mother seal with her pups, captioned with “Proof God is good. Comment and share if you agree.” Despite thousands of comments, few people notice the seal has 24 fingers. This highlights the danger: the malicious use of AI can go completely unnoticed. And there is one category of AI-generated imagery whose output I’ve recently seen spike enormously, and its potential future is alarming.
Understanding AI Image Generation
AI image generation leverages algorithms, particularly deep learning models, to create images that can often appear indistinguishable from real photographs. Tools like DALL-E, Midjourney, Meta AI, and Stable Diffusion are trained on vast datasets to learn how to generate realistic images from textual descriptions. While these tools have incredible potential for creative and practical applications, their misuse has become a growing concern, specifically because such a large portion of our population can’t discern the difference between a fake image and a real one. It takes a keen eye to tell the difference, but there are clues.
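To make this concrete, here is a minimal sketch of how text-to-image generation works in practice, using Stable Diffusion through Hugging Face’s diffusers library. The model ID and prompt are illustrative, and a CUDA-capable GPU is assumed:

```python
# Minimal sketch: generating an image from a text prompt with a latent
# diffusion model via Hugging Face's diffusers library.
# The model ID and prompt are illustrative; a CUDA GPU is assumed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

prompt = "a mother seal with her pups on a beach, photorealistic"
image = pipe(prompt).images[0]  # a PIL.Image
image.save("seal.png")
```

That’s the whole workflow: a sentence in, a photorealistic image out, in seconds and at essentially no cost. It’s why these pages can flood feeds with new “photos” every day.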
The Problem Goes Beyond Stolen Valor
Stolen Valor is a pretty despicable thing; we’ve all seen the videos of people dressed up like US soldiers to get free things or benefits. Doing it for “Likes” on Facebook is just as bad. But there is a much deeper concern that goes far beyond stolen valor. These groups are building audiences. They are creating groups of people who they know can’t tell the difference between AI-generated and authentic content. The engagement on some of these pages is astonishing, and in this day and age, those numbers are not organic; they are being paid for. To add to it, the Facebook page “Gina B. Hernandez,” which creates AI photos of US soldiers, is based in Kosovo. Why would a page outside of the US invest in building a US-based audience of people who are unable to tell the difference between AI and authentic?
With a US election coming up, and these posts catering to a more conservative mindset, it makes sense that these groups are building an audience now and will begin posting politically charged content in the very near future, with the goal of swaying opinions through AI-generated content.
Here are just a few examples I have found of pages using fake images to generate engagement.
How Can This Problem Be Solved?
There are a few things we can do. Some are as simple as keeping an eye on the social media comments of your friends; others should be the responsibility of the platforms themselves. But it’s important that we do something.
Platforms: For starters, there are tools that can estimate whether an image has been generated by AI. Facebook, Twitter, etc., could run that kind of check on every image posted to their platforms and flag the ones that appear to be AI-generated.
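To illustrate what a platform-side check could look like, here is a hedged sketch using an open community-built AI-image classifier from the Hugging Face Hub. The model ID, label name, and threshold are assumptions (one of several such detectors), and these classifiers are probabilistic: a high score should trigger a label for review, not stand as proof.

```python
# Hedged sketch: scoring an uploaded image with a community AI-image
# detector. The model ID and the "artificial" label are assumptions
# (one of several such detectors on the Hugging Face Hub); scores are
# probabilistic and false positives/negatives happen.
from transformers import pipeline

detector = pipeline("image-classification", model="umm-maybe/AI-image-detector")

def flag_if_ai_generated(image_path: str, threshold: float = 0.8) -> bool:
    """Return True when the detector's AI-generated score exceeds the threshold."""
    results = detector(image_path)  # list of {'label': ..., 'score': ...}
    scores = {r["label"]: r["score"] for r in results}
    return scores.get("artificial", 0.0) >= threshold

if flag_if_ai_generated("uploaded_post.jpg"):
    print("Flag: likely AI-generated — show a label next to the post.")
```

Even an imperfect check like this, surfaced as a small “likely AI-generated” badge, would give the people scrolling past that seal photo a fighting chance.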
Let People Know: It seems to me that the largest group of people who don’t realize they are looking at AI is people in their 60s and up. Talk to them. When you see they have commented on a clearly fake post, let them know. Don’t be a jerk about it; it’s genuinely hard to tell. But make sure they know. In an AI world, you can no longer blindly believe what you see; everyone needs to take a closer look.

[Annotated example images omitted. The captions pointed out the telltale flaw in each: garbled letters, misspelled words, an object sitting in the road, and details that only show up when you zoom in.]
