When Is It Okay to Use AI-Generated Images for Your Photography?

I can already see it. Many of you are clenching your fists and saying, "Never!" Certainly, there are many cases in which it's not acceptable to use AI-generated images for your photography. First, I'm going to discuss several instances of when it might be okay to use AI-generated images for photography. Then we'll examine the not-so-great applications.

Acceptable Uses of AI-Generated Images

The original night photo of Joshua Tree National Park before I used Adobe Generative Fill to create the AI fantasy beach scene.

I can think of three uses in which I believe it's okay to use some AI-generated images in your photography.

1. Printing Canvas Wraps

I wanted to make several prints for a client who requested my photo as a canvas wrap in a specific dimension. When I uploaded my image to the online printing service, I found that it was cropping part of my photograph. I could have extended the edges with black, but that would have left the sides of the wrap black, and the client didn't want that. What to do?

To solve this problem, I used the Adobe Photoshop Generative Expand feature. This generated borders that appeared to be a genuine part of the photo. However, they would only be visible on the sides of the frame, not as the main photo. Later, the client decided she also wanted them framed, so they were even less visible!

2. Eliminating Distracting Elements From Photographic Art

Sometimes, we make art with our photographs. Not all photos are photojournalistic or historical in nature, of course. If I have a distracting tree branch or streetlight in my photo and I am creating art, I am going to get rid of it.

Historically, I've cropped, and I've also used the Clone Stamp tool, Content-Aware Fill, or the Remove Tool. Most of the time, this works. But sometimes Generative Fill works better, leaving fewer artifacts. The result is a photo that used a small amount of AI to analyze the image and remove the distraction. Is this unethical? Is it more unethical than using Content-Aware Fill? Is it more unethical than a portrait photographer removing a zit from someone's nose?

In an ideal world, I would have positioned the camera in such a way that the tree branch or the streetlight wasn't in the photo. But we all know that this isn't always physically possible.

I believe just about all of us can agree that if we are creating photos that are photojournalistic or historical in nature, we should not remove anything.

3. Fixing Perspective 

The reason for creating AI-generated borders here is conceptually similar to the first reason: we are expanding the borders to achieve something and not attempting to create deception.

Occasionally, when I am photographing, I can't avoid keystoning (distortion that causes parallel lines to appear to converge or diverge) of a building, and I cannot physically back up far enough to shoot wide and leave room for correction. When we correct keystoning in post-processing, we lose too much of the edges.

A simple fix for this is to expand the edges. That's right: Generative Expand again. While I can use Content-Aware Fill, it rarely looks as good as Generative Expand, and it's considerably more time-consuming to achieve a great result.

I'll expand the edges, then address the keystoning. It's okay if I lose the edges. In fact, what usually happens is that I end up having most or all of the AI-generated edges disappear. Sure, there may be some in the corners. But so often, it's much easier and better-looking than Content Aware Fill or other methods.
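The expand-then-correct idea can be sketched numerically. This is only an illustration, not the author's Photoshop workflow: the plain edge padding stands in for Generative Expand, the gray stand-in image and the corner coordinates are made up, and the perspective correction is done with a homography computed from four point pairs. The point it demonstrates is that the expanded canvas absorbs the warp, so the original photo area isn't cropped when the verticals are straightened.

```python
import numpy as np

def homography(src, dst):
    """Solve the 8-dof homography mapping 4 src points to 4 dst points."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp(image, H, out_h, out_w, fill=0):
    """Inverse-map each output pixel through H; nearest-neighbor sampling."""
    Hinv = np.linalg.inv(H)
    us, vs = np.meshgrid(np.arange(out_w), np.arange(out_h))
    pts = np.stack([us.ravel(), vs.ravel(), np.ones(us.size)])
    sx, sy, sw = Hinv @ pts
    sx = np.rint(sx / sw).astype(int)
    sy = np.rint(sy / sw).astype(int)
    inside = (0 <= sx) & (sx < image.shape[1]) & (0 <= sy) & (sy < image.shape[0])
    out = np.full((out_h, out_w), fill, dtype=image.dtype)
    out[vs.ravel()[inside], us.ravel()[inside]] = image[sy[inside], sx[inside]]
    return out

h, w, pad = 400, 600, 80                 # photo size and expansion border
photo = np.full((h, w), 128, np.uint8)   # gray stand-in for the photo
# "Expand" the canvas first (plain padding here; Generative Expand in the article).
expanded = np.pad(photo, pad, mode="edge")
# Keystoned building: the top corners are pinched inward; map them back out.
src = [(pad + 60, pad), (pad + w - 60, pad), (pad + w, pad + h), (pad, pad + h)]
dst = [(pad, pad), (pad + w, pad), (pad + w, pad + h), (pad, pad + h)]
H = homography(src, dst)
corrected = warp(expanded, H, expanded.shape[0], expanded.shape[1])
print(corrected.shape)  # (560, 760): same expanded canvas, verticals straightened
```

Without the padding step, the outward push of the top corners would fall off the frame and force a crop; with it, only the synthetic border pixels are sacrificed.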

As mentioned before, the best solution is to shoot as wide as you can. But in real life, we all know that this isn't always possible. Give yourself grace and use other technology at your disposal (again, for photographic art, not for photojournalism or historical photographs).

No, It's Still Not Okay!

If you believe the above should not be done, you're really going to gnash your teeth when considering "deep fakes" and other attempts at blatant deception. Here are a few, which, in my opinion, are never okay.

Deception

In an informal January 2023 survey by the Yale Daily News, Yale's student newspaper, 59% of respondents thought this photo was an AI-generated image. It is, of course, a real photo: a night photo of a giant sculpture in Borrego Springs with star trails.

Giant rattledragon serpent sculpture created by Ricardo Breceda, Borrego Springs. The photo shows the apparent movement of stars.

I think just about anyone would agree that some are attempting to deceive by using AI-generated images. This has eroded trust. For example, some people no longer believe my night photos are real. But far, far worse, misinformation and deception have eroded public trust. Disinformation campaigns reign supreme. If people no longer believe what they see or what they read, where does that leave us as a society?

Theft

Photographers and other artists often argue that since AI is trained on existing work, it is mimicking or stealing other people's work without their permission. Adobe attempts to work around the plagiarism issue by only having its model learn from Adobe Stock images and public domain content without copyright restrictions. And some argue that we all learn and synthesize different art to some degree, and that it's not much different from what AI does. But that argument leaves a lot of us feeling uneasy at best. It might be one thing to be inspired by other people's art, and quite another to ingest it and regurgitate it without purpose or feeling.

Loss of Revenue and Job Security

There's great debate over how much loss of revenue AI-generated images may cause. I doubt it's done any favors for the microstock industry.

Energy Consumption

A night photo fisheye view of a Westinghouse AC to DC electric converter inside a wooden enclosure, lit with a handheld light during the exposure.

You could also make a successful case for not using AI because it uses an incredible amount of electricity and water. The International Energy Agency (IEA) projects that electricity demand from data centers worldwide is set to double from its current consumption to 945 terawatt-hours by 2030. This amount is slightly more than the entire electricity consumption of Japan today.

For this reason, I have cut back noticeably on my use of ChatGPT, AI-generated images, and other similar tools. But because AI is implemented everywhere now, it's considerably more challenging to avoid using it, even if we are making a phone call to our dentist or performing a search for anything online.

Where Should the Dividing Line Be?

A strong guideline is that if you feel that any reasonable viewer of your photography would feel deceived if you told them how you had created your photograph, then you've crossed the line.

Another version of the half photo/half AI-generated image I made for the header. This one doesn't have the dinosaur, robots, UFO, or mushroom cloud. I thought I'd go with the more absurd one for the article instead of Joshua Tree National Park mysteriously turning into the Cape Cod seashore.

Apart from that, it's a matter of personal comfort. I feel comfortable using AI-generated images in my photography occasionally, as described above. But I respect that many would not. Since I've recently discovered how much energy AI consumes, I will be extremely mindful of how I use it.

Ken is a night photographer with four books of night photography of abandoned locales. His images have been in National Geographic Books, Omni, LA Times, Westways, & elsewhere. Ken had exhibits at La Quinta Museum & Hi-Desert Nature Museum in CA. He loves teaching creative weirdos about night photography in his workshops.


I really shifted on this a couple of years ago, maybe less. I can't remember when I saw the video; it was a Thomas Heaton video about leaving stuff in, and I've actually gone back to just leaving stuff in, which you could argue is no different from adjusting a light or a shadow or anything else we adjust in our photos. I recently shot a wedding on a property that had power lines in the background. Participants in the photos were sitting on a beautiful old tractor with a sunset in the background, but there were power lines. After editing the photos, I asked the clients whether they wanted the power lines taken out, and surprisingly they said no, leave them in, that's part of the story. She then went on to tell me that it was one of the last farms in the area to get connected power, so it actually fits into the story of the image as well. Sometimes we take out stuff when we don't need to. We need to be really careful about what we take out, as that can actually change the structure of the story. It might only be a small thing, but it does change it. I've now got to the point where I'm basically leaving everything in, and it does not detract from my photography. If your images are good, then you're not going to have a problem. It's just a different way of thinking, more of a storytelling emphasis than making the photo perfect. Good article.

This is such a great, nuanced take on when we should use it, and I really appreciate that. Great points. Just because we CAN doesn't always mean we SHOULD. And I'm just as guilty of that as anyone, sometimes fiddling about in post-processing, attempting to remove some tiny element of the photo only to find that no one cares, and no one notices. :D

So my simple premise now is: is what's in the background part of the story? You need to have a think about that. I might see if I can find the video I just watched and post it here, if that's okay. It was an eye-opener for me, and I really started to think about it.

I think it should be okay to post a link or something like that, thanks.

I'll see if I can find it, mate. I think it was Thomas Heaton. I can't remember but it really resonated with me and I've since taken that process of not deleting stuff out as it's part of the story.

In this particular photo, you can see there is a power line at the back; however, after talking to the participants, they wanted the power lines left in as part of the story, as there was a story to them. It's always really important to just check. Over the last few years, I've definitely left stuff in.

Thanks for sharing that, Paul!

That looks pretty great, Nev.

I must confess that lately, I have been removing objects more than I have in the past (which may not be saying much since I frequently didn't remove anything from my photos until recently). Part of it is because it's so easy to do now. I still want to strike that balance and not be too heavy-handed with it, which I don't believe I am, but sometimes, hindsight is 20/20, haha!

comment deleted... I did not read it correctly. I apologize.

You have missed the point 100% completely. This special spot was in front of their garden, and the piece of equipment they are actually sitting on is not movable. It is also a family tradition that people get married in that spot. The wires in the background were the wires that connected the farm; they were the last house in the region to get a powered farm, so there's a story behind that, again. You're clearly not a storyteller; you are clearly a very technical photographer, and that's okay. I work on the story. I like to know what it's about. They loved the photos, by the way, and when people point out the power lines, I tell them the story. It is actually part of the photo, but you wouldn't have known that. This is the difference between a storyteller and a technically driven photographer. I would suggest that you go and watch Thomas Heaton's video that was posted above, and your view might shift. It certainly shifted mine, and I'm leaving stuff in now. The little plough they are sitting on is, again, part of the story, so there was no other option. And I'll finish by saying that when I asked them about the power lines, they said leave them in.

comment deleted...

I like how you highlighted a real problem that is happening now: when you take a good modern photograph that takes effort, like a long exposure, people's natural assumption on Facebook or other social media is that it must be AI.

Thanks. It's that erosion of trust. And it's with photography, absolutely, but it's with so many other things as well. What happens when you have an entire society of people who mistrust what they see, hear, and read?

What happens is that facts become confused with fiction, and people shape their opinions around what they want to believe instead of around an agreed set of facts. Never mind agreeing on a conclusion or a path forward; we can't even agree on the facts any longer. Words are just as powerful as pictures in their capacity to deceive. It makes it really hard to reach consensus on so many issues, from the wars in the Middle East and Ukraine to issues facing your local school boards. All sides try to communicate a convincing story, even if it's not entirely truthful. People find the facts to support their beliefs. The natural result appears to be partisanship and gridlock.

Ed, I feel like I could have written a lot of that. People want affirmation, not information. But I do feel that misinformation campaigns have really kicked this to the curb even more, often using social media to fan the flames. So yes, as you mention, we have a harder time agreeing on facts than ever before. The few things that people seem to agree upon are sports scores. :D

Honesty, trust, truth in photography, whatever you want to call it, took a huge hit when Photoshop was introduced over 30 years ago. I exhibited my photos in a gallery about 15 years ago for a short time. I doubt there was ever one First Friday art gallery opening that I didn't have to field the question: "Did it really look like that, or did you Photoshop it?" Photoshop became a verb. So AI brings nothing new to the table. But I agree that photo manipulation has become so ubiquitous that people nowadays assume an extraordinary photo is fake. Unfortunately there's no going back. Photography will never again be considered genuinely honest.

Photography absolutely took a hit with Photoshop, and yes, it became a verb. Now, while it's still used as a verb to indicate something fake (even though Photoshop does all sorts of post-processing and graphic design), what people often say is, "That's AI." Perhaps people will soon use "AI" as a verb.

People used to generally believe that most photographs were real, but no more.

Thanks for the comments.

People have asked me if this or that was Photoshopped, but when I tell them I use primarily black-and-white film, that settles the issue. What the public generally misunderstands is that in order to get any photograph onto the internet, it has to be converted to a digital form, so yes, every image is "Photoshopped." The question becomes, how much is too much? For me, the answer is that I won't do anything in Photoshop to an image that I couldn't do in the darkroom, which I still operate.

Of course, the thing is that quite a bit can be done in a darkroom, including HDR images and blends and things like that. "Photoshopped" - and now "AI" - means "fake" when most people use it in that sense now, and when that is applied to a real photo, it's typically out of not understanding how one can create photos that might look a little different than photos that they are used to seeing.

The last time I showed a group of my photographic prints in a public setting, I had inkjet images mixed in with some silver-based images. The question I often got was, are those digital, or are they real photographs? And as unexplainable as it may seem, the silver prints got a lot more attention, and I actually sold several of them. That was a nice show for me, in that I sold almost $20,000. If shows were all that lucrative, I would do them again.

Interesting! You never know what people respond to from photos. Great, successful show, congratulations!

I've been shooting street photography for a project over the last 15 years and there's still another 10 years planned before I do anything with it.

Now, there are some who argue that if an image is "editorial," then not a single pixel should ever be changed. And that is very true of images in news; otherwise, we'd never be quite sure what was real. But is that strict rule still essential for other works where we are recording life around us? I'd argue there's scope for a more relaxed approach.

Because let's face it, we can easily skew a message or story by merely moving the camera or by cropping.

I've always been happy to clone out details that I consider distracting to the story of the image. For me, the integrity belongs to the main story or thrust of the image, and not to a branch or element that is unsightly.

Of course, now we have AI and its ability to really modify, add to, or subtract from an image, and so I've found myself thinking about how to use it while maintaining my integrity (particularly with the ease of implementing AI to change an image).

My project "London Life" aims to show the people of London through 25-30 years, highlighting the character of individuals and how we live.

So I've set myself a few rules:

I won't add anything to an image - not even a little bit. If it wasn't there before, it won't be there after.
I won't change anything that alters the storyline or significantly changes the perception of what is happening.
I'm happy to remove items that spoil the aesthetics - as long as they don't change the story.
I'm happy to remove items that spoil the aesthetics - as long as it doesn't change the environment too much.
I don't change skies... This would change the environment too much.
I won't change people's expressions or faces.
When tiny details might look like print errors, I'll modify them. (Like flyaway hairs or small details on pavements)

But then AI has opened up the possibility of removing people or making bigger changes... :) So...

I'll never add people to scenes.
I won't extend scenes using AI.
It's ok to remove people from scenes if merely waiting an extra moment would have had them walk through shot.
It's ok to remove people (as minimally as possible) if they have no bearing on the action, vibe or story.
It is ok to remove people for aesthetics - if they are small / incidental and might have walked away if I'd waited.
AI is not to be used if it means the sense of story / location / vibe / accuracy is being changed.

My particular brand of street work is to show future generations how we lived. So for me, transient elements that don't impact the story of a shot are not critical. I'll sacrifice absolute accuracy of that 1/500 sec if it means my image tells the story of the moment more powerfully or more cleanly, without distractions.

And in many cases it is because the frame I wanted has an element I didn't, whereas the next frame loses the unwanted element but also loses the moment that tells the story. (We've all had that slap-forehead moment...)

I realise there are camps where never a pixel should be changed, and there are camps where even dodging and burning is frowned upon. (I always wonder if they insist on women removing makeup for portraits and never advise on how to pose, or if they use light to show an image in a technically unreal way?)

But I rest easy knowing I've captured life, even if I employ time-consuming "by hand" methods or use AI to fill in the gaps of what I want to say or show. We edit at the point of capture, and we edit by which images we choose to show. After that, we've already been selective in how we tell a story. :)

Your reasoning seems logical, Lee. That all makes sense to me.

I used to do mostly travel photography, and I would leave everything in, just cropping things out and nothing more. With night photography, I am creating art (hopefully), so more recently, I have been taking out distracting elements a bit more, although by most people's standards, it probably still wouldn't be considered "heavy-handed." But you know, discussing AI is quite a trigger, and there's some who get angry at any usage, especially from an ethical angle, which I do appreciate and touch upon in the article.

Having given this a bit of thought, it seems to me that anyone who has used Photoshop much at all has used AI. For instance, if you have used Photoshop to create a panoramic photograph, that's AI. If you have used the Spot Healing Brush or the Healing Brush, you have used AI, since those tools rely on predictive artificial intelligence to do what they do. I make my images, usually from a film scan, and there are always dust spots; I use the Spot Healing Brush and other healing tools to "repair" the minor damage that film inevitably gets in handling. The issue isn't whether you use AI, since that is almost inevitable unless the actual raw image or negative scan is presented. The issue is how much AI is permissible, and that really depends on what you are presenting. The caveat for me is that if an image is more than 50% or so AI, it needs to be noted. That threshold may actually be much higher for me, but I need to consider where my barriers are. I would never present an image that was more than a negligible amount of AI without noting it, though.

Oops (deleted), I thought this reply was on a different article.