Trying to use Gemini’s image generation for background edits, but it’s not quite Photoshop
I was looking into ways to change the background of photos and thought, ‘Maybe Gemini can do this easily.’ I’d seen articles on AI image generation suggesting it could replace Photoshop for simple edits like this. I mostly wanted to swap the backgrounds of some travel photos for something more… exciting, I guess? Or at least not the boring hotel wall.
So I started playing around with Gemini. The interface is pretty straightforward: you upload a photo and give it a prompt. I tried something like, “Change the background of this photo to a tropical beach.” And it did… sort of. The new background was there, but the person in the photo looked a bit… off. The lighting didn’t match, and my face looked slightly blurry, almost painted on. It wasn’t a seamless blend at all. It felt more like it had slapped a new picture behind me.
Then I tried to be more specific. I thought maybe I should give it more details: “Change the background to a white sandy beach with palm trees and clear blue water, with the sun setting.” That’s when it got a bit more interesting, but still not quite right. The beach part looked good, but the way I interacted with the environment felt weird. In one version, my hand was somehow phasing through a palm tree. That’s definitely not what I was going for. It made me realize that AI generators, or at least this version of Gemini, are good at creating new images in a given style, but asking them to edit an existing photo, especially to replace a whole background, is a different beast.
I remember reading somewhere that you could say, “referencing this photo’s style, create a new image” instead of asking for edits like “change the background” or “change the shirt color.” I tried that. I uploaded a photo of myself in a park and said, “Create a new image in this style, but with me standing in front of the Eiffel Tower at night.” And it produced a really cool, artistic rendition. The Eiffel Tower was there, the lighting was dramatic. It felt like a new piece of art, not an edited version of my original photo. That’s where I think Gemini is stronger – generating entirely new concepts rather than making precise, realistic edits to existing ones.
It feels like a tool for inspiration, not for practical tasks like fixing passport photos or making a vacation snap look professionally retouched. For that, I think you still need something like Photoshop. I looked up how much Photoshop costs, and it’s a subscription, around ₩10,000–₩20,000 a month depending on the plan, which is a lot if you only need it for occasional background changes. There are other apps, like Adobe Express, that might be simpler, but I haven’t tried them for this specific purpose. Maybe those are better for just swapping backgrounds without making the person look like they’re part of a collage.
This whole experience made me appreciate the complexity of photo editing. I always thought changing a background was a basic Photoshop task, and it is, but doing it well requires layers, masks, and careful adjustments to light and shadow. AI can generate amazing visuals, but when it comes to taking a photo you already have and subtly altering it, especially something as prominent as a background, it feels like it’s still a bit behind. I ended up just leaving my travel photos as they were, hotel wall and all. It’s a memory, right?