How ChatGPT is turning casual snapshots into privacy nightmares


Over the last few weeks, OpenAI's new visual reasoning models have taken center stage in a viral trend known as "reverse location guessing." Think of it as GeoGuessr meets CSI, where users upload casual photos such as blurry bar shots, street corners, or even selfies, and then ask ChatGPT to figure out where the photos were taken.

No GPS tags. No EXIF data. Just visual context. In many cases, ChatGPT delivers shockingly accurate answers. For some, it's a clever way to mess with friends in the WhatsApp group. For others, it's a wake-up call about how exposed we are, even when we think we've stripped out the clues.

How it works


The latest trend rides on the back of OpenAI's o3 and o4-mini models, which quietly added the ability to "reason" through images. They don't just look at objects and colors. Instead, they crop, zoom, rotate, and draw inferences from architecture, shop signage, menus, street markings, and even graffiti styles. They can also kick off web searches to verify what they think they're seeing.

It's the closest thing yet to giving AI street smarts. Many users quickly began prompting ChatGPT to "pretend it's playing GeoGuessr," the popular game that drops players in a random spot on Google Street View and challenges them to guess the location. Except in this case, the AI works with casual images, no coordinates, no data, just the raw pixels.

[Image: GeoGuessr rendered as a traffic sign. By Cybernews.]

Upload a picture of your lunch at a café and ask it where you are. Sometimes it'll just say "London." Sometimes it'll say, "Battersea, London, near the old Battersea Power Station." Or it'll name the exact restaurant.

One reason this has taken off is that it's a fun trick to use in group chats. Send ChatGPT a photo of your friend at brunch and ask it where the picture was taken. When it spits out the name of the actual venue or even just the right neighborhood, it can be hilarious and slightly spooky in equal measure.

You get to watch their reaction in real-time. People do this with vacation photos, nights out, and obscure local corners. It's become a sort of AI-powered parlor trick: "Geoguess this." And the punchline often lands. But the fun also reveals something more profound. You no longer need to be a digital forensics expert to trace a location. You just need the latest version of ChatGPT.

The doxing risk hiding in your newsfeed


The same features that make this latest AI trend fun also make it easy to misuse. Say you post a photo to Instagram. Someone could screenshot it, upload it to ChatGPT, and get a general location, or one specific enough to narrow it down to a block, café, or building. Now apply that to someone sharing an image from their home or workplace, or livestreaming publicly.

The model doesn't use metadata. It doesn't need GPS. It doesn't pull from previous chats. It's just looking at what's in the image. That means even photos you thought were "safe" are up for analysis.
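Ironically, the metadata everyone worries about is the easy part to check. If you want to verify what your own photos actually carry, a few lines of Python with the Pillow imaging library will dump any embedded GPS tags. This is a minimal sketch for self-auditing, not part of any ChatGPT workflow; the file name photo.jpg is a hypothetical placeholder.

```python
# Inspect a photo's EXIF GPS tags with Pillow (pip install Pillow).
# 'photo.jpg' is a hypothetical local file used for illustration.
from PIL import Image, ExifTags

GPS_IFD_TAG = 0x8825  # standard EXIF pointer to the GPS sub-directory

img = Image.open("photo.jpg")
exif = img.getexif()
gps_ifd = exif.get_ifd(GPS_IFD_TAG)

if not gps_ifd:
    # No coordinates embedded: any geolocation must come from pixels alone.
    print("No GPS metadata found.")
else:
    # Translate numeric tag IDs (e.g. 2 -> GPSLatitude) into readable names.
    for tag_id, value in gps_ifd.items():
        name = ExifTags.GPSTAGS.get(tag_id, tag_id)
        print(f"{name}: {value}")
```

An empty result is exactly the point of the trend: even a clean file still leaves the visual clues in the frame.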

Doxing (or doxxing) is the practice of searching for and publishing private or identifying information about an individual online with malicious intent. All it takes is one image with enough environmental detail, and someone could guess where you live, where you go to school, or where you spend your afternoons.

Even if you strip GPS tags and do not reveal personal details, AI can fill in the blanks at an alarming speed. The potential threats are not hypothetical. We have already seen how stalkers have abused the capabilities of AirTags and the rise of stalkerware in abusive relationships. The thought of an ex-partner monitoring every photo you share online and knowing your exact location could quickly become the stuff of nightmares.

OpenAI's response

After this trend picked up steam, OpenAI released a statement to TechCrunch, trying to thread the needle between capability and caution. The company says the models were designed to help with accessibility, research, and identifying locations in emergencies.

OpenAI claims there are built-in protections and that ChatGPT is supposed to refuse requests for private or sensitive information, avoid placing individuals in images, and prevent misuse through active monitoring. But it's unclear how adequate those guardrails are.

Users have reported instances where the AI shut down a request, especially if it suspected the image came from Google Street View or was likely to involve a private residence. But other times, it went full detective mode without pause.

[Image: Google Maps logo in front of a private home. By Cybernews.]

One major issue is that OpenAI doesn't explicitly mention reverse photo geolocation in its safety documentation. That's raised eyebrows among researchers, especially since previous AI models were often delayed or gated behind access walls to allow time for safety testing.

According to a report in the Financial Times, the turnaround time for vetting new models has shrunk dramatically. Earlier models were tested over months. The newest ones? Sometimes, just a few days.


What ChatGPT can and can't do

You know that blurry photo you took outside your favorite café? Or the casual snap from a friend's birthday that you posted to Stories? With these new tools, someone can zoom in on the signage behind you, read the font on the window, check the style of the brickwork, and figure out where you are.

Even when people tried to "trick" the model with upside-down or partial images, it could often rotate and analyze the content accurately. But before we get too carried away, the current model has limits. It can't access EXIF metadata. It can't run a reverse image search like Google Images or TinEye. And it doesn't hook into APIs like Google Maps or Street View.

What it can do, however, is process a visual like a seasoned street detective. It analyzes fonts on signs, styles of fencing, bus stop design, license plate shapes (not numbers), and even weather patterns. Users have uploaded images from remote areas, and the model has still managed to get within a few miles of the correct spot.

It's not perfect. It sometimes gets stuck. It occasionally spits out wrong guesses. But even the misses are often eerily close.

Should you be worried?


It depends on how you use social media and how much you care about your digital footprint. The core issue isn't just that AI can do this; it's that almost nobody expected it to be this good, this fast. While OpenAI says the tech is meant to support positive use cases like emergency response, those aren't the scenarios gaining traction online.

What's gaining traction is viral "geo guess this" videos and geolocation experiments that straddle the line between clever and creepy. Privacy experts warn that tech like this could be used to pinpoint someone's general location without their knowledge. Add in screen recordings, livestreams, or images scraped from the web, and things get murky fast.

How to protect yourself

There's no single fix, but increasing your awareness of the risks posed by your digital footprint will go a long way. Being streetwise about what you share is a great place to start: delay posting until after you've left a location, and crop or blur background features in images.

Every photo online could reveal more about you than you think. The Uber-paranoid can use photo editing tools to remove or mask context clues or avoid identifiable buildings or signage. But most people reading this will think it's too much of a hassle or will hurt the authenticity of their post.
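If you'd rather script the masking than do it by hand, the same Pillow library can blur a chosen region, such as a street sign or shopfront, before you post. Again, this is a rough sketch under stated assumptions: the file names and the bounding-box coordinates are hypothetical and would need to match your own image.

```python
# Blur a region of a photo (e.g. a street sign) before posting.
# File names and coordinates below are hypothetical placeholders.
from PIL import Image, ImageFilter

img = Image.open("original.jpg")

# Bounding box of the area to mask: (left, top, right, bottom) in pixels.
box = (420, 150, 760, 330)

# Cut out the region, blur it heavily, and paste it back in place.
region = img.crop(box)
region = region.filter(ImageFilter.GaussianBlur(radius=18))
img.paste(region, box)

img.save("masked.jpg")
```

A side benefit of re-saving through an editor or a script like this is that most tools also drop the original file's metadata, though the point here is removing the visual clues the model actually reads.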

Finding a middle ground also means accepting that this isn't the first time AI has raised privacy concerns, and it won't be the last. But what makes this different is how ordinary the use case is. No hacking skills. No technical background. Just a user, a photo, and a prompt.

If we zoom out from the hype of the latest viral trend, this is a timely reminder of how powerful these models have become, and how the line between "cool new feature" and "privacy risk" is thinner than ever.

So the next time you share a selfie with the group chat, ask yourself what ChatGPT might see that you don't.
