There’s a somewhat concerning new trend going viral: people are using ChatGPT to figure out the location shown in pictures.
This week, OpenAI announced its newest AI models, o3 and o4-mini, both of which can uniquely “reason” through uploaded images. In practice, the models can crop, rotate, and zoom in on photos, even blurry and distorted ones, to analyze them thoroughly.
These image-analyzing capabilities, paired with the models’ ability to search the web, make for a potent location-finding tool. Users on X quickly discovered that o3, in particular, is quite good at deducing cities, landmarks, and even restaurants and bars from subtle visual clues.
Wow, nailed it and not even a tree in sight. pic.twitter.com/bVcoe1fQ0Z
— swax (@swax) April 17, 2025
In many cases, the models don’t appear to be drawing on “memories” of past ChatGPT conversations, or on EXIF data, the metadata attached to photos that can reveal details such as where the picture was taken.
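EXIF metadata is straightforward to check for programmatically, which is one way to verify a model isn’t simply reading location tags out of a file. As a rough illustration, here is a minimal, standard-library-only Python sketch that scans a JPEG byte stream for the APP1 segment that carries EXIF data; the function name and the parsing shortcuts are illustrative assumptions, not any particular library’s API:

```python
def has_exif(jpeg_bytes: bytes) -> bool:
    """Return True if a JPEG byte stream contains an EXIF APP1 segment.

    GPS coordinates, when present, live inside this EXIF block.
    This is a simplified scanner, not a full JPEG parser.
    """
    i = 2  # skip the SOI marker (0xFFD8) at the start of every JPEG
    while i + 4 <= len(jpeg_bytes) and jpeg_bytes[i] == 0xFF:
        marker = jpeg_bytes[i + 1]
        # Each segment stores its own length (including the 2 length bytes)
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        # APP1 (0xE1) segments beginning with the "Exif\0\0" header hold EXIF
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        if marker == 0xDA:  # start-of-scan: image data follows, no more metadata
            break
        i += 2 + length
    return False
```

A screenshot, notably, is a freshly rendered image with no EXIF carried over from the original photo, which is why screenshot-based lookups must lean on visual clues alone.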
X is filled with examples of users giving ChatGPT restaurant menus, neighborhood snaps, facades, and selfies, and instructing o3 to imagine it’s playing “GeoGuessr,” an online game that challenges players to guess locations from Google Street View images.
this is a fun ChatGPT o3 feature. geoguessr! pic.twitter.com/HrcMIxS8yD
— Jason Barnes (@vyrotek) April 17, 2025
It’s an obvious potential privacy issue. There’s nothing stopping a bad actor from screenshotting, say, a person’s Instagram Story and using ChatGPT to try to doxx them.
o3 is insane
I asked a friend of mine to give me a random photo
They gave me a random photo they took in a library
o3 knows it in 20 seconds and it’s right pic.twitter.com/0K8dXiFKOY— Yumi (@izyuuumi) April 17, 2025
Of course, this could be done even before the launch of o3 and o4-mini. TechCrunch ran a number of photos through o3 and an older model without image-reasoning capabilities, GPT-4o, to compare the models’ location-guessing abilities. Surprisingly, GPT-4o arrived at the same, correct answer as o3 more often than not, and took less time.
There was at least one instance during our brief testing when o3 found a place GPT-4o couldn’t. Given a picture of a purple, mounted rhino head in a dimly lit bar, o3 correctly answered that it was from a Williamsburg speakeasy, not, as GPT-4o guessed, a U.K. pub.
That’s not to suggest o3 is flawless in this regard. Several of our tests failed: o3 got stuck in a loop, unable to arrive at an answer it was reasonably confident about, or volunteered a wrong location. Users on X noted, too, that o3 can be fairly far off in its location deductions.
But the trend illustrates some of the emerging risks presented by more capable, so-called reasoning AI models. There appear to be few safeguards in place to prevent this sort of “reverse location lookup” in ChatGPT, and OpenAI, the company behind ChatGPT, doesn’t address the issue in its safety report for o3 and o4-mini.
We’ve reached out to OpenAI for comment. We’ll update this piece if they respond.