Google’s new ‘superhuman’ AI can tell you where any photo was taken

Google’s new artificial intelligence loves pictures and has a great memory. The new system, named PlaNet, is designed to identify where an image was taken by scanning it for visual clues and cross-referencing them against the photos it has already seen.

According to Discovery, PlaNet uses a deep-learning neural network, which means that the more images it sees, the smarter it gets. As it attempts to identify the origins of photos, or “geotag” them, each attempt sharpens its ability to place the next one.

In its tests, Google found that PlaNet geotags photos more accurately than any human being, and that it also outperforms every other geotagging program it was measured against.

According to the research team, “PlaNet is able to localize 3.6% of the images at street-level accuracy and 10.1% at city-level accuracy. 28.4% of the photos are correctly localized at country level and 48.0% at continent level.”

Google devised a way of testing PlaNet’s effectiveness against human beings with a game called GeoGuessr. It pitted PlaNet against well-traveled people, with each side guessing the location of random Street View photos.

“PlaNet won 28 of the 50 rounds with a median localization error of 1131.7 km, while the median human localization error was 2320.75 km,” said the research team.
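These figures boil down to great-circle distances between predicted and true coordinates. The sketch below is a minimal illustration of how a median localization error and the street/city/country/continent accuracy figures could be computed; the haversine formula is standard, but the kilometre thresholds used here are assumptions chosen for illustration, not values taken from the paper.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in km."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def localization_report(predictions, truths):
    """Median error plus the fraction of photos within each threshold.

    `predictions` and `truths` are lists of (lat, lon) tuples.
    """
    errors = sorted(haversine_km(*p, *t) for p, t in zip(predictions, truths))
    mid = len(errors) // 2
    median = errors[mid] if len(errors) % 2 else (errors[mid - 1] + errors[mid]) / 2
    # Illustrative thresholds only; the paper's exact cutoffs may differ.
    levels = {"street (1 km)": 1, "city (25 km)": 25,
              "country (750 km)": 750, "continent (2500 km)": 2500}
    shares = {name: sum(e <= km for e in errors) / len(errors)
              for name, km in levels.items()}
    return median, shares
```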

PlaNet works by examining a photo and cross-referencing it against the photos it has already learned from, looking for images from that particular area. The system divides the world into thousands of geographic cells, and once it has seen enough training photos, it predicts which cell a new photo’s location belongs to.
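To make the cell idea concrete, here is a minimal sketch assuming a uniform 10-degree latitude/longitude grid; the real system uses an adaptive partition with thousands of cells and a deep convolutional network, neither of which is shown here.

```python
# Toy illustration of the "classify a photo into a geographic cell" idea.
CELL_DEG = 10  # cell size in degrees (hypothetical; PlaNet's cells are adaptive)

def cell_id(lat, lon):
    """Map a latitude/longitude to the index of its grid cell."""
    rows, cols = 180 // CELL_DEG, 360 // CELL_DEG
    row = min(int((lat + 90) // CELL_DEG), rows - 1)
    col = min(int((lon + 180) // CELL_DEG), cols - 1)
    return row * cols + col

def cell_center(cid):
    """Representative coordinate returned when a photo is assigned to a cell."""
    cols = 360 // CELL_DEG
    row, col = divmod(cid, cols)
    return (row * CELL_DEG - 90 + CELL_DEG / 2,
            col * CELL_DEG - 180 + CELL_DEG / 2)

# Training reduces geotagging to multi-class classification: each training
# photo is labelled with cell_id(lat, lon), a network learns a probability
# distribution over cells from the pixels, and the location guess for a new
# photo is cell_center of the most probable cell.
```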

The best part is that PlaNet requires only 377 MB of memory, so it can fit on a smartphone.