Messing With Google Image Search

I noticed a thread today on WebmasterWorld talking about the new(ish) drag-and-drop Google Image search function. You can take an image from your computer, drag it to the search box on Google Image Search, and Google will do its best to identify the picture you uploaded. It works by identifying color palettes and patterns and then matching those to images it "knows" about. Google has struggled for a long time to identify images on the web, simply because images contain so much content that is challenging for a robot to make sense of.
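Google hasn't published how its matcher works, but the basic idea of comparing pictures by their color distributions is easy to sketch. Below is a minimal, hypothetical illustration in Python, assuming Pillow and NumPy are installed and using "query.jpg" and "candidate.jpg" as placeholder file names; it scores two images by the overlap of their RGB color histograms, which is one simple way a program can decide that two pictures share a palette. It is not Google's algorithm, just the general technique.

```python
# A minimal sketch of color-palette matching (not Google's actual method).
# Assumes Pillow and NumPy are installed and that "query.jpg" and
# "candidate.jpg" are local image files (hypothetical names).
import numpy as np
from PIL import Image

def color_histogram(path, bins=8):
    """Return a normalized RGB color histogram for an image file."""
    img = Image.open(path).convert("RGB").resize((128, 128))
    pixels = np.asarray(img).reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    hist = hist.flatten()
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; higher means more similar color palettes."""
    return np.minimum(h1, h2).sum()

if __name__ == "__main__":
    score = histogram_intersection(color_histogram("query.jpg"),
                                   color_histogram("candidate.jpg"))
    print(f"color similarity: {score:.3f}")
```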

Back in 2010 I talked about Google launching Boutiques.com. Boutiques.com (a Google property) is a neat site where you can put together an outfit that matches your taste, and then Google algorithmically builds a wardrobe for you and even lets you buy the items online. Google does this with an image content recognition algorithm that it acquired from Like.com. Since I knew about this product and the acquisition of the algorithm last year, I wasn't too surprised by the launch of the new Google Image Search feature.

When I first started playing around with the new feature it was pretty poor. It did a great job of matching color palettes and a wonderful job of identifying popular artwork, buildings, skylines, celebrities, etc. It had a really hard time identifying flowers, which is what I really wanted it to be good at. Today I was dragging in pictures I had taken of flowers to see how well it has learned over the last month. It has learned to recognize distinctive flowers (like stargazer lilies), but it is still not much good at identifying and matching specific flowers: it gets the colors right but doesn't identify the flower by name.

While messing around today I came up with an amusing result: Google Image Search matched a picture I took of a rose with a professor from UC Riverside! I chuckled a little but then started looking into why it had made the match, and I was amazed at how well it had picked out similar shapes in the two pictures. However, had Professor Bazhenov been wearing a different shirt and not been standing in front of trees, I don't think he would have shown up at all.
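For anyone curious how a robot can pick out "similar shapes" in two unrelated photos, here is a rough, hypothetical sketch using OpenCV's ORB keypoint matcher, assuming OpenCV is installed and using "rose.jpg" and "professor.jpg" as stand-ins for my two pictures. Again, this is not Google's algorithm, just an illustration of the general technique of matching local feature points between images.

```python
# A hedged sketch of shape/keypoint matching (not Google's actual method).
# Assumes OpenCV (cv2) is installed and that "rose.jpg" and "professor.jpg"
# are local image files (hypothetical names).
import cv2

def matched_keypoints(path_a, path_b, max_distance=50):
    """Return the keypoint matches whose descriptors are close enough."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create()  # detects corners/edges and describes them
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    # Keep only the close matches; these are the regions the images share.
    return [m for m in matches if m.distance < max_distance]

print(len(matched_keypoints("rose.jpg", "professor.jpg")), "shared data points")
```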

Check out the images below to see what I uploaded, what Google returned, and how I think the data points on the two pictures matched to make Professor Bazhenov seem like a good choice to the robot.

Wild Rose used in Google Image Experiment

Google thought Professor Bazhenov from UC Riverside matched my rose.

Data points I think Google used to match the professor and the rose.

Keep in mind that every time I upload the image of the rose I get a different set of related images, including other people. I find that interesting in itself, because Google is all about consistency. Maybe the results will become more stable as Google "learns."

A special thanks to Professor Bazhenov for being a good sport. The Professor studies computational neuroscience and other very difficult-to-pronounce areas of human biology to help make our lives better.