First off: does everyone know what Google Glass is? Short version is that it’s a pair of glasses (or, really, just frames) with a camera mounted on them and a little see-through display that’s up in the corner of your vision. You can take pictures or video, and do things like see text messages in a “heads up display.”
Okay, so Google is now apparently going to try to pay people, through a venture fund called the Glass Collective, to figure out what to do with Glass. Which kind of points at the entire problem with Glass: it seems a little bit useless.
Geeks have been talking excitedly about “augmented reality” for maybe a decade and a half now. The science-fiction vision of it, perhaps best explored in Vernor Vinge’s novel Rainbows End or Charlie Stross’ Halting State, ranges from stepping into a video game, where illusory monsters and terrain are overlaid on reality, to adding information to everything. Proponents of augmented reality talk excitedly about facial recognition software putting the names of people above their heads, so you never have that awkward moment of “Hey… you!” Or seeing reviews of a product whenever you glance at it. Or step-by-step instructions for fixing your car engine, in which the part you’re supposed to be touching flashes and ghostly animations show you the exact motions to mimic.
The thing is, though, Glass can’t do any of that. Its data shows up in a small patch in the upper-right corner of your field of view, and nothing it displays there can mask out what’s behind it. There is no giant facial-recognition database to look up people’s names in, and even if there were, plenty of other technical hurdles would remain. Image-searching a product to find the right reviews is a slow, awkward, error-prone process.
I don’t have Glass, but I have tried out other very-early-stage augmented reality apps, such as Layar and Goggles. The problem with them is that their “augmented reality” features don’t actually add anything to your life.
For example: one thing that Layar does relatively well is locate restaurants and show you their ratings. This is occasionally useful. Sometimes I want to get some food and I’m in an area I don’t know well, or I’m interested in trying a new restaurant. With augmented reality, the idea is that I could just glance at a restaurant I’m walking past and see, say, information from its Yelp page. Which is moderately useful.
But what’s more useful is busting out the Yelp app and doing a location-aware search, getting the results from a top-down view. That way, if the restaurant I’m right next to has a 3.5-star rating, but the restaurant around the corner, which I’m not looking at but is less than two minutes away on foot, has a 5-star rating, I know about it and can go get the really good meal, not the second best.
For augmented reality information to be really, really useful, it has to be so contextual that it’s there before you even know you want it. If I have to stop and dig around for the information that I want, then I might as well pull out my phone and do the more useful search.
But that kind of extreme context-sensitivity requires more than just some fancy glasses; it requires serious AI work. I don’t want to see restaurant ratings most of the time, or people’s names, or product reviews; I only want them a small minority of the time. And I don’t want to pause and switch apps on Glass, or I might as well pull out my phone and do the better search.
So in the meantime, Glass is casting around for something to do. It will succeed or fail as “a camera that’s always pointed at what you’re looking at” and “a hands-free way to read text messages,” not as an augmented reality device. And the Glass Collective mentioned at the beginning of this article is a tacit admission that Google can’t actually think of anything to do with the hardware besides make it a camera.