We essentially have wearable computers already: an Android smartphone of sufficient power can do a great deal of computing on its own, and when it can't succeed alone, it can have a cloud computer do the heavy lifting and beam the answer back over the Internet. The next step is bringing that computing out of the screens of our phones and into our visual field: that's what Google Glasses and Kinect-powered glasses will do.
A Japanese YouTube user going by the moniker "alsionesvx" has posted a video of himself using augmented reality glasses that take advantage of the Kinect's depth sensing to project a virtual anime girl into his visual field. Below, you'll see the computerized J-pop star Hatsune Miku (any fan of William Gibson would call her an idoru: essentially a computer-generated media star) take a walk in the park with alsionesvx. Nobody else can see her, because her image is being projected onto the inside of his eyewear.
This particular extension of humanity could join my list of 5 Possible Google Glasses Innovations, which rounds up my favorite science-fiction-become-reality effects that we're probably going to start seeing soon. A virtual agent, or what we might otherwise call a virtual secretary, would be an expert agent capable of parsing information from numerous news sources, distilling out what the user wants to know from a day's worth of trawling the depths of the Internet, and then presenting it to the user. This is the purview of artificial intelligence, but the interface will always be messy.
So along comes the "virtual assistant" style of non-person: a cartoony or perhaps semi-realistic figure who appears in the user's field of view and interacts with them somewhat like another person might. Add natural language processing, voice recognition, and a good text-to-speech engine, and you've got all the makings of a virtual secretary who can be interacted with just like another person. Tack on a personality and you've got the halo-filled daydream of every computer geek who ever read near-future cyberpunk science fiction.
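The loop described above is simple to sketch. Here's a minimal, hypothetical version in Python, with the speech-recognition and text-to-speech stages stubbed out as plain functions (a real build would plug in actual engines) and the "natural language processing" reduced to crude keyword matching. All function names here are assumptions for illustration; the point is the shape of the pipeline: hear, parse, act, speak.

```python
def speech_to_text(audio: bytes) -> str:
    # Stand-in for a real voice-recognition engine.
    return audio.decode("utf-8")

def parse_intent(utterance: str):
    # Crude keyword-based intent parsing -- a placeholder for real NLP.
    text = utterance.lower()
    if "news" in text:
        return ("fetch_news", text)
    if "remind" in text:
        return ("set_reminder", text)
    return ("chitchat", text)

def act(intent: str, detail: str) -> str:
    # Each intent maps to some useful work; here, canned replies.
    responses = {
        "fetch_news": "Here are today's top stories, distilled for you.",
        "set_reminder": "Reminder set.",
        "chitchat": "Happy to chat!",
    }
    return responses[intent]

def text_to_speech(reply: str) -> str:
    # Stand-in for a TTS engine; here we just hand the text back.
    return reply

def assistant_turn(audio: bytes) -> str:
    # One full turn of the virtual secretary: hear -> parse -> act -> speak.
    intent, detail = parse_intent(speech_to_text(audio))
    return text_to_speech(act(intent, detail))
```

For example, `assistant_turn(b"any news today?")` would route through the `fetch_news` intent. The hard part, as ever, isn't the plumbing but making each stage good enough that talking to the thing doesn't feel like talking to Clippy.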
As we see in the video, Miku is projected over the landscape and can even interact with it: at one point we see her vanish behind a tree. The user can also directly interact with her, bopping her on the head or swishing her tie; this also means she could be "handed" things and engaged with in a sort of semi-physicality. She could even point out objects in the environment, guide users from place to place, provide extra context (using metadata supplied for a region), or serve as a much friendlier GPS than a cold-edged outline of the world with arrows.
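That vanishing-behind-a-tree trick is what the Kinect's depth sensing buys you: for each pixel, you know how far away the real world is, so you only draw the virtual character where she is closer to the viewer than the scene. A minimal sketch of that occlusion test, assuming NumPy arrays for the camera frame, the depth map, and an RGBA sprite (the function and its parameters are hypothetical, not from the video):

```python
import numpy as np

def composite_virtual(frame, depth_map, sprite, sprite_depth, x, y):
    """Overlay an RGBA sprite onto a camera frame at (x, y), hiding any
    pixels where the real scene (per the depth map, in millimeters) sits
    closer to the viewer than the sprite -- so she can hide behind a tree."""
    h, w = sprite.shape[:2]
    region = frame[y:y + h, x:x + w]
    region_depth = depth_map[y:y + h, x:x + w]
    # Draw a sprite pixel only where the sprite is opaque AND no real
    # object is in front of it.
    visible = (sprite[:, :, 3] > 0) & (region_depth > sprite_depth)
    region[visible] = sprite[visible][:, :3]
    return frame
```

A tree trunk 1 m from the camera would report a smaller depth value than a character placed 3 m away, so her pixels behind it simply never get drawn; everywhere else she composites over the scene as usual.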
Her presence is invisible to onlookers, however, so it would be like the next level of people talking into hidden Bluetooth headsets. Though truthfully, someone wearing Google Glasses won't quickly be mistaken for a lunatic (even if they end up flailing about madly at inappropriate times).
Once the price of Google Glasses comes down and competitors enter the market, we'll probably see the first augmented reality assistants. They haven't done very well on computers in the past (if anyone recalls Microsoft Bob or Clippy), but with sufficient AI, and some real thought about what makes a virtual assistant useful, apps like this could see widespread use in conjunction with augmented reality glasses.
Of course, around the same time as that cultural shift, we'll probably also see the first anti-glasses-while-driving laws crop up alongside the warnings about texting and talking on cell phones.