

Researchers at Microsoft are clearly thinking hard about how humans interact with computers, and the Kinect has provided a surge of innovation in this direction. Natural User Interfaces look like the next big thing, encompassing a lot of what science fiction has already brought into our zeitgeist, such as the Minority Report-style user interface (now made real by Kinect).
Steven Bathiche, Director of Microsoft's Applied Sciences Group, shows off quite a few wonderful developments in this field, and it led me to wonder whether some of this might find its way to the crowd of developers who will benefit from the release of the Microsoft Kinect SDK.
While the Microsoft Kinect is featured in the video, it's not the only star; I suspect, though, that the video-processing expertise Microsoft gained from building it underpins the inventions demonstrated. There are five demonstrations in the video in total.
The first concept provides an interactive solution that is the next step after touch, allowing a computer screen to "see" what's directly in front of it. Right now, this is mostly done by pointing cameras at the region in front of the screen, but Microsoft is developing a camera that can hide behind the screen instead of next to it. This would allow for very thin camera-equipped displays (and possibly a use in mobile devices).
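To make the sensing side of that concrete, here's a minimal sketch of the kind of processing involved, assuming a depth camera looking out of (or alongside) the display; the function name and threshold are my own illustrative choices, not anything shown in the video.

```python
import numpy as np

def near_screen_mask(depth_mm, max_mm=150):
    """Hypothetical sketch: given a depth frame in millimetres from a
    camera looking out of the display, keep only the pixels within
    max_mm of the screen - i.e. hands hovering just in front of it."""
    valid = depth_mm > 0  # 0 means "no reading" on Kinect-style sensors
    return valid & (depth_mm < max_mm)

# Fake 4x4 depth frame: one "fingertip" 10 cm away, the rest farther back.
frame = np.full((4, 4), 900, dtype=np.uint16)
frame[1, 2] = 100
print(near_screen_mask(frame))
```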
The next concept approaches the idea of a 3D stereoscopic display, which would provide the illusion of a hologram. The effect is produced by detecting the viewer's eyes and tailoring what the screen emits to their position; this lets a screen produce 3D images without requiring glasses. We've already seen some user-interface ideas similar to "holographic" technology at London Luton Airport. The ability for such simulacra to track the eyes watching them could be extremely beneficial.
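The eye-detection half of this is something hobbyists can prototype today. Here's a minimal sketch using OpenCV's stock Haar cascades (my choice of library; the video doesn't say what Microsoft uses). It finds eye positions in a webcam frame, which is exactly the signal a view-steering display would need.

```python
import cv2

# Stock Haar cascades shipped with the opencv-python package.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)  # any webcam; a Kinect RGB stream works too

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[fy:fy + fh, fx:fx + fw]  # search for eyes inside the face
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
            # Eye centre in frame coordinates: a steerable display would
            # use this to aim the correct stereo view at this eye.
            cx, cy = fx + ex + ew // 2, fy + ey + eh // 2
            cv2.circle(frame, (cx, cy), 4, (0, 255, 0), -1)
    cv2.imshow("eyes", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```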
In fact, the next demonstration in the video does exactly that: it uses a Kinect camera to track multiple users at once, and the screen then directs a different image at each individual person.
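A toy sketch of that per-viewer assignment step might look like the code below. Everything here is hypothetical (the Viewer type and assign_views are my own stand-ins for the Kinect skeleton data the demo presumably uses); the idea is just to map each tracked head to a horizontal viewing angle so the display can steer different content into each angular sector.

```python
import math
from dataclasses import dataclass

@dataclass
class Viewer:
    user_id: int
    x: float  # metres, right of screen centre
    z: float  # metres, distance from the screen

def assign_views(viewers, contents):
    """Map each tracked viewer to a piece of content by horizontal angle.

    A view-dependent display would then steer each piece of content
    toward the angular sector its assigned viewer occupies.
    """
    assignments = {}
    for viewer, content in zip(sorted(viewers, key=lambda v: v.x), contents):
        angle = math.degrees(math.atan2(viewer.x, viewer.z))  # 0 = head-on
        assignments[viewer.user_id] = (content, angle)
    return assignments

# e.g. two people standing in front of the same screen
viewers = [Viewer(1, -0.6, 1.8), Viewer(2, 0.5, 2.0)]
print(assign_views(viewers, ["movie A", "movie B"]))
```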
Finally, there's a really interesting demonstration showing how future video conference calls and virtual-reality interfaces might account for the movement of human beings in front of the screen. Using a Kinect camera, researchers track where the viewer's eyes are and where they're looking, which lets the screen act as a virtual window pane. For example, leaning towards the pane changes the image in a way that mimics the widening of view through a window, and leaning away narrows it. Moving side to side shifts the view asymmetrically, widening it on the far side and narrowing it on the near side (just like peering through a window).
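In rendering terms, this "virtual window" effect is an off-axis (asymmetric-frustum) projection driven by the tracked head position. Here's a minimal sketch of that projection math under my own assumed coordinate conventions; the function name and parameters are illustrative, not Microsoft's.

```python
import numpy as np

def window_frustum(eye, screen_w, screen_h, near=0.1, far=100.0):
    """Asymmetric (off-axis) frustum for head-coupled perspective.

    `eye` is the viewer's head position in screen-centred coordinates
    (metres): +x right, +y up, +z out of the screen toward the viewer.
    The screen itself lies in the z = 0 plane, so the display behaves
    like a window onto the 3D scene behind it.
    """
    ex, ey, ez = eye
    # Screen edges relative to the eye, projected onto the near plane.
    left   = (-screen_w / 2 - ex) * near / ez
    right  = ( screen_w / 2 - ex) * near / ez
    bottom = (-screen_h / 2 - ey) * near / ez
    top    = ( screen_h / 2 - ey) * near / ez
    # Standard OpenGL-style glFrustum matrix.
    return np.array([
        [2*near/(right-left), 0, (right+left)/(right-left), 0],
        [0, 2*near/(top-bottom), (top+bottom)/(top-bottom), 0],
        [0, 0, -(far+near)/(far-near), -2*far*near/(far-near)],
        [0, 0, -1, 0]])

# Leaning left (eye at x = -0.2 m) widens the view through the right
# edge of the "window", just as with a physical pane.
print(window_frustum(eye=(-0.2, 0.0, 0.6), screen_w=0.5, screen_h=0.3))
```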
This sort of tracking, if enabled in mobile devices, could make them a lot more accessible by providing a sort of 3D virtual "space" behind the screen that the user could interact with by viewing the mobile screen as if it's a window into that "space." We could see a lot of fun features for mapping, like the 3D Google Maps apps, and maybe we'll see this technology licensed by the Nokia-Intel coalition to produce better 3D screens.
Although it might seem strange to see people tilt their Android phones around while gazing into them intently, it might just be a more intuitive interface than tapping and sliding. After all, physical maps already fold out and expand into our peripheral vision; what we don't do, though, is stare at them through a small window in space.
Found this video via the wonderful KinectHacks.net site, which deserves credit for finding it.