At MIX11, Microsoft Corp.'s 2011 developer conference in Las Vegas, company executives used the keynote and multiple panels to discuss augmented reality developments across their devices. Specifically, I'm interested in covering the innovations built into the upcoming next-generation Windows Phone OS and the PC SDK for the Microsoft Kinect peripheral.
Not ready or willing to rest on its laurels, Microsoft delivered a deep look at changes to Windows Phone that give developers improved tools, including a performance profiler and sensor simulation, with the idea of building integrated, high-performing applications.
Aside from adding multitasking for background processing, there will also be some new libraries. In fact, Microsoft lists one of the new advantages for developers as “access to the camera and Motion Sensor library so developers can build apps that incorporate device hardware and build augmented reality experiences.”
These developments look like Microsoft's way of drawing more game and augmented reality development shops to the Windows Phone OS. Since the platform is already a solid home for games and will be coming to Nokia phones, this could generate heightened interest.
The upcoming Kinect for Windows SDK adds a panoply of new features and innovations to the already extremely popular video game peripheral.
Adding more robust skeletal tracking to an already impressive system would be a major draw for animators, and it would also expand the Kinect's augmented reality and gesture control capabilities by offloading some of the processing from the computer. The microphone array adds 3D sound capabilities, letting the Kinect tell who might be speaking (or clapping, tapping, whatever) when it has more than one person in its field of view; this might actually be hugely beneficial for Avatar Kinect.
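To give a sense of how a microphone array can tell where a sound came from, here's a rough sketch of the classic time-difference-of-arrival trick with two microphones. This is purely my own illustration (the sample rate, mic spacing, and function names are all assumptions), not code from the Kinect SDK:

```python
# Illustrative sketch (NOT the Kinect SDK): estimating the bearing of a
# sound source from the time-difference-of-arrival between two microphones.
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s at room temperature
MIC_SPACING = 0.2        # metres between the two mics (assumed)
SAMPLE_RATE = 16000      # Hz (assumed)

def estimate_angle(left: np.ndarray, right: np.ndarray) -> float:
    """Return the source bearing in degrees (0 = straight ahead)."""
    # Cross-correlate the two channels; the lag of the correlation peak
    # is the difference in arrival time, measured in samples.
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    delay = lag / SAMPLE_RATE  # seconds
    # Far-field approximation: sin(theta) = delay * c / mic_spacing.
    sin_theta = np.clip(delay * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))

# Synthetic check: the same click, arriving 5 samples later at the right
# mic, should register as coming from off to one side.
click = np.zeros(1024)
click[100] = 1.0
angle = estimate_angle(click, np.roll(click, 5))
```

With more than two microphones (the Kinect array has four), the same idea pins down the direction more precisely, which is how the device could attribute speech to a particular person in view.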
As we’ve seen with the Nintendo 3DS augmented reality, gauging the depth of an object is essential for displaying a proper-perspective 3D image on it. Thus, the Kinect camera might be useful for “seeing” objects in its field of view and then displaying things on them. Think along the lines of how people might be able to develop real-life Yu-Gi-Oh style card games where the Kinect camera watches a table and then generates effects on-screen atop physical cards.
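For the curious, the core of that depth trick is simple geometry. Here's a minimal sketch, using made-up pinhole-camera intrinsics rather than anything from the actual SDK, of turning a depth-camera pixel into a 3D point that a virtual object could be anchored to:

```python
# Illustrative sketch (assumed intrinsics, not real Kinect calibration):
# back-projecting a depth pixel into camera-space 3D coordinates.

FX = FY = 570.0        # focal length in pixels (assumption)
CX, CY = 320.0, 240.0  # principal point, i.e. the image centre (assumption)

def depth_pixel_to_3d(u, v, depth_m):
    """Map pixel (u, v) with a measured depth in metres to camera-space XYZ."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return (x, y, depth_m)

# A card lying 0.8 m from the camera, slightly left of the image centre:
point = depth_pixel_to_3d(300, 240, 0.8)
```

Once every card on the table has a 3D position like this, rendering a monster at the right size and perspective on-screen is an ordinary graphics problem.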
As motion-tracking and facial recognition algorithms improve, we might even see Kinect projects cross over to devices like the Windows Phone. I am particularly intrigued by the possibility of seeing Avatar Kinect on mobile devices. People already have a particular affinity for faces, but not everyone has a data plan capable of broadcasting live video of themselves; facial tracking, however, takes far less data than even 30 frames-per-second of bad video. An avatar of me on my friend's phone, raising an eyebrow or smirking as I speak, might provide a sort of empathy that phones currently lack.
If nothing else, it’ll probably be very popular in Japan!