UPDATED 14:51 EDT / MARCH 28 2011

Kinect Benefits Surgeons in Toronto via Hands Free Image Manipulation

Medical imaging gives surgeons amazing insight into the inner workings of the body without having to cut into a person—it also permits them to be more precise when it comes time to make those cuts. As a result, imaging in the OR can be an enormous boon.

The problem? Germs. Surgeons need to look at not just X-rays but also CT and MRI images, and they need to manipulate those images to get a good view. Having an assistant handle the manipulation avoids the issue of touching a non-sterile surface, but it also means a fair bit of acrobatics in the OR. There’s got to be a better way, and there is!

A hospital in Toronto, Canada has come up with a new solution: the Microsoft Kinect.

The Winnipeg Free Press covers the story for us, and gives a good sense of why this user interface is such a valuable resource:

Surgeons typically have to leave the sterile field around the patient to pull up images such as MRI or CT scans on a nearby computer.

They then have to go through a meticulous cleanup before returning to the area to make sure they don’t bring in any bacteria that could harm the patient.

It can take up to 20 minutes to clean up each time a doctor consults an image, said Dr. Calvin Law, who helped integrate the technology into the operating room.

Cutting down time in the OR would relieve a great deal of stress on surgeons and patients alike. It would reduce the amount of time that patients spend opened up, narrowing the window for infection and potentially shortening healing time.

We’ve seen the Kinect involved in medical imaging and surgery already with the University of Washington’s Kinect add-on for robotic surgery. As the technology becomes more widespread, it will probably attract a great deal more innovation as developers pile on. In fact, fine gesture-based controls, such as those developed by MIT for Chrome, would be extremely beneficial in the OR. Sterilized screens could be kept near the surgeons, or they could wear goggles or monocles providing imagery they could interact with in an augmented reality fashion, never having to touch a screen or the goggles in order to work with images projected into their field of view. A rough sketch of how such gesture control might map to image manipulation follows below.
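To make the gesture-control idea concrete, here is a minimal sketch of hands-free image manipulation: the distance between two tracked hand positions drives zoom, and their midpoint drives pan. The skeleton feed is simulated here, and the function names, reference spread, and pixel mapping are illustrative assumptions, not any real Kinect SDK; any depth camera that yields left/right hand coordinates could be plugged in.

```python
# Hands-free pan/zoom sketch. The hand positions below are simulated;
# a real system would read them from a depth-camera skeleton tracker.

from dataclasses import dataclass
import math


@dataclass
class ViewState:
    zoom: float = 1.0   # scale factor applied to the displayed scan
    pan_x: float = 0.0  # horizontal offset in screen pixels
    pan_y: float = 0.0  # vertical offset in screen pixels


def update_view(view, left_hand, right_hand, ref_spread=0.5):
    """Update zoom from hand spread and pan from the hands' midpoint.

    left_hand / right_hand are (x, y) positions in metres relative to the
    sensor; ref_spread is the hand separation treated as 1.0x zoom
    (an assumed calibration value).
    """
    spread = math.dist(left_hand, right_hand)
    view.zoom = max(0.25, min(4.0, spread / ref_spread))

    mid_x = (left_hand[0] + right_hand[0]) / 2.0
    mid_y = (left_hand[1] + right_hand[1]) / 2.0
    view.pan_x = mid_x * 400.0  # crude metres-to-pixels mapping (assumed)
    view.pan_y = mid_y * 400.0
    return view


if __name__ == "__main__":
    # Simulated frames standing in for a live skeleton feed.
    frames = [
        ((-0.20, 0.0), (0.20, 0.0)),  # hands close together -> zoomed out
        ((-0.40, 0.1), (0.40, 0.1)),  # hands spread apart   -> zoomed in
    ]
    view = ViewState()
    for left, right in frames:
        update_view(view, left, right)
        print(f"zoom={view.zoom:.2f} pan=({view.pan_x:.0f}, {view.pan_y:.0f})")
```

Mapping spread to zoom and midpoint to pan keeps the interaction coarse enough to be robust to tracking jitter, which is exactly the kind of forgiving gesture vocabulary a gloved surgeon would need.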

Better yet might be real-time image feeds from small cameras already in the operating theater, giving the surgeon a better idea of what’s going on between, underneath, or around organs in their surgical field.

Gesture support for manipulating images is also only the first, and most obvious, use for the Kinect here. Eventually we might even see 3D imaging brought to bear, with multiple Kinect cameras looking down at a patient, producing their own 3D reconstruction, comparing the surgeon’s tracked movements with known models generated via CT, and producing advice or warnings for the surgeon, as in the toy example below.
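As a toy illustration of that warning idea, the sketch below compares a tracked position (say, the surgeon’s hand or an instrument tip as seen by the depth cameras) against points belonging to a critical structure segmented from a CT scan, and flags when the distance falls below a safety margin. The coordinates, margin, and data here are made up purely for illustration; a real system would also need registration between the camera and CT coordinate frames.

```python
# Proximity-warning toy example: flag tracked positions that come too close
# to a CT-derived critical structure. All values are illustrative only.

import math

# Hypothetical CT-derived landmark points for a structure to avoid (metres).
critical_points = [(0.10, 0.02, 0.30), (0.11, 0.03, 0.31), (0.09, 0.02, 0.29)]

SAFETY_MARGIN_M = 0.02  # warn within 2 cm (illustrative threshold)


def min_distance(position, points):
    """Smallest Euclidean distance from a 3D position to a set of points."""
    return min(math.dist(position, p) for p in points)


def check_position(position):
    d = min_distance(position, critical_points)
    if d < SAFETY_MARGIN_M:
        return f"WARNING: {d * 100:.1f} cm from critical structure"
    return f"clear ({d * 100:.1f} cm)"


if __name__ == "__main__":
    # Two simulated tracked positions: one safely away, one too close.
    for tracked in [(0.20, 0.05, 0.35), (0.105, 0.025, 0.305)]:
        print(tracked, "->", check_position(tracked))
```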

The possibilities for innovation are huge when it comes to complex surgeries that demand a detailed understanding of where various anatomical structures lie, or that depend on intricate imaging. This sort of technology could be at the forefront of guiding swifter, safer, and more advanced medical procedures.

