
Rendering 3D content in a physical, tangible way with Kinect

A few years ago, Microsoft launched the Kinect – a motion sensing input device for the Xbox 360 games console. Whilst an interesting bit of kit, in the world of video games it amounted to little more than a gimmick. Sure, the technology was pretty amazing and it certainly had potential – but games developers failed to really do much with it beyond using it to allow you to interact with digital animals, do a spot of virtual archery, or provide simple voice commands (though I do think its ability to detect when a player swears and hit them with a penalty in FIFA 13 was kinda fun).

However, as a standalone piece of technology, it blossomed. Its relatively low price point and easy-to-access SDK (software development kit) gave tech gurus, garage modders and script kiddies alike the tools to do some pretty amazing things. Since then, we’ve seen all sorts of crazy, unique hacks and uses for the Kinect toolset: everything from interactive virtual dressing rooms and Minority Report-style user interfaces to a real-time lightsabre demo. All fun stuff, but they tend to be different implementations of the same concept.

Most of these demos go in the one direction – they translate the physical into the virtual. Your gestures and interactions in the real world are translated into a virtual world or interface. To my knowledge, no one had yet figured out how to go the other way – that is, to translate something from the virtual into the physical. Until a team from MIT Media Lab’s ‘Tangible Media Group’ got started and breathed life into their ‘inFORM’ project. From the Tangible Media Group website:

“inFORM is a Dynamic Shape Display that can render 3D content physically, so users can interact with digital information in a tangible way. inFORM can also interact with the physical world around it, for example moving objects on the table’s surface. Remote participants in a video conference can be displayed physically, allowing for a strong sense of presence and the ability to interact physically at a distance.”

inFORM – Interacting With a Dynamic Shape Display from Tangible Media Group on Vimeo.

The result truly is amazing, and one of the best implementations of consumer technology that I’ve seen. Not only that – it probably has the most potential. By using the Kinect in conjunction with a projector and a rather complex-looking system of actuators and pins, the team has been able to create a physical representation of a digital image. The grid of pins (reminiscent of those pincushion-like 3D picture frames that you’d awkwardly smoosh your face into) functions much like an array of pixels, but operating in three dimensions. The actuators control the height of these pins and, when combined with the Kinect, are able to form physical shapes that move and respond to a user’s input. These shapes can in turn manipulate other physical objects, creating a pretty impressive demonstration.
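To make the depth-to-pins idea a bit more concrete, here’s a rough Python sketch of how a Kinect-style depth frame could be downsampled into a grid of pin heights. This is purely illustrative – the grid size, pin travel and depth window below are my own assumptions, not the inFORM team’s actual values, and the real system does far more (projection mapping, object tracking, driving the actuators themselves).

import numpy as np

# Hypothetical parameters -- the real inFORM hardware differs in detail.
PIN_GRID = (30, 30)               # pins across the display surface
PIN_TRAVEL_MM = 100.0             # assumed maximum pin extension
NEAR_MM, FAR_MM = 500.0, 1500.0   # assumed depth window in front of the sensor

def depth_to_pin_heights(depth_mm: np.ndarray) -> np.ndarray:
    """Map a depth frame (millimetres per pixel) to a grid of pin heights.

    Nearer surfaces (e.g. a hand reaching toward the sensor) produce taller pins.
    """
    h, w = depth_mm.shape
    gh, gw = PIN_GRID

    # Treat zero readings (no data) as far away, then clip to the working window.
    d = np.where(depth_mm == 0, FAR_MM, depth_mm)
    d = np.clip(d, NEAR_MM, FAR_MM)

    # Downsample: average each block of pixels into one pin cell.
    d = d[: (h // gh) * gh, : (w // gw) * gw]
    blocks = d.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))

    # Invert and normalise: NEAR -> fully raised, FAR -> flush with the table.
    return (FAR_MM - blocks) / (FAR_MM - NEAR_MM) * PIN_TRAVEL_MM

# Example: a fake 480x640 depth frame with a "hand" 600 mm from the sensor.
frame = np.full((480, 640), 1400.0)
frame[200:280, 300:380] = 600.0
print(depth_to_pin_heights(frame).round(1))

Each cell of the resulting grid would then be sent to the corresponding actuator, so that nearer surfaces – a hand reaching over the table, say – raise the pins beneath them.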

Yes, it’s somewhat rudimentary and possibly quite cost-prohibitive at this stage. But these are very early days, and the potential for this sort of technology is fairly mind-blowing. Imagine combining it with the traditional online shopping experience, and suddenly you’ve got the ability to try on a pair of physical shoes using a digital recreation of your foot. Obviously the sensory feedback isn’t there (you wouldn’t be able to ‘feel’ how the shoe fits), but hey, that can’t be too far off. The Tangible Media Group are currently exploring a number of applications in the area of geospatial data, such as maps, GIS, terrain models and architectural models. Check out more details over at the Tangible Media Group page:

http://tangible.media.mit.edu/project/inform/