First announced as Project Natal in 2009, the Kinect was shown off by Microsoft with family gaming demos, along with the promise of a kit for select developers so they could get started making games.
It launched in November 2010 as a gaming system connected to the Xbox, with sports, drawing and other physical games.
Adafruit simultaneously offered a $1,000 reward to the first person to release open source drivers for it. Microsoft responded by suggesting there were hacking safeguards in place and that the technology was tamper-proof. Adafruit bumped the reward to $3,000, and in November 2010 it was won by a hacker named Hector - now the technology is used everywhere from hospitals to art exhibits, for all kinds of different purposes.
But how does it work? Three lenses work together to emit and receive light and images, collecting the data used to interpret shapes and gestures.
They do so using infrared light emitted by the Kinect bar and bounced off of all of the objects in the room ... The brighter the reflection, the closer the object. The emitted light is encoded as a pattern so that the cameras can interpret it clearly when it is bounced back.
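To make the depth idea concrete, here is a small sketch of how a raw depth reading might be turned into a distance. It assumes 11-bit raw values like those exposed by the open source libfreenect drivers, and uses an approximation formula that circulated in that community; the exact calibration varies per device, so treat the numbers as illustrative.

```python
import math

def raw_depth_to_metres(raw):
    """Convert an 11-bit Kinect raw depth reading to metres.

    Uses a widely circulated approximation from the open source
    (libfreenect) community; exact calibration varies per device.
    """
    if raw >= 2047:  # 2047 flags "no reading" (shadow / out of range)
        return None
    return 0.1236 * math.tan(raw / 2842.5 + 1.1863)

# Depth grows with the raw reading: larger raw values mean
# objects farther from the bar.
print(round(raw_depth_to_metres(500), 2))   # ~0.58 m
print(round(raw_depth_to_metres(1000), 2))  # ~3.78 m
```

The tangent shape means resolution is best close to the sensor and degrades quickly past a few metres, which matches how the Kinect behaves in practice.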
The software built into the Kinect is programmed to recognize and interpret over 200 gestures and poses. Because of this, it can predict where your body is likely to go and use that to support its games and other applications.
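A toy sketch of what pose recognition on skeleton data looks like: the joint names and coordinates below are made up for illustration (the real SDK exposes a tracked skeleton of named joints), but the pattern - compare joint positions, decide on a pose - is the core idea.

```python
# Hypothetical skeleton data: each joint maps to an (x, y, z) position
# in metres, with y growing upward. Joint names are illustrative only.

def hands_raised(joints):
    """Return True when both hands are above the head."""
    return (joints["hand_left"][1] > joints["head"][1]
            and joints["hand_right"][1] > joints["head"][1])

pose = {"head": (0.0, 1.6, 2.0),
        "hand_left": (-0.3, 1.9, 2.0),
        "hand_right": (0.3, 1.8, 2.0)}
print(hands_raised(pose))  # True
```

Real gesture recognition also tracks joints over time, which is what lets the software predict where the body is likely to go next rather than just label a single frame.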
The Kinect also has voice recognition - the bar has four built-in microphones and, in Xbox mode, one need only prefix commands with "Xbox" to use the feature. For example - "Xbox, go home", "Xbox, play disc", "Xbox, Bing - search OCAD"....
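The wake-word pattern described above can be sketched in a few lines: accept an utterance only if it starts with "Xbox", then treat the rest as the command. This is a simplification of what the console does, and the command strings are just the examples from the text, not an official command list.

```python
def parse_command(utterance):
    """Split a recognised utterance into (wake_word_ok, command).

    Commands only count when they start with the wake word "XBOX";
    everything after the wake word is the command itself.
    """
    words = utterance.strip().split(None, 1)
    if not words or words[0].upper() != "XBOX":
        return (False, None)
    return (True, words[1].lower() if len(words) > 1 else "")

print(parse_command("Xbox go home"))       # (True, 'go home')
print(parse_command("turn up the music"))  # (False, None)
```

Requiring the wake word is what keeps ordinary living-room conversation from accidentally triggering the console.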
So now Microsoft has embraced the hacks of its technology - it is creating a Kinect v2 and providing developers with more resources to build applications beyond gaming...
This was my first experience with the Kinect - it was amazing, and this video doesn't do it justice, but it was here in Toronto in fall 2011...
An interactive media installation created in collaboration with Mike Allison: a stretched sheet of spandex acts as a depth-sensitive membrane interface that people can push into to create fire-like visuals and expressively play music. It will be used in the performance piece Mizaru, created with Kiori Kawai.
The project aims to create a moving sculpture from the movement of a real person. A dancer was asked to visualize the music while being recorded with the Kinect; the intersection of the images was later assembled into a three-dimensional volume (a 3D point cloud), and the collected data could be used throughout the rest of the process. The result is a digital body consisting of 22,000 points that seems so real it comes to life again.
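A point cloud like the one behind this "digital body" comes from back-projecting each depth pixel into 3D space. Here is a minimal sketch of that step; the camera parameters (focal lengths `fx`, `fy` and image centre `cx`, `cy`) are assumptions standing in for the Kinect's real calibration.

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a 2-D grid of depths (metres) into (x, y, z) points.

    depth[v][u] is the distance at pixel row v, column u; pixels with
    no reading (None or non-positive) are skipped.
    """
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z is None or z <= 0:
                continue  # no depth reading at this pixel
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# A tiny 2x2 "depth image" with toy camera parameters:
cloud = depth_to_points([[1.0, None], [0.0, 2.0]],
                        fx=1.0, fy=1.0, cx=0.0, cy=0.0)
print(cloud)  # [(0.0, 0.0, 1.0), (2.0, 2.0, 2.0)]
```

Run over a full 640x480 depth frame, this produces tens of thousands of points per capture - the same order of magnitude as the 22,000-point body described above.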
This is a music video created using one of the libraries written by Daniel Shiffman. He's currently working on building a Kinect library for Processing, and some of it is out right now.
The band has documented its process here for people to have a look at...
There are loads of amazing projects - but here’s a look at some of the resources you might want to check out...
I’ve got a Kinect here and so I thought we’d really quickly have a look at what it looks like for us...
translating information through gesture & sound