It’s become apparent that satisfying my initial goal of getting the Kinect sensor to recognize me as I sit in my wheelchair involves fixing the shadow problem (see the diagram at the bottom of the Appendix for more on this). I’m hoping this can be solved by changing the angle of the infrared light, as I think this can be done through code. I’d like to thank Alexander for the suggestion of changing the angle of the infrared light to create a cut-out of sorts that excludes the wheelchair; I’m definitely going to explore this as an option in the next Sprint. The heat-sensing idea that Sam came up with is also worth investigating. I think it would involve some hardware manipulation, though, so I’ll look at that as a possibility as we go further into the project; I intend to focus a bit more on the software solutions first.
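To make the shadow problem concrete, here is a toy one-dimensional model of it. Everything in it is an illustrative assumption rather than real Kinect optics: the idea is only that the depth sensor can measure a surface where the infrared projector actually illuminates it, so a near surface (say, an armrest) blocks the projected light and leaves holes in the depth data behind it, and moving the light source moves the shadow.

```python
# Toy 1-D model of the depth-shadow problem described above.
# The scene is a row of columns; each entry is the depth of the nearest
# surface in that column.  A point IR emitter sits at depth 0; a surface
# is "shadowed" (a hole in the depth image) if some nearer surface lies
# on the ray between it and the emitter.  All geometry here is made up
# for illustration -- it is not real Kinect optics.

def shadowed_columns(depths, emitter_x):
    """Return the set of column indices whose surface never receives
    light from a point emitter at x = emitter_x, depth 0."""
    holes = set()
    for x, d in enumerate(depths):
        # Sample points along the ray from the emitter to the surface
        # at (x, d) and check for an intervening, nearer surface.
        for step in range(1, 50):
            t = step / 50.0
            rx = emitter_x + (x - emitter_x) * t   # x along the ray
            rd = d * t                             # depth along the ray
            col = int(round(rx))
            if 0 <= col < len(depths) and col != x and depths[col] < rd - 1e-9:
                holes.add(x)
                break
    return holes

# A flat "body" at depth 10 with one near obstruction (an armrest) at depth 4.
scene = [10, 10, 10, 4, 10, 10, 10, 10]
print(sorted(shadowed_columns(scene, emitter_x=0)))   # columns 4-7 are holes
print(sorted(shadowed_columns(scene, emitter_x=7)))   # now columns 0-2 are holes
```

In the example scene the obstruction at column 3 shadows the columns behind it, and moving the emitter to the other side shadows a different region instead, which is the intuition behind hoping that a change of light angle could carve the wheelchair out of the shadow it casts on the body.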
Sprint 1 Retrospective
Kinect Range Of Motion Concept<br />I want to create a range of motion concept to demonstrate adaptations of the Kinect sensor for Xbox 360 so that people with limited mobility, including those playing from wheelchairs, can take full advantage of the sensor’s capabilities. <br />
Rundown of Sprint Activities<br /><ul><li>Kinect motion sensor obtained; the official SDK will not be available until this spring</li><li>Search for program-specific tutorials on sensor manipulation:
Daniel Shiffman (see Bibliography for source) describes using a programming language called Processing to control the camera and manipulate the data streams. I may start with Processing, but I think the overall scope of the project is going to require a more mainstream product, so I’ll probably end up using Visual C++. </li></ul></li><li>Rundown of Sprint Activities Cont’d.<br /><ul><li>Finding other people working on similar projects:
Most of the Kinect hacks I have discovered thus far involve using the Kinect in non-gaming ways, like teaching piano, and various artistic endeavors such as painting on walls. The search for insight continues.
More is expected to become available when the official SDK is released, but this week I am just skimming the surface of image-manipulation and motion libraries; I’ve never done this type of programming before. </li></ul></li><li>Project Flowchart<br />
Current Progress<br /><ul><li>Kinect installed on PC and image can be seen on computer screen
Research suggests that the holes in the body-recognition image are due to shadows
Found code to manipulate sensor position and data streams
Flowchart for the project developed</li></ul></li><li>Current Feelings About The Project<br /><ul><li>On one hand, I feel like I’m flying by the seat of my pants because I’ve never done this Scrum process before. Trying to learn a new methodology is always intimidating. Add to that the fact that I’m delving into areas of programming that I’ve never seen before, much less worked with, and it’s easy to see why I’m a bit terrified.
On the other hand, I’m really optimistic that this project as a whole can be pulled off. I’m going to learn a ton of valuable things that will carry over into employment, and if this project goes as planned, the ultimate victory will be that anyone who wants to will be able to experience the immersion Kinect has to offer.</li></ul></li><li>Things That Went Well<br /><ul><li>As I’ve already stated, the red-letter part of the project so far has been getting the Kinect up and running on the PC and verifying one of my theories about why the sensor has trouble registering people in wheelchairs, or those with mobility issues in general. Again, in that respect, I have to tip my hat to Alexander and Sam for giving me a couple of approaches to explore.
Another really big plus thus far was finding code that will manipulate the sensor data streams and position. That’s going to help a lot as we go. Overall, just the feeling of having this thing move forward is a plus in itself. </li></ul></li><li>Things That Didn’t Go So Well<br /><ul><li>Finding out that Microsoft has yet to release the official SDK was a bit of a downer; I was hoping to be able to use those tools.
The fact that I haven’t been able to find people working on similar projects means that I’m going to have to dig a little deeper to understand different aspects of the project moving forward.
The overall confusion about this Sprint process (since it’s the first time I’ve been through it), combined with the possibly grandiose scope of my project, has led to a few headaches. </li></ul></li><li>Things I’ve Learned About Myself As It Relates To Time Management Thus Far<br /><ul><li>It’s very possible that I wait too long to ask questions.
Perhaps I need to spend most of my days searching through code rather than worrying so much about graphic data.
I’m in the initial “wow” phase of the overall project; I need to keep focusing on the smaller parts I break it into. </li></ul></li><li>Future Creative Ideas<br /><ul><li>The main thing I’m going to do in the next Sprint is continue to scour the code repositories so that I don’t have to reinvent the wheel for small parts of the project. Not only will that save time, it will keep my head from smoking like an exploded engine.
I’m definitely going to explore the idea of changing the angle of the infrared light source to see if I can create a “cut-out” of myself that’s recognizable independently of the wheelchair.
If time permits, I might take a look at the heat-signature idea that Sam proposed and see whether we could re-encode the depth image so that regions containing animate objects are colored differently from inanimate ones. That one might be a long shot, and it may need to be implemented later in the process if it does in fact require hardware adaptations. </li></ul></li>
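The re-encoding half of that last idea can be sketched without any new hardware. The sketch below assumes some other source (the hypothetical heat sensor) has already produced a per-pixel “animate” mask, and simply recolors a raw depth frame so that animate pixels render on one color ramp and everything else on another. The mask, the color scheme, and the function name are all my illustrative assumptions; only the 0–2047 raw depth range reflects the Kinect’s 11-bit depth values.

```python
# Hypothetical sketch of the "re-encode the depth image" idea: given a
# depth frame and a mask marking pixels believed to belong to an animate
# object (e.g. from a heat sensor), render the two classes of pixels in
# different color ramps so the person stands out from the wheelchair.
# The animate mask is assumed to come from elsewhere; the Kinect itself
# only supplies the depth values here.

def recolor_depth(depth, animate_mask, max_depth=2047):
    """Return an image of (r, g, b) tuples: animate pixels on a red
    ramp, inanimate pixels on a grey ramp, with nearer = brighter."""
    out = []
    for row_d, row_m in zip(depth, animate_mask):
        out_row = []
        for d, animate in zip(row_d, row_m):
            # Map depth to brightness: nearest (0) -> 255, farthest -> 0.
            v = 255 - (255 * min(d, max_depth)) // max_depth
            out_row.append((v, 0, 0) if animate else (v, v, v))
        out.append(out_row)
    return out

# A 2x2 frame: a near "person" on the left, near/far background elsewhere.
depth = [[100, 2000],
         [100,  100]]
mask  = [[True, False],
         [True, False]]
print(recolor_depth(depth, mask))
```

The point of the sketch is only the separation of concerns: whatever eventually supplies the animate/inanimate decision (heat sensing or anything else), the recoloring step itself is pure software and could be done on the existing depth stream.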