
LIDNUG Presentation - Kinect - The How, Where and When of Developing with It

These are the slides from my LIDNUG presentation on Developing with the Microsoft Kinect using the Kinect for Windows SDK. You can find the presentation recording at http://www.youtube.com/watch?v=0arzMSlqnHk



  1. KINECT – THE HOW, WHEN, AND WHERE OF DEVELOPING WITH IT
     Philip Wheat
     Manager, Chaotic Moon Labs
     phil@chaoticmoon.com
  2. WHO I AM
     Philip Wheat – currently managing the Labs Division of Chaotic Moon (http://www.chaoticmoon.com/Labs)
     Former Evangelist with Microsoft – Architecture and Startups.
     Active in the Maker Movement – it's about Bits AND Atoms, not Bits or Atoms.
     My neglected blog can be found at http://PhilWheat.net
     I can be found on Twitter as @pwheat
  3. WHY THIS TALK?
     There has been a lot of talk about next-generation interfaces – but most interface talk today still seems to be around HTML5/native.
     User interaction today is very focused on touch interfaces – but this limits various scenarios and usage models.
     We'll look at the components of interacting with the user through Kinect today and help you start looking at new scenarios.
  4. THE PROJECTS
  5. DEVELOPING WITH KINECT – THE HOW
  6. HARDWARE VERSIONS
     Remember, there are two versions of Kinect hardware – Kinect for Xbox 360 and Kinect for Windows.
     • Kinect for Xbox 360
       • Can be used for SDK development
       • Some Kinect for Windows functionality not supported
       • Not supported for production use with the Microsoft SDK
     • Kinect for Windows
       • Can be used for development and deployment
       • Full Kinect for Windows functionality (Near Mode, additional resolutions)
       • Supported for production
  7. HARDWARE CAPABILITIES
     • Cameras
       • RGB camera – 320x240, 640x480, 1024x768 (Kinect for Windows only)
       • Depth camera – 320x240, 640x480
       • Depth range – 0.8 m to 4 m (standard mode), 0.5 m to 3 m (near mode)
     • Audio
       • Microphone array (4 microphones)
       • Audio directional detection
       • Audio detection steering
     • Tilt
       • Tilt from -27 to +27 degrees from horizontal (accelerometer)
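     As an aside on the tilt capability, here is a minimal C# sketch (not from the slides) that clamps a requested angle to the range the sensor reports before applying it, assuming a connected KinectSensor from the Kinect for Windows SDK 1.x:

         using System;
         using Microsoft.Kinect;

         static class TiltHelper
         {
             public static void SetTilt(KinectSensor sensor, int requestedAngle)
             {
                 // Clamp to the supported range (-27 to +27 degrees) reported by the sensor.
                 int angle = Math.Max(sensor.MinElevationAngle,
                             Math.Min(sensor.MaxElevationAngle, requestedAngle));

                 // The SDK limits how often the tilt can change to protect the motor,
                 // so avoid calling this in a tight loop.
                 sensor.ElevationAngle = angle;
             }
         }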
  8. SOFTWARE CAPABILITIES
     • Cameras
       • RGB frames – event driven and polled
       • Depth frames – event driven and polled
       • Skeleton tracking – event driven and polled
       • Seated skeleton tracking (announced for Kinect for Windows 1.5)
     • Audio
       • Voice recognition in English
       • Voice recognition in French, Spanish, Italian, Japanese (announced for Kinect for Windows 1.5)
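     Since each stream can be consumed either through events or by polling, here is a hedged sketch of the polled style using ColorImageStream.OpenNextFrame with a timeout; the frame count and the 30 ms wait are arbitrary illustration values, and the event-driven style is what the later slides show.

         using Microsoft.Kinect;

         static class PolledColor
         {
             // Pull a fixed number of color frames by polling instead of events.
             public static void PollColorFrames(KinectSensor sensor, int frameCount)
             {
                 sensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
                 sensor.Start();

                 byte[] pixels = new byte[sensor.ColorStream.FramePixelDataLength];

                 for (int i = 0; i < frameCount; i++)
                 {
                     // Wait up to 30 ms for the next frame; null means none arrived in time.
                     using (ColorImageFrame frame = sensor.ColorStream.OpenNextFrame(30))
                     {
                         if (frame != null)
                         {
                             frame.CopyPixelDataTo(pixels);
                             // ... process pixels ...
                         }
                     }
                 }
             }
         }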
  9. OTHER HARDWARE ITEMS
     • Kinect needs 12V to operate – this requires a non-standard connector. For use with a PC, Kinect requires a supplemental power supply, which is included with the Kinect for Windows hardware but not with the Kinect 360 units bundled with Xbox slim models.
     • The Kinect tilt motor is update-rate limited to prevent overheating.
     • Lenses are available to reduce the focus range – but they produce distortion and are not supported.
     • Kinect has a fan to prevent overheating – be careful of enclosed areas.
     • Kinect contains a USB hub internally – connecting it through another USB hub is not supported and will only work with certain hubs (per practical testing).
  10. CONNECTING TO YOUR KINECT
     Some key code –
     • KinectSensor.KinectSensors.Count – number of Kinects.
     • kinectSensor.Status – enumeration of:
       • Connected (it's on and ready)
       • DeviceNotGenuine (not a Kinect)
       • DeviceNotSupported (Xbox 360 Kinect in production mode)
       • Disconnected (has been removed after being initialized)
       • Error (duh)
       • Initializing (can be in this state for seconds)
       • InsufficientBandwidth (USB bus controller contention)
       • NotPowered (5V power is good, 12V power problems)
       • NotReady (some component is still initializing)
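     As a concrete starting point (not from the slides), here is a minimal sketch that grabs the first connected sensor, starts it, and watches for status changes, using the SDK 1.x API named above:

         using System;
         using System.Linq;
         using Microsoft.Kinect;

         static class KinectStartup
         {
             public static KinectSensor ConnectFirstSensor()
             {
                 // Find the first sensor that reports Connected.
                 KinectSensor sensor = KinectSensor.KinectSensors
                     .FirstOrDefault(s => s.Status == KinectStatus.Connected);

                 // Watch for the sensor being unplugged, losing 12V power, etc.
                 KinectSensor.KinectSensors.StatusChanged += (s, e) =>
                     Console.WriteLine("Kinect status changed: " + e.Status);

                 if (sensor != null)
                 {
                     sensor.Start();   // May sit in Initializing for several seconds.
                 }
                 return sensor;
             }
         }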
  11. DEVELOPING WITH KINECT – WHEN
  12. VIDEO FRAMES
  13. SIMPLE VIDEO
     Kinect can be used to simply get RGB frames.
     Enable the stream –
         kinectSensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
     Start the Kinect –
         kinectSensor.Start();
     Set up your event handler (if doing events – if not you'll likely drop frames) –
         kinectSensor.ColorFrameReady += new EventHandler<Microsoft.Kinect.ColorImageFrameReadyEventArgs>(kinect_VideoFrameReady);
     Then the handler –
         void kinect_VideoFrameReady(object sender, Microsoft.Kinect.ColorImageFrameReadyEventArgs e)
         {
             using (ColorImageFrame image = e.OpenColorImageFrame())
             {
                 if (image != null)
                 {
                     // Allocate the pixel buffer on the first frame, then copy into it every frame.
                     if (colorPixelData == null)
                     {
                         colorPixelData = new byte[image.PixelDataLength];
                     }
                     image.CopyPixelDataTo(colorPixelData);
                     BitmapSource source = BitmapSource.Create(image.Width, image.Height, 96, 96,
                         PixelFormats.Bgr32, null, colorPixelData, image.Width * image.BytesPerPixel);
                     videoImage.Source = source;
                     ...
  14. DEPTH FRAMES
  15. DEPTH FRAMES
     Additionally, you can use depth frames to give you more information about your environment.
         kinectSensor.DepthStream.Enable(DepthImageFormat.Resolution320x240Fps30);
         kinectSensor.Start();
         kinectSensor.DepthFrameReady += new EventHandler<Microsoft.Kinect.DepthImageFrameReadyEventArgs>(kinect_DepthImageFrameReady);

         void kinect_DepthImageFrameReady(object sender, Microsoft.Kinect.DepthImageFrameReadyEventArgs e)
         {
             using (DepthImageFrame imageFrame = e.OpenDepthImageFrame())
             {
                 if (imageFrame != null)
                 {
                     // Allocate the buffer on the first frame.
                     if (depthPixelData == null)
                     {
                         depthPixelData = new short[imageFrame.PixelDataLength];
                     }
                     imageFrame.CopyPixelDataTo(this.depthPixelData);
                     ...
     But! Each depth value also carries the player index in its low bits, so shift it out to get the distance:
         int depth = depthPixelData[x + width * y] >> DepthImageFrame.PlayerIndexBitmaskWidth;
  16. SKELETON TRACKING
  17. SKELETON TRACKING
     And one of the most interesting items is skeleton tracking – (this should start looking familiar):
         kinectSensor.SkeletonStream.Enable();
         kinectSensor.Start();
         kinectSensor.SkeletonFrameReady += new EventHandler<Microsoft.Kinect.SkeletonFrameReadyEventArgs>(kinect_SkeletonFrameReady);

         void kinect_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
         {
             using (SkeletonFrame skeletonFrame = e.OpenSkeletonFrame())
             {
                 if (skeletonFrame != null)
                 {
                     if ((this.skeletonData == null) ||
                         (this.skeletonData.Length != skeletonFrame.SkeletonArrayLength))
                     {
                         this.skeletonData = new Skeleton[skeletonFrame.SkeletonArrayLength];
                     }
                     skeletonFrame.CopySkeletonDataTo(this.skeletonData);

                     // Pick the first skeleton the sensor is actively tracking.
                     Skeleton thisSkeleton = null;
                     foreach (Skeleton skeleton in this.skeletonData)
                     {
                         if ((SkeletonTrackingState.Tracked == skeleton.TrackingState) && (thisSkeleton == null))
                         {
                             thisSkeleton = skeleton;
                         }
                     }
                     if (thisSkeleton != null)
                     {
                         // ... use thisSkeleton.Position ...
                     }
  18. SKELETON TRACKING (CONT)
     Key items for skeleton tracking –
     • SkeletonPoint – X, Y, Z
     • JointType enumeration – AnkleLeft, AnkleRight, ElbowLeft, ElbowRight, FootLeft, FootRight, HandLeft, HandRight, Head, HipCenter, HipLeft, HipRight, KneeLeft, KneeRight, ShoulderCenter, ShoulderLeft, ShoulderRight, Spine, WristLeft, WristRight
     • JointTrackingState enumeration – Inferred, NotTracked, Tracked
     These together tell you not just where joints are, but how confident the system is. Even so – remember that these are always estimates – you'll need to be prepared to handle jittery data. (A smoothing sketch follows below.)
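     Because the joint positions are estimates, one common way to tame the jitter is to let the SDK smooth the skeleton stream for you. The sketch below is illustrative only – the parameter values are arbitrary starting points, not recommendations – and uses the TransformSmoothParameters overload of SkeletonStream.Enable from SDK 1.x plus a per-joint tracking-state check.

         using Microsoft.Kinect;

         static class SkeletonHelpers
         {
             public static void EnableSmoothedSkeletons(KinectSensor kinectSensor)
             {
                 // Illustrative smoothing values – tune for your own scenario.
                 var smoothing = new TransformSmoothParameters
                 {
                     Smoothing = 0.5f,
                     Correction = 0.5f,
                     Prediction = 0.5f,
                     JitterRadius = 0.05f,
                     MaxDeviationRadius = 0.04f
                 };
                 kinectSensor.SkeletonStream.Enable(smoothing);
             }

             public static void UseRightHand(Skeleton thisSkeleton)
             {
                 // Check the joint's tracking state before trusting its position.
                 Joint rightHand = thisSkeleton.Joints[JointType.HandRight];
                 if (rightHand.TrackingState == JointTrackingState.Tracked)
                 {
                     SkeletonPoint p = rightHand.Position;   // X, Y, Z in meters
                     // ... use p ...
                 }
             }
         }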
  19. SPEECH RECOGNITION
     Building a grammar is very accessible and relatively pain free. (A bit more code than the others.)
     Key items – it takes up to 4 seconds for the recognizer to be ready.
     Each recognition has a confidence level of 0–1.
     Results are text and match text you send to a GrammarBuilder ("yes", "no", "launch").
     Multiple-word commands are helpful for disambiguation, but chained commands work much better. The recognition engine can have nested grammars to help you with this.
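     To make the grammar discussion concrete, here is a hedged sketch of building a small "yes / no / launch" grammar with the Microsoft.Speech recognition types the Kinect samples use. Wiring the recognizer to the Kinect microphone array (KinectAudioSource, audio format, recognizer selection) is omitted, and the 0.7 confidence threshold is just an example.

         using System;
         using Microsoft.Speech.Recognition;

         static class CommandGrammar
         {
             public static Grammar BuildGrammar()
             {
                 // The command words come straight from the slide.
                 var commands = new Choices("yes", "no", "launch");
                 var builder = new GrammarBuilder(commands);
                 // In real use, set builder.Culture to match the recognizer's culture.
                 return new Grammar(builder);
             }

             public static void HandleRecognition(object sender, SpeechRecognizedEventArgs e)
             {
                 // Confidence runs 0-1; only act on reasonably confident matches.
                 if (e.Result.Confidence > 0.7f)
                 {
                     Console.WriteLine("Heard: " + e.Result.Text);
                 }
             }
         }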
  20. DEVELOPING WITH KINECT – WHERE
  21. LOCATION
     • Human interface
       • Kinect skeleton tracking is optimized for full-body imaging with the camera positioned between waist and head height.
       • The Kinect 1.5 software update (est. May 2012) is scheduled to support seated skeleton tracking.
     • Speech recognition
       • Depth or skeleton tracking can enable recognition vectoring to increase confidence.
       • Confidence level can be misleading if grammar items are close together (false matching).
       • If possible, use multiple-word commands for validation. Build a command grammar and use it for error/confidence checks.
  22. LOCATION (CONT)
     • Depth frames
       • Sunlight/IR can affect results.
       • Items in the depth frame cast a shadow – be prepared for interactions in those areas.
       • Depth frames are not required to be the same resolution as the associated video frames (see the mapping sketch below).
     • Environment
       • Kinect is surprisingly robust across environments.
       • IR washout is the biggest environmental factor.
       • Camera angle is the biggest factor for skeleton tracking.
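     Because the depth and color streams can run at different resolutions, pixel coordinates do not line up one-to-one between them. The sketch below assumes the SDK 1.x mapping helper on KinectSensor (MapDepthToColorImagePoint); later SDK releases moved this to a CoordinateMapper class, so treat the exact call as an assumption and check the version you target.

         using Microsoft.Kinect;

         static class DepthToColor
         {
             // Map a single depth pixel (with its raw packed value) into
             // color-image coordinates for the formats used in these slides.
             public static ColorImagePoint MapPixel(KinectSensor sensor, int depthX, int depthY, short depthPixelValue)
             {
                 return sensor.MapDepthToColorImagePoint(
                     DepthImageFormat.Resolution320x240Fps30,
                     depthX, depthY,
                     depthPixelValue,
                     ColorImageFormat.RgbResolution640x480Fps30);
             }
         }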
  23. FURTHER INFORMATION
     Kinect for Windows information: http://www.KinectForWindows.com
     Team blog: http://blogs.msdn.com/b/kinectforwindows
     Channel 9: http://channel9.msdn.com/Tags/kinect
     And of course, our projects page: http://ChaoticMoon.com/Labs !
  24. Q&A
     Or contact me at: @PWHEAT / PHILWHEAT.NET
