2011 03 01 MindCamp - Kinect y C#


  • The name blends "kinetic," which means to be in motion, and "connect," which means it "connects you to the friends and entertainment you love." Natural User Interface. Making beginners feel like experts.
  • Play video if you have time and if people have not seen Kinect in action
  • Color VGA video camera - aids facial recognition and other detection features by detecting three color components: red, green, and blue. Microsoft calls this an "RGB camera," referring to the color components it detects. Depth sensor - an infrared projector and a monochrome CMOS (complementary metal-oxide-semiconductor) sensor work together to "see" the room in 3-D regardless of the lighting conditions. CMOS is a technology for constructing integrated circuits; it is used in microprocessors, microcontrollers, static RAM, and other digital logic, as well as in analog circuits such as image sensors, data converters, and highly integrated transceivers. Multi-array microphone - an array of four microphones that can isolate the voices of the players from the noise in the room, so a player can be a few feet away from the microphone and still use voice controls. What comes in the box: Kinect sensor for Xbox 360, power supply cable, user's manual, Wi-Fi extension cable, Kinect Adventures game. Color VGA motion camera: 640 x 480 pixel resolution at 30 FPS. Depth camera: 640 x 480 pixel resolution at 30 FPS. Array of 4 microphones supporting single-speaker voice recognition.
  • Kinect's software layer is the essential component that adds meaning to what the hardware detects. When you first start up Kinect, it reads the layout of your room and configures the play space you'll be moving in. Then, Kinect detects and tracks 48 points on each player's body, mapping them to a digital reproduction of that player's body shape and skeletal structure, including facial details [source: Rule]. http://electronics.howstuffworks.com/microsoft-kinect3.htm http://www.popsci.com/gadgets/article/2010-01/exclusive-inside-microsofts-project-natal
  • Kinect software learns from "experience." In an interview with Scientific American, Alex Kipman, Microsoft's Director of Incubation for Xbox 360, explains Project Natal's approach to developing the Kinect software: "Every single motion of the body is an input," which creates seemingly endless combinations of actions [source: Kuchinskas]. Knowing this, the developers decided not to program that seemingly endless combination into pre-established actions and reactions in the software. Instead, they would "teach" the system how to react based on how humans learn: by classifying the gestures of people in the real world. To start the teaching process, Kinect developers gathered massive amounts of motion-capture data from real-life scenarios, then processed that data with a machine-learning algorithm by Jamie Shotton, a researcher at Microsoft Research Cambridge in England. Ultimately, the developers mapped the data to models representing people of different ages, body types, genders, and clothing, and taught the system to classify the skeletal movements of each model, emphasizing the joints and the distances between those joints.
  • An article in Popular Science describes the four steps Kinect's "brain" goes through 30 times per second to read and respond to your movements [source: Duffy]. The Kinect software goes a step further than just detecting and reacting to what it can "see": it can distinguish players and their movements even when they are partially hidden, extrapolating what the rest of your body is doing as long as it can detect some parts of it. This lets players jump in front of each other during a game, or stand behind pieces of furniture in the room.
  • Depth sensor. An infrared projector combined with a monochrome CMOS sensor allows Kinect to see the room in 3-D (as opposed to inferring the room from a 2-D image) under any lighting conditions.
  • A 320×240 depth stream. Depth is recovered by projecting invisible infrared (IR) dots into a room. On a hardware level, the optical system is fairly basic: a Class 1 laser is projected into the room, and the sensor detects what's going on based on what is reflected back at it. Together, the projector and sensor create a depth map. The regular video camera is held at a specific distance from the 3-D part of the optical system, in precise alignment, so that Kinect can blend the depth map and the RGB picture for dynamic, on-the-fly green screening.
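The on-the-fly green screening described above reduces to masking the RGB image with a depth range. Here is a minimal Python sketch of that idea; the function name, pixel data, and depth thresholds are invented for illustration and are not Kinect's actual pipeline.

```python
# Illustrative depth-based "green screening": keep RGB pixels whose
# depth falls inside a foreground band, blank out the rest.
# Assumes the depth map is already aligned with the RGB image,
# as the Kinect optics are designed to allow.

def green_screen(rgb, depth, near=500, far=1500):
    """rgb: list of (r, g, b) per pixel; depth: depth in mm per pixel."""
    out = []
    for px, d in zip(rgb, depth):
        if near <= d <= far:        # pixel lies in the foreground band
            out.append(px)
        else:
            out.append((0, 0, 0))   # replace the background with black
    return out

rgb   = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
depth = [800, 3000, 1200]           # the middle pixel is far background
print(green_screen(rgb, depth))     # -> [(255, 0, 0), (0, 0, 0), (0, 0, 255)]
```

In a real system the mask would come from the player segmentation step rather than a fixed depth band, but the blend of depth map and RGB image works the same way.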
  • RGB camera. Kinect has a video camera that delivers the three basic color components. As part of the Kinect sensor, the RGB camera helps enable facial recognition and more.
  • Four different microphones allow Kinect to figure out where a sound is coming from.
  • Multiarray microphone. Kinect has a microphone array that can locate voices by sound and filter out ambient noise. The multiarray microphone enables headset-free Xbox LIVE party chat and more.
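Locating a voice with several microphones rests on measuring the tiny arrival-time differences between them. The following Python sketch estimates that delay for two microphones by brute-force cross-correlation; the signals, sample values, and function name are invented, and Kinect's actual beamforming pipeline is proprietary.

```python
# Estimate the delay (in samples) of one microphone's signal relative
# to another by finding the lag that maximizes their cross-correlation.
# A known delay maps to a direction once the microphone spacing and
# the speed of sound are known.

def best_lag(a, b, max_lag):
    """Return the lag of b relative to a with the highest correlation."""
    best, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(a[i] * b[i + lag]
                    for i in range(len(a))
                    if 0 <= i + lag < len(b))
        if score > best_score:
            best, best_score = lag, score
    return best

mic1 = [0, 0, 1, 3, 1, 0, 0, 0]
mic2 = [0, 0, 0, 0, 1, 3, 1, 0]     # the same pulse, arriving 2 samples later
print(best_lag(mic1, mic2, 3))       # -> 2
```

A positive lag means the sound reached the second microphone later, i.e., the source is closer to the first one.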
  • Microsoft software. A proprietary software layer makes the magic of Kinect possible. This layer differentiates Kinect from any other technology on the market through its ability to recognize the human body and filter out other visual noise.
  • Micron-scale tolerances on large components. Manufacturing process yields ~1 device / 1.5 seconds.
  • http://research.microsoft.com/apps/video/default.aspx?id=139295
  • http://research.microsoft.com/en-us/projects/DryadLINQ/ - DryadLINQ is a simple, powerful, and elegant programming environment for writing large-scale data-parallel applications running on large PC clusters.

    1.
    2. Bruno Capuano @elbruno
       MVP – Visual Studio ALM
       b.capuano@gmail.com
       Avanade
       www.elbruno.com
       Kinect y C#: another way to conquer the world…
    3. What is Kinect?
       A new way to play, where YOU are the controller
       Voice Recognition
       Gesture Recognition
       Face Recognition
       You Recognition
    4. Option A:
       Why Kinect?
    5. Option YOU:
       Why Kinect?
    6.
    7. What is Kinect?
       A device that combines an RGB camera, a depth sensor, and a microphone array
       RGB camera for recognizing the three basic colors
       Depth sensor that lets it "see a room in 3-D"
       The microphone array detects voices and isolates them from ambient noise
       A software black box that ties everything together and does all the magic
    8. What is Kinect?
       ① ③ ②
    9. What is Kinect?
       Source: iFixit
    10. 3D Depth Sensors ① ③
        What is Kinect?
    11. Invisible Infrared (IR) Dots
        320x240
    12. RGB Camera ②
        What is Kinect?
    13. What is Kinect?
        IR laser projector
        IR camera
        RGB camera
        Source: iFixit
    14. RGB Camera
        Used for facial recognition
        Facial recognition requires a "training" phase
        Needs good lighting
    15. Multi-array Microphone
        What is Kinect?
    16. Sound sensors
        • 4-channel multi-array microphone
        • Synchronized with the console to cancel out the game's audio
    18. Motorized Tilt
        What is Kinect?
    19. Software
        Research
        Testing
        Data collection
        And the "black box"
    20. Prime Sense Chip
        Xbox Hardware Engineering notably improved the quality and speed based on Prime Sense's designs
    21. Projected IR pattern
        Source: www.ros.org
    22. Depth computation
        Source: http://j.mp/eXsCiE
    23. Depth map
        Source: www.insidekinect.com
    24. Video output from Kinect
        30 Hz frame rate
        57° field of view
        8-bit VGA RGB, 640 x 480
        11-bit monochrome depth, 320 x 240
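The depth stream being 11-bit (values 0..2047) while ordinary grayscale displays take 8 bits is a practical detail every Kinect viewer has to handle. A minimal Python sketch of one common choice, dropping the three low-order bits; the scaling choice and function name are ours, not part of the Kinect spec.

```python
# Map an 11-bit depth value (0..2047) to an 8-bit gray level (0..255)
# by discarding the 3 least significant bits. This loses fine depth
# resolution but makes the stream directly displayable.

def depth_to_gray(d11):
    return d11 >> 3

print(depth_to_gray(2047))  # farthest representable value -> 255
print(depth_to_gray(0))     # nearest / unknown -> 0
```

Viewers that care about near-range detail often rescale or invert instead, so close objects appear bright; either way the conversion is a one-liner per pixel.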
    25. Xbox 360 Hardware
        • Triple-core PowerPC 970, 3.2 GHz
        • Hyperthreaded, 2 threads/core
        • 500 MHz ATI graphics card
        • DirectX 9.5
        • 512 MB RAM
        • 2005 performance envelope
        • Must handle real-time vision AND a modern game
        Source: http://www.pcper.com/article.php?aid=940&type=expert
    34. How does Kinect work? (I)
    35. 1 - How does Kinect know what I am doing?
        "Xbox?!"
        "Let's Play!"
    36. 2 - How did Kinect learn all this?
        "Xbox?!"
        "Let's Play!"
    37. MS Research: Object Recognition
        J. Shotton, J. Winn, C. Rother, A. Criminisi, TextonBoost: Joint Appearance, Shape and Context Modeling for Multi-Class Object Recognition and Segmentation. European Conference on Computer Vision, 2006
    38. MS Research: Human Body Tracking
        Wide range of motion, but little "agility," and not real-time
        R. Navaratnam, A. Fitzgibbon, R. Cipolla, The Joint Manifold Model for Semi-supervised Multi-valued Regression. IEEE Intl Conf on Computer Vision, 2007
    39. Xbox calls MSR: September 2008
        We need a body tracker with:
        All body motions…
        All agilities…
        10x real-time…
        For multiple players…
        … and it has to be 3D
    40. MSR & Xbox: Machine Learning
        Step 1: Data collection
        The team visits different locations and films real Xbox users
        A Hollywood motion-capture studio generates billions of CG images
    41. Overlay Training Data
    42. Identifying the body
        Identify each pixel associated with one of the 32 parts of the human body
        Cluster the possible configurations of "parts" that match the joints
        Present the user with the probability closest to reality
        t=1  t=2  t=3
    43. Training
        Millions of reference images -> millions of classification parameters
        Very far from "embarrassingly parallel"
        New algorithm for solving distributed decision trees
        Massive use of DryadLINQ
        Available for download
        Distributed Data-Parallel Computing Using a High-Level Programming Language. M. Isard, Y. Yu, International Conference on Management of Data (SIGMOD), July 2009
    44. Programmer's View
    45. How does Kinect work? (II)
    46. Extensible architecture
        Sensor -> raw data -> Expert 1 / Expert 2 / Expert 3 -> probabilistic skeleton estimates -> Arbiter (fuses the hypotheses) -> final estimate
        Experts range from stateless to stateful
    47. Step-by-step recognition
        Sensor -> depth map -> per-player segmentation against the background -> body-part classification -> joint identification -> skeleton creation
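The per-frame pipeline on this slide can be sketched as composed functions. Every function body below is an invented stand-in (the thresholds, labels, and names are not Kinect's); the sketch only shows the data flow from depth map to joint list.

```python
# Toy version of the recognition pipeline: depth map -> player
# segmentation -> body-part classification -> joints. All stage
# logic is a placeholder for illustration.

def segment_players(depth_map):
    # stand-in: treat any pixel closer than 2000 mm as a player pixel
    return [d if d < 2000 else 0 for d in depth_map]

def classify_body_parts(player_pixels):
    # stand-in: label near pixels "head", mid-range pixels "torso"
    return ["head" if 0 < d < 1000 else "torso" if d else None
            for d in player_pixels]

def joints_from_parts(parts):
    # stand-in: one joint proposal per distinct part label
    return sorted({p for p in parts if p})

def skeleton(depth_map):
    return joints_from_parts(classify_body_parts(segment_players(depth_map)))

print(skeleton([800, 1500, 3000]))  # -> ['head', 'torso']
```

The real stages, a trained per-pixel classifier and GPU joint estimation, are described on the slides that follow; the composition shape is the point here.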
    48. Examples
    49. The wish list ("letter to the Three Kings")
        No calibration:
        • No start/pause pose
        • No background calibration
        • No body calibration
        Minimal CPU usage
        Independent of lighting
    52. Testing: the test matrix
        body size, hair, FOV, body type, clothes, angle, pets, furniture
    53. Preprocessing
        • Identify the floor (ground plane)
        • Isolate the background (e.g., a sofa)
        • Identify the players
    56. 2 trackers
        Head and hand tracking
        Body tracking
        (not exposed through the SDK)
    57. The body-tracking problem
        Classifier: input = depth map, output = body parts
        Runs on the GPU at 320x240
    58. Training Kinect
        Starts from ground-truth data, aligned with body parts
        Kinect must be trained to handle:
        Poses
        Position in the scene
        Body sizes and shapes
    59. Training Kinect
        Use synthetic data (3D avatar model)
        • Inject noise
    60. Training Kinect
        Motion capture:
        • Unrealistic environments
        • Unrealistic clothing
        • Low throughput
    63. Training Kinect
        Manual tagging:
        • Requires training many people
        • Potentially expensive
        • The tagging tool introduces biases into the data
        • Quality control is an issue
        • 1000 hrs @ 20 contractors ~= 20 years
    68. Training Kinect
        Amazon Mechanical Turk:
        • Build a web-based tool
        • The tagging tool is 2D only
        • Quality control can be done with redundant HITs
        • 2000 frames/hr @ $0.04/HIT -> 6 yrs @ $80/hr
    72. Classifying pixel by pixel
        Compute P(ci | wi)
        pixels i = (x, y); body part ci; image window wi
        Learn the classifier P(ci | wi) from the training data using randomized decision forests
        Example image windows; the window moves with the classifier
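A randomized decision forest estimates P(ci | wi) by letting each tree vote a class distribution and averaging the votes. The sketch below shows just that averaging step; the two hard-coded "trees" and their probabilities are toys standing in for trees learned from the training data.

```python
# Forest prediction = average of per-tree class distributions.
# tree1/tree2 are hand-written stand-ins for learned decision trees:
# each maps a scalar feature value to P(body part | window).

def tree1(f):
    return {"head": 0.9, "torso": 0.1} if f > 0 else {"head": 0.2, "torso": 0.8}

def tree2(f):
    return {"head": 0.7, "torso": 0.3} if f > 10 else {"head": 0.4, "torso": 0.6}

def forest_predict(trees, f):
    """Average the class distributions produced by each tree."""
    dist = {}
    for t in trees:
        for c, p in t(f).items():
            dist[c] = dist.get(c, 0.0) + p / len(trees)
    return dist

print(forest_predict([tree1, tree2], 20))  # head gets (0.9 + 0.7) / 2 = 0.8
```

Randomizing which features each tree sees during training is what makes the averaged vote robust; at run time the per-pixel evaluation is trivially parallel, which is why it maps well to the GPU.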
    73. Features

        f_θ(I, x) = d_I(x + u/d_I(x)) − d_I(x + v/d_I(x))

        d_I(x) -- depth of pixel x in image I
        θ = (u, v) -- parameter describing offsets u and v
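The depth-comparison feature on this slide can be computed directly: the two probe offsets u and v are scaled by 1/depth at x, which makes the feature roughly depth-invariant, and the difference of the probed depths is the feature value. In this Python sketch the tiny depth image, the integer division, and the large "background" default for off-image probes are all illustrative simplifications.

```python
# f(I, x) = d_I(x + u/d_I(x)) - d_I(x + v/d_I(x))
# depth: dict (px, py) -> depth value; x, u, v: 2-D integer tuples.

def feature(depth, x, u, v):
    d_x = depth[x]
    def probe(offset):
        # scale the offset by 1/depth so the probe shrinks for far pixels
        p = (x[0] + offset[0] // d_x, x[1] + offset[1] // d_x)
        return depth.get(p, 10_000)   # off-image probes read as far background
    return probe(u) - probe(v)

depth = {(0, 0): 2, (1, 0): 2, (0, 1): 8}
print(feature(depth, (0, 0), (2, 0), (0, 2)))  # 2 - 8 = -6
```

Each decision-tree node tests one such feature against a threshold, so evaluating a tree for a pixel is just a handful of depth lookups.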
    74. Step 1: "Body" to "Joint Positions"
        Analyze the 3-D positions of all identified body parts
        Generate a (position, confidence) collection per part
        Generate multiple options for each body part
        The work is done on the GPU
    75. Step 2: "Joint Positions" to "Skeleton"
        Based on 3 "skeleton" models
        The process relies on:
        Distance between connected points (relative to "body size")
        Proximity of the bones to the body parts
        It also applies smoothness constraints
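Turning classified part pixels into a (position, confidence) joint proposal can be sketched as a confidence-weighted mean of the 3-D positions labeled with that part. The real system runs mean shift on the GPU; this Python sketch is the simplest analogue, and the pixel data and function name are invented.

```python
# One joint proposal for a single body part: the confidence-weighted
# mean of the part's classified 3-D pixel positions, with the summed
# confidence as the proposal's own confidence score.

def joint_proposal(pixels):
    """pixels: list of ((x, y, z), confidence) for one body part."""
    total = sum(c for _, c in pixels)
    pos = tuple(sum(p[i] * c for p, c in pixels) / total for i in range(3))
    return pos, total

pixels = [((0.0, 1.0, 2.0), 1.0), ((2.0, 1.0, 2.0), 3.0)]
print(joint_proposal(pixels))  # position is pulled toward the high-confidence pixel
```

Running this per part yields the joint candidates that Step 2 then stitches into a skeleton using bone-length and smoothness constraints.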
    76. How do we define the "skeleton"?
    77. DEMO
    78. THANK YOU