2011 03 01 MindCamp - Kinect y C#

  • The name Kinect combines "kinetic," which means to be in motion, and "connect," meaning it "connects you to the friends and entertainment you love." Natural User Interface: making beginners feel like experts.
  • Play video if you have time and if people have not seen Kinect in action
  • Color VGA video camera: this camera aids facial recognition and other detection features by detecting three color components: red, green and blue. Microsoft calls this an "RGB camera," referring to the color components it detects.
Depth sensor: an infrared projector and a monochrome CMOS sensor work together to "see" the room in 3-D regardless of the lighting conditions. Complementary metal-oxide-semiconductor (CMOS) is a technology for constructing integrated circuits; it is used in microprocessors, microcontrollers, static RAM and other digital logic circuits, as well as in several analog circuits such as image sensors, data converters and highly integrated transceivers for many types of communication.
Multi-array microphone: an array of four microphones that can isolate the voices of the players from the noise in the room, so a player can be a few feet away from the microphone and still use voice controls.
What comes in the box: Kinect sensor for Xbox 360, power supply cable, user's manual, Wi-Fi extension cable, Kinect Adventures game. Color VGA motion camera: 640 x 480 pixel resolution at 30 FPS. Depth camera: 640 x 480 pixel resolution at 30 FPS. Array of 4 microphones supporting single-speaker voice recognition.
Kinect's software layer is the essential component that adds meaning to what the hardware detects. When you first start up Kinect, it reads the layout of your room and configures the play space you'll be moving in. Then Kinect detects and tracks 32 points on each player's body, mapping them to a digital reproduction of that player's body shape and skeletal structure, including facial details.
http://electronics.howstuffworks.com/microsoft-kinect3.htm
http://www.popsci.com/gadgets/article/2010-01/exclusive-inside-microsofts-project-natal
Kinect software learns from "experience." Kinect's software layer is the essential component that adds meaning to what the hardware detects; when you first start up Kinect, it reads the layout of your room and configures the play space, then detects and tracks 48 points on each player's body, mapping them to a digital reproduction of that player's body shape and skeletal structure, including facial details [source: Rule].
In an interview with Scientific American, Alex Kipman, Microsoft's Director of Incubation for Xbox 360, explains Project Natal's approach to developing the Kinect software: "Every single motion of the body is an input," which creates seemingly endless combinations of actions [source: Kuchinskas]. Knowing this, the developers decided not to program that seemingly endless set of combinations into pre-established actions and reactions in the software. Instead, they would "teach" the system how to react the way humans learn: by classifying the gestures of people in the real world.
To start the teaching process, Kinect developers gathered massive amounts of motion-capture data from real-life scenarios. Then they processed that data using a machine-learning algorithm by Jamie Shotton, a researcher at Microsoft Research Cambridge in England. Ultimately, the developers were able to map the data to models representing people of different ages, body types, genders and clothing.
With select data, the developers were able to teach the system to classify the skeletal movements of each model, emphasizing the joints and the distances between those joints. An article in Popular Science describes the four steps Kinect's "brain" goes through 30 times per second to read and respond to your movements [source: Duffy]. The Kinect software goes a step further than just detecting and reacting to what it can "see": Kinect can also distinguish players and their movements even if they are partially hidden, extrapolating what the rest of your body is doing as long as it can detect some parts of it. This allows players to jump in front of each other during a game or to stand behind pieces of furniture in the room.
  • Depth sensor. An infrared projector combined with a monochrome CMOS sensor allows Kinect to see the room in 3-D (as opposed to inferring the room from a 2-D image) under any lighting conditions.
  • Kinect provides a 320×240 depth stream. Depth is recovered by projecting invisible infrared (IR) dots into the room. The way the optical system works, on a hardware level, is fairly basic: a Class 1 laser is projected into the room, and the sensor detects what is going on based on what is reflected back at it. Together, the projector and sensor create a depth map. The regular video camera is held at a specific distance from the 3D part of the optical system, in precise alignment, so that Kinect can blend the depth map and the RGB picture together for dynamic, on-the-fly green screening.
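A minimal sketch of that green-screening idea, assuming the depth map has already been registered to the RGB frame (the real sensor relies on the fixed, calibrated offset between the two cameras mentioned above); any pixel farther away than a chosen threshold is replaced with a background color:

```csharp
using System;

static class GreenScreen
{
    // Keep only the pixels that are closer than maxDepthMm; everything else
    // becomes the replacement color. rgb is packed as 0x00RRGGBB per pixel,
    // depthMm holds one depth value in millimetres per pixel (0 = no reading).
    public static int[] Apply(int[] rgb, ushort[] depthMm, ushort maxDepthMm, int background)
    {
        var result = new int[rgb.Length];
        for (int i = 0; i < rgb.Length; i++)
        {
            bool isForeground = depthMm[i] != 0 && depthMm[i] <= maxDepthMm;
            result[i] = isForeground ? rgb[i] : background;
        }
        return result;
    }
}
```

With a player standing in front of the sensor, calling Apply with maxDepthMm around 2000 keeps roughly everything within two metres and blanks the rest.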
  • RGB camera. Kinect has a video camera that delivers the three basic color components. As part of the Kinect sensor, the RGB camera helps enable facial recognition and more.
  • Four different microphones allow Kinect to figure out where the sound is coming from
  • Multi-array microphone. Kinect has a microphone array that is able to locate voices by sound and filter out ambient noise. The multi-array microphone enables headset-free Xbox LIVE party chat and more.
  • Microsoft software. A proprietary software layer makes the magic of Kinect possible. This layer differentiates Kinect from any other technology on the market through its ability to recognize the human body and filter out other visual noise.
  • Micron-scale tolerances on large components. The manufacturing process yields ~1 device / 1.5 seconds.
  • http://research.microsoft.com/apps/video/default.aspx?id=139295
  • http://research.microsoft.com/apps/video/default.aspx?id=139295
  • DryadLINQ (http://research.microsoft.com/en-us/projects/DryadLINQ/) is a simple, powerful, and elegant programming environment for writing large-scale data-parallel applications running on large PC clusters.
  • Transcript of "2011 03 01 MindCamp - Kinect y C#"

    1. (title slide, image only)
    2. Bruno Capuano (@elbruno), MVP – Visual Studio ALM, b.capuano@gmail.com, Avanade, www.elbruno.com. Kinect and C#: another way to conquer the world…
    3. What is Kinect? A new way to play, where YOU are the controller: voice recognition, gesture recognition, face recognition, you recognition.
    4. Why Kinect? Option A:
    5. Why Kinect? Option YOU:
    6. (image-only slide)
    7. What is Kinect? A device that combines an RGB camera, a depth sensor and a microphone array. The RGB camera captures the three basic color components; the depth sensor lets it "see a room in 3D"; the microphone array detects voices and isolates them from ambient noise; and a software "black box" ties it all together and does all the magic.
    8. What is Kinect? (photo of the sensor with components labeled ①, ②, ③)
    9. What is Kinect? (teardown photo; source: iFixit)
    10. What is Kinect? ① ③ 3D depth sensors.
    11. Invisible infrared (IR) dots, 320x240.
    12. What is Kinect? ② RGB camera.
    13. What is Kinect? IR laser projector, IR camera, RGB camera (source: iFixit).
    14. RGB camera: used for facial recognition; facial recognition requires a "training" phase and needs good lighting.
    15. What is Kinect? Multi-array microphone.
    16. Sound sensors: a 4-channel multi-array microphone, synchronized with the console to cancel out the game audio.
    18. What is Kinect? Motorized tilt.
    19. And the "black box": software, research, testing, data collection.
    20. PrimeSense chip: Xbox Hardware Engineering notably improved quality and speed, starting from PrimeSense's designs.
    21. Projected IR pattern (source: www.ros.org).
    22. Depth computation (source: http://j.mp/eXsCiE).
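The slide only points at an external diagram. As a rough illustration of the principle (not PrimeSense's proprietary algorithm), depth in a structured-light system can be recovered by triangulation: the farther a surface is, the smaller the displacement of a projected IR dot between the projector's and the camera's viewpoints. A toy sketch assuming a simple pinhole model, with illustrative numbers only:

```csharp
using System;

static class StructuredLightDepth
{
    // Triangulation: Z = f * b / d, where f is the focal length in pixels,
    // b the projector-camera baseline in metres and d the observed disparity in pixels.
    public static double DepthFromDisparity(double focalPx, double baselineM, double disparityPx)
    {
        if (disparityPx <= 0) throw new ArgumentOutOfRangeException(nameof(disparityPx));
        return focalPx * baselineM / disparityPx;
    }

    static void Main()
    {
        // Illustrative numbers only: ~580 px focal length, 7.5 cm baseline, 20 px disparity.
        Console.WriteLine(DepthFromDisparity(580, 0.075, 20)); // ~2.18 m
    }
}
```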
    23. Depth map (source: www.insidekinect.com).
    24. Kinect video output: 30 Hz frame rate, 57° field of view, 8-bit VGA RGB at 640 x 480, 11-bit monochrome depth at 320 x 240.
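As a quick sanity check of what those figures mean for the console, a back-of-the-envelope data-rate calculation (assuming 3 bytes per RGB pixel and 2 bytes per depth sample, since an 11-bit value is normally stored in 16 bits):

```csharp
using System;

class KinectBandwidth
{
    static void Main()
    {
        const int fps = 30;
        long rgbBytesPerFrame = 640 * 480 * 3;     // 8-bit R, G, B per pixel
        long depthBytesPerFrame = 320 * 240 * 2;   // 11-bit depth padded to 16 bits

        double rgbMBps = rgbBytesPerFrame * fps / 1_000_000.0;     // ~27.6 MB/s
        double depthMBps = depthBytesPerFrame * fps / 1_000_000.0; // ~4.6 MB/s

        Console.WriteLine($"RGB: {rgbMBps:F1} MB/s, depth: {depthMBps:F1} MB/s");
    }
}
```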
    25. Xbox 360 hardware: triple-core PowerPC 970 at 3.2 GHz, hyperthreaded (2 threads/core); 500 MHz ATI graphics card; DirectX 9.5; 512 MB RAM; a 2005 performance envelope. It must handle real-time vision AND a modern game. Source: http://www.pcper.com/article.php?aid=940&type=expert
    34. How does Kinect work? (I)
    35. 1 - How does Kinect know what I am doing? "Xbox?!" "Let's Play!"
    36. 2 - How did Kinect learn all of this? "Xbox?!" "Let's Play!"
    37. MS Research: object recognition. J. Shotton, J. Winn, C. Rother, A. Criminisi, "TextonBoost: Joint Appearance, Shape and Context Modeling for Multi-Class Object Recognition and Segmentation," European Conference on Computer Vision, 2006.
    38. MS Research: human body tracking. Wide range of motion, but little "agility," and not real-time. R. Navaratnam, A. Fitzgibbon, R. Cipolla, "The Joint Manifold Model for Semi-supervised Multi-valued Regression," IEEE Intl. Conf. on Computer Vision, 2007.
    39. Xbox calls MSR, September 2008: we need a body tracker for all body motions, all agilities, at 10x real-time, for multiple players… and it has to be 3D.
    40. MSR & Xbox: machine learning. Step 1: data collection. The team visits different locations and films real Xbox users; a Hollywood motion capture studio generates billions of CG images.
    41. Overlay training data.
    42. Identifying the body: identify each pixel as belonging to one of the 32 parts of the human body; cluster the possible "part" configurations that match the joints; present the most probable interpretation to the user (t=1, t=2, t=3).
    43. Training: millions of reference images -> millions of classification parameters. Very far from "embarrassingly parallel." A new algorithm for training distributed decision trees, with massive use of DryadLINQ (available for download). M. Isard, Y. Yu, "Distributed Data-Parallel Computing Using a High-Level Programming Language," International Conference on Management of Data (SIGMOD), July 2009.
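The slide cites the DryadLINQ paper but not what training actually computes. Purely as an illustration (not the production pipeline), growing a decision tree means repeatedly choosing, for each node, the candidate feature and threshold that best separate the labelled pixels, usually scored by information gain over the body-part labels; it is this counting over millions of images that the team distributed with DryadLINQ. A small single-machine sketch of the scoring step:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class SplitSelection
{
    // Shannon entropy of a body-part label distribution.
    static double Entropy(IEnumerable<int> labels)
    {
        var counts = labels.GroupBy(l => l).Select(g => (double)g.Count()).ToArray();
        double total = counts.Sum();
        return -counts.Sum(c => (c / total) * Math.Log(c / total, 2));
    }

    // Information gain of splitting (featureValue, label) samples at 'threshold'.
    public static double InformationGain((double feature, int label)[] samples, double threshold)
    {
        var left = samples.Where(s => s.feature < threshold).Select(s => s.label).ToArray();
        var right = samples.Where(s => s.feature >= threshold).Select(s => s.label).ToArray();
        if (left.Length == 0 || right.Length == 0) return 0;

        double parent = Entropy(samples.Select(s => s.label));
        double children = (left.Length * Entropy(left) + right.Length * Entropy(right)) / samples.Length;
        return parent - children;
    }
}
```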
    44. Programmer's view.
    45. How does Kinect work? (II)
    46. Extensible architecture: raw sensor data feeds several experts (Expert 1, Expert 2, Expert 3), each producing probabilistic skeleton estimates; an arbiter fuses the hypotheses into the final estimate. The experts are stateless, the arbiter is stateful.
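A minimal sketch of that expert/arbiter shape; the types, the joint naming and the fusion rule (a confidence-weighted average of joint positions) are illustrative assumptions rather than the actual Xbox implementation:

```csharp
using System.Collections.Generic;
using System.Linq;

// One probabilistic skeleton hypothesis: joint positions plus an overall confidence.
record SkeletonEstimate(Dictionary<string, (double X, double Y, double Z)> Joints, double Confidence);

// Stateless experts each turn raw sensor data into a hypothesis.
interface ISkeletonExpert
{
    SkeletonEstimate Estimate(ushort[] depthFrame);
}

// The stateful arbiter fuses the hypotheses into the final estimate.
class Arbiter
{
    readonly List<ISkeletonExpert> experts = new();

    public void Register(ISkeletonExpert expert) => experts.Add(expert);

    // Assumes at least one registered expert with positive confidence.
    public SkeletonEstimate Fuse(ushort[] depthFrame)
    {
        var estimates = experts.Select(e => e.Estimate(depthFrame)).ToList();
        var fused = new Dictionary<string, (double X, double Y, double Z)>();

        // Confidence-weighted average of each joint across the experts that report it.
        foreach (string joint in estimates.SelectMany(e => e.Joints.Keys).Distinct())
        {
            double x = 0, y = 0, z = 0, w = 0;
            foreach (var e in estimates.Where(e => e.Joints.ContainsKey(joint)))
            {
                x += e.Joints[joint].X * e.Confidence;
                y += e.Joints[joint].Y * e.Confidence;
                z += e.Joints[joint].Z * e.Confidence;
                w += e.Confidence;
            }
            fused[joint] = (x / w, y / w, z / w);
        }
        return new SkeletonEstimate(fused, estimates.Average(e => e.Confidence));
    }
}
```

New experts can be registered without touching the arbiter, which is what makes the architecture extensible.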
    47. Recognition step by step: sensor -> depth map -> background-based player segmentation -> body-part classification -> joint identification -> skeleton creation.
    48. Examples.
    49. The letter to the Three Wise Men (the wish list): no calibration (no start/pause pose, no background calibration, no body calibration), minimal CPU usage, independent of lighting.
    52. Testing: the test matrix covers body size, hair, FOV, body type, clothes, angle, pets and furniture.
    53. Preprocessing: identify the floor (ground plane), isolate the background (e.g. isolate a sofa), identify the players.
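A toy sketch of one way to isolate the background (note that the wish list on slide 49 is precisely to avoid this kind of calibration): remember a depth frame of the empty room, then mark as player any pixel that is now significantly closer than the remembered depth, which removes static furniture such as a sofa:

```csharp
static class PlayerSegmentation
{
    // background and current are depth maps in millimetres at the same resolution.
    // A pixel belongs to a player if it is at least minDeltaMm closer than the
    // remembered empty-room depth. Returns a boolean player mask.
    public static bool[] SegmentPlayers(ushort[] background, ushort[] current, int minDeltaMm = 100)
    {
        var mask = new bool[current.Length];
        for (int i = 0; i < current.Length; i++)
        {
            if (current[i] == 0 || background[i] == 0) continue;   // no depth reading
            mask[i] = background[i] - current[i] >= minDeltaMm;     // closer than the sofa/wall
        }
        return mask;
    }
}
```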
    56. Two "trackers": head-and-hand tracking, and body tracking (not exposed through the SDK).
    57. The body-tracking problem: a classifier takes a depth map as input and outputs body parts; it runs on the GPU at 320x240.
    58. Training Kinect: start from ground-truth data aligned with body parts. Kinect must be trained to cope with poses, position within the scene, and body sizes and shapes.
    59. Training Kinect: use synthetic data (a 3D avatar model) and inject noise.
    60. Training Kinect, motion capture: unrealistic environments, unrealistic clothing, low throughput.
    63. Training Kinect, manual tagging: requires training many people, is potentially expensive, the tagging tool introduces biases into the data, quality control is an issue, and 1000 hrs @ 20 contractors ~= 20 years.
    68. Training Kinect, Amazon Mechanical Turk: build a web-based tool; the tagging tool is 2D only; quality control can be done with redundant HITs; 2000 frames/hr @ $0.04/HIT -> 6 yrs @ $80/hr.
    72. Classifying pixel by pixel: compute P(ci | wi), where i = (x, y) is a pixel, ci its body part and wi the image window around it (the window moves with the classifier). The classifier P(ci | wi) is learned from training data using randomized decision forests.
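A minimal sketch of how a randomized decision forest classifies a single pixel: each tree routes the pixel through binary split nodes, each comparing some feature of the window around the pixel with a threshold, until it reaches a leaf that stores a body-part distribution learned from training data; the forest then averages the leaf distributions. The types and node layout below are illustrative assumptions:

```csharp
using System;

// A node is either a split (Feature/Threshold/children set) or a leaf
// (Posterior set: P(body part | window) estimated from training pixels).
class TreeNode
{
    public Func<ushort[], int, double> Feature;   // feature of (depth image, pixel index)
    public double Threshold;
    public TreeNode Left, Right;
    public double[] Posterior;                    // only on leaves, one entry per body part
}

static class Forest
{
    static double[] ClassifyWithTree(TreeNode node, ushort[] depth, int pixel)
    {
        // Walk down until we hit a leaf (a node carrying a posterior).
        while (node.Posterior == null)
            node = node.Feature(depth, pixel) < node.Threshold ? node.Left : node.Right;
        return node.Posterior;
    }

    // Average P(c | w) over all trees in the forest.
    public static double[] Classify(TreeNode[] trees, ushort[] depth, int pixel, int numParts)
    {
        var p = new double[numParts];
        foreach (var tree in trees)
        {
            var leaf = ClassifyWithTree(tree, depth, pixel);
            for (int c = 0; c < numParts; c++) p[c] += leaf[c] / trees.Length;
        }
        return p;
    }
}
```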
    73. Features: f_θ(I, x) = d_I(x + u/d_I(x)) − d_I(x + v/d_I(x)), where d_I(x) is the depth of pixel x in image I and θ = (u, v) are the parameters describing the offsets u and v.
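A sketch of that feature on a depth map stored as a flat array. It assumes millimetre depth values, offsets expressed in pixel·millimetres (so dividing by the depth at x turns them into pixel offsets, which is what makes the feature roughly invariant to how far the player stands), and a large constant depth for pixels that fall outside the image or have no reading:

```csharp
using System;

static class DepthFeature
{
    const ushort BackgroundDepth = ushort.MaxValue; // large constant for off-image / no-reading pixels

    static ushort DepthAt(ushort[] img, int width, int height, int x, int y)
    {
        if (x < 0 || y < 0 || x >= width || y >= height) return BackgroundDepth;
        ushort d = img[y * width + x];
        return d == 0 ? BackgroundDepth : d;
    }

    // f_theta(I, x) = d_I(x + u/d_I(x)) - d_I(x + v/d_I(x)), with theta = (u, v)
    // u and v are offsets in pixel-millimetres; dividing by the depth at x (mm) gives pixels.
    public static int Compute(ushort[] img, int width, int height, int x, int y,
                              (int dx, int dy) u, (int dx, int dy) v)
    {
        double d = DepthAt(img, width, height, x, y);
        int ux = x + (int)Math.Round(u.dx / d), uy = y + (int)Math.Round(u.dy / d);
        int vx = x + (int)Math.Round(v.dx / d), vy = y + (int)Math.Round(v.dy / d);
        return DepthAt(img, width, height, ux, uy) - DepthAt(img, width, height, vx, vy);
    }
}
```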
    74. Step 1, "body" to "joint positions": analyze the 3D positions of all the identified body parts; generate a (position, confidence) collection per part, with multiple candidates for each body part. The work is done by the GPU.
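A simplified sketch of turning per-pixel part probabilities into a joint proposal. The real system runs a mean-shift mode finder on the GPU and emits several weighted candidates per part; a confidence-weighted centroid of the pixels' 3D positions is used here only to show the shape of the computation:

```csharp
static class JointProposals
{
    public record Proposal(double X, double Y, double Z, double Confidence);

    // positions3D: one 3D point per pixel (back-projected from the depth map).
    // partProbability: P(part | pixel) for the body part of interest.
    public static Proposal ForPart((double X, double Y, double Z)[] positions3D, double[] partProbability)
    {
        double x = 0, y = 0, z = 0, w = 0;
        for (int i = 0; i < positions3D.Length; i++)
        {
            double p = partProbability[i];
            if (p < 0.05) continue;            // ignore pixels unlikely to belong to this part
            x += positions3D[i].X * p;
            y += positions3D[i].Y * p;
            z += positions3D[i].Z * p;
            w += p;
        }
        return w == 0 ? new Proposal(0, 0, 0, 0) : new Proposal(x / w, y / w, z / w, w);
    }
}
```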
    75. Step 2, "joint positions" to "skeleton": based on 3 skeleton models. The process computes the distance between connected joints (relative to the "body size") and how close the bones lie to the body parts, and also applies smoothness constraints.
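A toy illustration of two of the ingredients named on the slide, stated as assumptions rather than the shipped algorithm: checking that the distance between two connected joints stays close to the expected bone length (already scaled to the player's body size), and temporally smoothing joints so the skeleton does not jitter from frame to frame:

```csharp
using System;

static class SkeletonFitting
{
    public struct Joint { public double X, Y, Z; }

    // True if the bone between two joints has a plausible length,
    // with expectedLength already scaled to the player's body size.
    public static bool BoneLengthPlausible(Joint a, Joint b, double expectedLength, double tolerance = 0.2)
    {
        double dx = a.X - b.X, dy = a.Y - b.Y, dz = a.Z - b.Z;
        double length = Math.Sqrt(dx * dx + dy * dy + dz * dz);
        return Math.Abs(length - expectedLength) <= tolerance * expectedLength;
    }

    // Exponential smoothing: blend the new measurement with the previous estimate.
    public static Joint Smooth(Joint previous, Joint measured, double alpha = 0.5) => new Joint
    {
        X = alpha * measured.X + (1 - alpha) * previous.X,
        Y = alpha * measured.Y + (1 - alpha) * previous.Y,
        Z = alpha * measured.Z + (1 - alpha) * previous.Z,
    };
}
```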
    76. How do we define the "skeleton"?
    77. DEMO
    78. THANK YOU
