Vision pk1

  • Good afternoon. I'm Francesca Mereu, an architect, artist, and digital art researcher, coordinating VLAB 4D, an advanced visualization experimental group at Medialab-Prado focused on projects that link art and technology: holograms, 3-D spaces, immersive reality. I'm also the creator of the M-Artech Platform (Women in Art and Technology), and a member of MAV (Mujeres en las Artes Visuales), the Spanish association of women in the visual arts.
  • I'm presenting the Visions Project, a volumetric device that produces a pseudo 3-D hologram. Objects are projected onto a pyramidal structure by an LCD monitor, and due to the special geometry of the structure, they look like real 3-D objects floating inside the pyramid to any observer, from any direction.
  • Perspecta projects images at high speed onto a rotating flat screen instead of a helical one; the eye merges these images into a seamless 3-D picture. Possibly the most advanced of such systems, it provides a resolution of over 100 million voxels. It consists of a transparent spherical dome, which gives it a characteristic "crystal ball" look, enclosing a flat screen that rotates at 730 rpm. A projector lights the screen with up to 198 images of 768x768 pixels each, showing one or another depending on the angle of rotation of the screen. RealFiction is a Danish company founded by Peter Simonsen and Clas Dyrholm, both of whom have education and professional experience in film, TV, and radio production. In 2002 they ventured out on a new journey, inspired by an idea of breaking the frame and forsaking traditional film and TV formats in order to tell stories with video-holographic illusions, where fiction and reality merge into a new and powerful world of mixed reality.
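The voxel figure quoted above can be checked with simple arithmetic: 198 projected slices of 768 × 768 pixels each give roughly 117 million voxels, consistent with the "over 100 million" claim.

```python
# Voxel count of a swept-screen volumetric display such as Perspecta:
# each rotational slice of the volume contributes width x height voxels.
slices = 198            # images projected per revolution
width, height = 768, 768

voxels = slices * width * height
print(voxels)                  # 116785152
print(voxels > 100_000_000)    # True: "over 100 million voxels"
```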
  • The device uses a Kinect camera to capture the movement of people around it and allow user-object interaction in real time.
  • The objects are projected onto a pyramidal structure by an LCD monitor, and due to the special geometry of the structure, they look like real 3-D objects floating inside the pyramid to any observer, from any direction, without glasses.
  • 3D Communication software for Design and Engineering Professionals.
  • The MakerBot is an affordable, open-source 3-D printer: a machine that can make things. We used ABS plastic, the same material Lego bricks are made of.
  • Processing is an open-source programming language and environment for people who want to program images, animation, and interactions. It is an open project initiated by Ben Fry and Casey Reas.
  • The Kinect sensor is a horizontal bar connected to a small base with a motorized pivot, designed to be positioned lengthwise above or below the video display. The device features an "RGB camera, depth sensor and multi-array microphone running proprietary software", which provide full-body 3-D motion capture, facial recognition, and voice recognition capabilities. The depth sensor consists of an infrared laser projector combined with a monochrome CMOS sensor, which captures video data in 3-D under any ambient light conditions. The sensing range of the depth sensor is adjustable, and the Kinect software is capable of automatically calibrating the sensor based on gameplay and the player's physical environment, accommodating the presence of furniture or other obstacles.
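Each pixel of the depth image can be turned into a 3-D point with the standard pinhole camera model. A minimal sketch of that back-projection; the focal length and principal point below are illustrative values, not the Kinect's calibrated intrinsics:

```python
def depth_to_point(u, v, depth_mm, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth in mm to camera-space (X, Y, Z) in mm."""
    x = (u - cx) * depth_mm / fx
    y = (v - cy) * depth_mm / fy
    return (x, y, depth_mm)

# Illustrative intrinsics for a 640x480 depth image (assumed, not calibrated).
FX = FY = 580.0
CX, CY = 320.0, 240.0

# A pixel at the image centre maps straight down the optical axis.
print(depth_to_point(320, 240, 1000.0, FX, FY, CX, CY))  # (0.0, 0.0, 1000.0)
```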
  • Advantages: easy to install; multiplatform; easy to use. Disadvantages: no recognition; no alignment.
  • The PCL framework contains numerous state-of-the-art algorithms, including filtering, feature estimation, surface reconstruction, registration, model fitting, and segmentation, as well as higher-level tools for mapping and object recognition. Think of it as the Boost of 3-D point cloud processing. PCL is released under the terms of the BSD license and is free for commercial and research use.
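PCL itself is a C++ library; as a language-neutral illustration of one of the filters mentioned above (voxel-grid downsampling), here is a sketch in plain Python: all points falling in the same cubic cell are averaged into a single representative point.

```python
from collections import defaultdict

def voxel_downsample(points, leaf):
    """Average all points that fall in the same (leaf x leaf x leaf) cubic cell."""
    cells = defaultdict(list)
    for p in points:
        key = tuple(int(c // leaf) for c in p)  # integer cell coordinates
        cells[key].append(p)
    # One centroid per occupied cell.
    return [tuple(sum(c) / len(pts) for c in zip(*pts)) for pts in cells.values()]

cloud = [(0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (5.0, 5.0, 5.0)]
print(voxel_downsample(cloud, leaf=1.0))
# Two cells survive: the two nearby points merge into their centroid.
```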
  • This structure uses a Kinect camera to capture the external movement of people and permit user-object interaction in real time. The user can move the 3-D object with the illusion of "touching" the projected image. Basically, people stand around the structure to observe the image. ("Haptic" comes from the Greek for "touch".) People must be positioned between 50 and 100 cm from the camera for it to capture their images. The image can be viewed from all sides, over 360º. There are two types of user interaction: visual and haptic-visual. Through the latter, we experience the illusion that we can "touch" the 3-D object. The visual interaction is related to the concept of telepresence, like the popular images in the Star Wars movies.
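One simple way to realize the haptic-visual illusion described above is a proximity test: when the tracked hand point comes within some radius of the virtual object's position, the object reacts. A minimal sketch; the threshold and coordinates are illustrative, not values from the project:

```python
import math

TOUCH_RADIUS_MM = 80.0  # illustrative threshold, not a value from the project

def is_touching(hand, obj, radius=TOUCH_RADIUS_MM):
    """True when the tracked hand point is within `radius` mm of the virtual object."""
    return math.dist(hand, obj) <= radius

virtual_object = (0.0, 0.0, 700.0)  # object "floating" 70 cm from the sensor
print(is_touching((30.0, 20.0, 710.0), virtual_object))   # hand nearby -> True
print(is_touching((400.0, 0.0, 700.0), virtual_object))   # hand far away -> False
```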

    1. Visions Project K.1.0: A DIY interactive 3-D device by Francesca Mereu and Javier Villarroel (VLAB4D Visualization Advanced Group) http://visionlab4d.com.ar
    2. Visions Project K. The Visions Project K. is a volumetric device that allows pseudo visualization of 3-D objects and their interaction with the user.
    3. Visions Project K. VISIONS Project K. 1.0 (interactive video hologram) is a research project on video holograms and interactive 3-D devices, presented at the Medialab-Prado Center for Digital Art Culture in Madrid in February 2012. AIMS: to research 3-D display devices and develop a low-cost prototype, easy to assemble, that uses open-source software; to visualize 3-D images as video holograms; to experiment with user interaction with a 3-D image floating in the air: haptic interaction and visual interaction.
    4. First prototype: Medialab-Prado 2009
    5. Visions Project K. How to visualize 3-D images: objects are projected onto a pyramidal structure by an LCD monitor, and due to the special geometry of the structure, they look like real 3-D objects floating inside the pyramid to any observer, from any direction.
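The pyramid geometry implies a characteristic screen layout: the monitor shows four views of the object, one per pyramid face, each rotated so that its reflection appears upright from that side. A sketch of the layout arithmetic; the screen resolution and view size are illustrative, not the project's values:

```python
# Place four views of the object around the screen centre, one per pyramid
# face; each view is rotated so its reflection in that face appears upright.
SCREEN_W, SCREEN_H = 1920, 1080   # illustrative monitor resolution
VIEW = 300                        # illustrative size of one square view
cx, cy = SCREEN_W // 2, SCREEN_H // 2
offset = VIEW                     # distance from screen centre to each view's centre

views = {
    "front": ((cx, cy + offset), 0),    # (centre position, rotation in degrees)
    "right": ((cx + offset, cy), 90),
    "back":  ((cx, cy - offset), 180),
    "left":  ((cx - offset, cy), 270),
}
for face, (pos, rot) in views.items():
    print(f"{face}: draw view at {pos} rotated {rot} deg")
```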
    6. Visions Project K. Optical system
    7. Visions Project K. How does it work? The device uses: a Kinect camera to capture the interaction; an LCD monitor to project the 3-D image; a PC to process and control the device; a pyramidal structure onto which the image is projected; an external structure to support the monitor and protect the pyramid. There are two types of user interaction: visual and haptic.
    8. Visions Project K. DIY and digital fabrication
    9. Visions Project K. This project has been conceived as a low-cost, portable DIY device; the structure is light and easy to build and dismantle; all the information needed to create and build it, and to make it work, is shared under a Creative Commons license.
    10. Visions Project K. OPEN DESIGN: SketchUp. The materials to build the entire device are: 8 plastic tubes (45 cm); 4 plastic tubes (20 cm); 4 plastic triangles to build the pyramid; 4 corners to support the pyramid and the structure; 4 corners to support the monitor.
    11. Visions Project K. DIY DIGITAL FABRICATION. You can download the SketchUp 3-D model at http://sketchup.google.com/3dwarehouse/details?mid=f20afc7c51f36aace20f58bc12ce37ce. The elements of the structure are printed on a MakerBot machine (OpenFab, Medialab-Prado).
    12. Visions Project K. Hardware: Kinect sensor
    13. Visions Project K. Hardware: LCD monitor; CPU (Linux, Mac, Windows); 3-D sensor (Kinect)
    14. Visions Project K. Hardware: Kinect. Microphones; IR laser; RGB CMOS; IR CMOS; motor; IR pattern projection
    15. Visions Project K. Hardware: Kinect applications: 1. 3-D scene; 2. RGB image; 3. body and object recognition; 4. gesture recognition; 5. voice command recognition; 6. tilt control
    16. Visions Project K. HACKING KINECT by FLOSS
    17. Visions Project K. Software: Processing, multiplatform. The device uses a Kinect camera to visualize people in 3-D space, via open-source drivers. The OpenNI framework provides a set of open-source APIs, intended to become a standard for applications to access natural-interaction devices. The APIs provide support for voice and voice-command recognition, hand gestures, and body and motion tracking. The software was developed with the OpenNI library for Processing.
    18. Visions Project K. Software: open libraries: 1. OpenKinect (Processing, others); 2. SimpleOpenNI (Processing); 3. dLibs (Processing); 4. ofxKinect (openFrameworks); 5. Kinect Cinder Block (C/C++)
    19. Visions Project K. Software: OpenKinect point cloud
    20. Visions Project K. Software: OpenKinect average point tracking
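"Average point tracking" in OpenKinect-style sketches usually means taking the centroid of all depth pixels closer than a threshold, which yields a single trackable point without any body recognition. A minimal sketch of the idea; the threshold value and toy samples are illustrative:

```python
def average_point(depth_pixels, threshold_mm=800):
    """Centroid of all pixels nearer than the threshold; None if nothing is close.

    depth_pixels: iterable of (x, y, depth_mm) tuples.
    """
    near = [(x, y) for x, y, d in depth_pixels if 0 < d < threshold_mm]
    if not near:
        return None
    n = len(near)
    return (sum(x for x, _ in near) / n, sum(y for _, y in near) / n)

frame = [(10, 10, 600), (12, 10, 620), (300, 200, 2500)]  # toy depth samples
print(average_point(frame))  # centroid of the two near pixels: (11.0, 10.0)
```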
    21. Visions Project K. Software: OpenKinect. Advantages: 1. easy to install; 2. multiplatform; 3. easy to use. Disadvantages: 1. no recognition; 2. no alignment
    22. Visions Project K. Software: SimpleOpenNI hands
    23. Visions Project K. Software: SimpleOpenNI gestures
    24. Visions Project K. VLAB4D development: OpenKinect 3-D video for VP; Processing haptic interaction for VP; SimpleOpenNI telepresence for VP
    25. Visions Project K. Single projection / video hologram
    26. 3-D projection / point cloud
    27. Visions Project K. 3-D reconstruction / full color
    28. Visions Project K. Haptic interaction
    29. Visions Project K. Future possibilities: 1. increasing the number of applications; 2. a mini version for tablet PCs, iPads, etc.; 3. use as a scientific display (biology, architecture, engineering, etc.)
    30. Visions Project K.
    31. Visions Project K. THANKS http://vlab4d.wix.com http://visionlab4d.com.ar francesca@visionlab4d.com.ar desarrollo@visionlab4d.com.ar
