FLEXPAD
JOURNAL: ACM, NEW YORK
AUTHORS: JURGEN STEIMLE, ANDREAS JORDT, PATTIE MAES
Presented by
Aiswarya Gopal (P2ELT13002)
Athira L. (P2ELT13023)
 Interactive system consisting of a depth camera, a projector, plain paper, and hand-held displays
 Tracks deformed surfaces from depth images (see the sketch after this list)
 Captures complex deformations in fine detail
 Robust to occlusions caused by fingers and hands
 Avoids the use of markers
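The per-frame loop implied by these points (mask out hands, fit a deformation model to the remaining depth pixels, then project warped content) can be sketched in a few lines. This is a minimal illustration assuming NumPy arrays as depth and infrared frames; the helper names and the plane fit standing in for the full deformation model are assumptions for illustration, not the authors' implementation.

    import numpy as np

    def remove_skin(depth, ir, ir_skin_threshold=180):
        """Drop pixels whose infrared response looks like skin (toy rule)."""
        sheet = depth.copy().astype(float)
        sheet[ir > ir_skin_threshold] = np.nan
        return sheet

    def fit_sheet(depth):
        """Fit a plane z = a*x + b*y + c to the remaining sheet pixels
        (a stand-in for the paper's richer deformation model)."""
        ys, xs = np.nonzero(~np.isnan(depth))
        zs = depth[ys, xs]
        A = np.column_stack([xs, ys, np.ones_like(xs)]).astype(float)
        coeffs, *_ = np.linalg.lstsq(A, zs, rcond=None)
        return coeffs  # (a, b, c)

    # Toy frame: a tilted sheet plus a "hand" region with a strong IR response.
    depth = np.fromfunction(lambda y, x: 800 + 0.2 * x, (120, 160))
    ir = np.zeros((120, 160)); ir[40:80, 60:100] = 255
    print(fit_sheet(remove_skin(depth, ir)))  # recovers roughly (0.2, 0.0, 800)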
 Manipulation of real-world objects for interaction with computer systems
 Integration of physical and digital information
 Depth sensors
 Degrees of freedom
 People naturally bend the pages of a book
 Bending interaction
 Two main contributions
 Algorithm for capture of complex deformations
 Detection of hands and fingers
 Evaluation
 Two evaluation studies
 Deformation capturing
 Technologies used
▪ LightSpace
▪ KinectFusion
▪ OmniTouch
▪ Proxy particles
 Disadvantages of these approaches
 Flexible Display Interfaces
 Two types of work
▪ Deformable sheet or tape
▪ Deformation of handheld displays
 Components used
 Kinect camera
 Projector
 Sheet of paper
 Foam or acrylic
 Flexible display material
 Any sheet can be used as a passive display
 Two sheet materials were used
 Tracking with the Kinect depth sensor looks at:
 Removal of hands and fingers
 The global deformation model
 How the global deformation model is fitted to the depth data
Figure 3: Detection of skin by analyzing the
point pattern in the Kinect infrared image.
Center: Input. Right: Classification
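A hedged sketch of the idea behind Figure 3: on skin, the Kinect's projected infrared dot pattern is diffused, so its local contrast drops, and low-contrast regions can be classified as skin. The window size, threshold, and toy input below are illustrative assumptions, not the paper's exact classifier.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def skin_mask(ir, window=9, contrast_threshold=15.0):
        """Return True where the IR speckle contrast is low (likely skin)."""
        ir = ir.astype(float)
        mean = uniform_filter(ir, window)
        mean_sq = uniform_filter(ir ** 2, window)
        local_std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
        return local_std < contrast_threshold

    # Toy IR frame: crisp dots on paper (left half), smooth response on "skin" (right half).
    rng = np.random.default_rng(0)
    ir = np.full((100, 200), 60.0)
    ir[:, :100] += rng.choice([0.0, 120.0], size=(100, 100))  # speckle dots on paper
    mask = skin_mask(ir)
    print(mask[:, :100].mean(), mask[:, 100:].mean())  # low on paper, high on "skin"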
Figure 4: Dimensions of the deformation model (left) and examples of deformations it can express (right).
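The point of Figure 4 is that the sheet shape is described by a small number of deformation dimensions rather than tracked point by point. The sketch below generates a deformed sheet from just two bend amplitudes; this cylindrical-bend parameterization is an assumption for illustration, not the paper's exact model.

    import numpy as np

    def sheet_surface(bend_x, bend_y, width=0.3, height=0.2, res=32):
        """Return a (res, res, 3) grid of 3D points for the deformed sheet.

        bend_x, bend_y -- bend amplitudes in metres (0 = flat sheet).
        """
        u = np.linspace(-0.5, 0.5, res)
        v = np.linspace(-0.5, 0.5, res)
        U, V = np.meshgrid(u, v)
        X = U * width
        Y = V * height
        # Each bend lifts the sheet with a parabolic profile along one axis.
        Z = bend_x * (1 - (2 * U) ** 2) + bend_y * (1 - (2 * V) ** 2)
        return np.stack([X, Y, Z], axis=-1)

    flat = sheet_surface(0.0, 0.0)
    curled = sheet_surface(0.05, 0.0)  # single bend along the x axis
    print(np.ptp(flat[..., 2]), np.ptp(curled[..., 2]))  # 0.0 vs about 0.05 m of deflection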
 Captures deformations observed directly by the Kinect sensor (see the fitting sketch after this list)
 Trade-off between tracking stability and the set of detectable deformations
 High flexibility
 1:1 mapping
Figure 5: Exploring curved cross-sections (a), flattening the view (b), comparing contents across layers (c)
Figure 6: Animating virtual paper characters
Figure 7: Slicing through time in a video by
deformation
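The time-slicing application of Figure 7 maps the bend across the sheet to time in a video: each column of the display shows a different frame, so curling the sheet sweeps through the clip. The linear bend-to-frame mapping below is an assumption used only to illustrate the idea.

    import numpy as np

    def time_slice(video, bend_profile):
        """video: (T, H, W) array; bend_profile: (W,) values in [0, 1]."""
        T, H, W = video.shape
        frame_idx = np.clip((bend_profile * (T - 1)).astype(int), 0, T - 1)
        # Take column w from frame frame_idx[w] and reassemble an (H, W) image.
        return video[frame_idx, :, np.arange(W)].T

    video = np.arange(10)[:, None, None] * np.ones((10, 4, 6))  # frame t is filled with t
    bend = np.linspace(0.0, 1.0, 6)                             # flat edge -> fully bent edge
    print(time_slice(video, bend))  # columns sample earlier frames on the left, later on the right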
 Two evaluations were conducted
 Tracking performance
 User performance when performing deformations
Figure 9: Classification of skin. Top: depth
input. Center: infrared input. Bottom: depth
image after classification with skin parts
removed.
RMS tracking error for the six test deformations shown on the slide (standard deviation in parentheses):
Deformation 1: RMS90 2.10 mm (1.1)   RMS150 2.64 mm (1.3)
Deformation 2: RMS90 1.91 mm (1.2)   RMS150 3.07 mm (1.7)
Deformation 3: RMS90 2.67 mm (1.6)   RMS150 6.10 mm (4.2)
Deformation 4: RMS90 4.58 mm (1.9)   RMS150 5.45 mm (2.7)
Deformation 5: RMS90 4.82 mm (2.2)   RMS150 5.15 mm (2.5)
Deformation 6: RMS90 4.93 mm (2.1)   RMS150 6.38 mm (3.8)
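For reference, an RMS error like the values above is the root mean square of the per-point distance between the tracked surface and a ground-truth reference. The sketch below uses purely synthetic points to show the computation; it is not the paper's evaluation code.

    import numpy as np

    def rms_error_mm(tracked_points, reference_points):
        """Both arrays are (N, 3) point sets in millimetres, in correspondence."""
        per_point = np.linalg.norm(tracked_points - reference_points, axis=1)
        return np.sqrt(np.mean(per_point ** 2))

    rng = np.random.default_rng(2)
    reference = rng.uniform(0, 300, size=(1000, 3))            # ground-truth sheet points
    tracked = reference + rng.normal(0, 2.0, size=(1000, 3))   # about 2 mm noise per axis
    print(round(rms_error_mm(tracked, reference), 2))          # about 3.5 mm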
 Figure 11: Average trial completion time in
seconds. Error bars show the standard
deviation.
 Summary of study findings
 Highly flexible displays
 Single and dual deformations were performed with a high precision of about +/-6 degrees
 High accuracy
 Average error was below 7 mm
 Touch input on deformable displays
 Desired touch area is at the center
 Active flexible displays
 Available displays are limited
 Smart materials
 Need for deformable and stretchable materials