Reach into the computer &
Grab a pixel
Introduction:
Throughout the history of computers we've been striving to shorten the gap between us and digital information, the gap between our physical world and the world in the screen where our imagination can go wild. This gap has become shorter and shorter, and now it has shrunk to less than a millimeter, the thickness of a touch-screen glass, and the power of computing has become accessible to everyone.
But I wondered, what if there could be no boundary at all? I started to imagine what this would look like. First, I introduce below a tool that penetrates into the digital space: when you press it hard on the screen, it transfers its physical body into pixels. Designers can materialize their ideas directly in 3D, and surgeons can practice on virtual organs underneath the screen. With this tool, the boundary has been broken.
The First Tool:
Beyond – Collapsible Tools and Gestures for Computational Design
Abstract
Since the invention of the personal computer, digital media has remained separate from the physical world, blocked by a rigid screen. We present Beyond, an interface for 3-D design where users can directly manipulate digital media with physically retractable tools and hand gestures. When pushed onto the screen, these tools physically collapse and project themselves onto the screen, letting users perceive them as if they were inserted into the digital space beyond the screen. The aim of Beyond is to make the digital 3-D design process straightforward and more accessible to general users by extending physical affordances to the digital space beyond the computer screen.
Keywords
3D Interaction, Augmented Reality and Tangible UI, Pen and Tactile Input, Tactile & Haptic UIs, Pen-based UIs, Tangible UIs.
ACM Classification Keywords
H5.m. Information interfaces and presentation (e.g., HCI): H.5.2. Input Devices and Strategies.
General Terms
Design, Human Factors, Experimentation
Introduction
Recent developments in computer technologies have made the design process much more precise and scalable. Despite this powerful role computation plays in design, many designers and architects prefer to build physical models using physical tools and their hands, employing their versatile senses and bodily expressions in the early stages of design. There has been no straightforward way to sketch and model 3D forms on the computer screen.
Tangible User Interfaces have appeared as a strong concept for leveraging traditional ways of design with digital power, while preserving physical affordances, by blurring the boundary between the physical environment and cyberspace. In an attempt to diminish the separation between visual and tactile senses, which are critical for the design process, researchers in Augmented Reality (AR) have suggested several input devices and ways of displaying digital information in more realistic ways. Despite these efforts, a flat monitor and a mouse remain our standard interfaces for design, leaving digital media apart from the physical world, blocked by a rigid screen. It is extremely hard for users to select a specific 3-dimensional coordinate in virtual space and sense volume without wearing special display glasses and using complicated mechanical equipment.
A parallel trend in CAD, the development of gestural interfaces, has allowed users to employ bodily expressions in data manipulation. However, simple combinations of mouse and gestures on a 2D surface do not take full advantage of gestures as bodily expressions and can hardly cover the large number of commands necessary in design.
We present Beyond, an interface for 3-dimensional design where users can directly manipulate 3-D digital media with physically retractable tools and natural hand gestures. When pushed onto the screen, these tools physically retract and project themselves onto the screen, letting users perceive them as if they were inserted into the digital space beyond the screen. Our research goal is to enable users to design by simply sketching and cutting digital media using tools, supported by gestures, without having to look at multiple planes at the same time. We believe this effort will make the digital 3-D design process straightforward, scalable, and more accessible to general users.
Related Work
Various approaches have been taken to enable users to design in 3D in a more straightforward and intuitive manner by integrating input and output, and by providing users with tangible representations of digital media.
The concept of WYSIWYF – "what you see is what you feel" – has been suggested in the domain of Augmented Reality in an attempt to integrate haptic and visual feedback. Technologies such as stereoscopic glasses for 3-D display, holograms, and wearable mechanical devices like Phantom that provide precise haptic feedback have been invented and experimented with in this context. However, many of these systems require users to wear devices, which are often heavy and cumbersome, obstructing natural views and limiting the behaviors of users. Sand-like beads and actuated pin-displays are examples of deformable physical materials built in an attempt to diminish the separation between input and output. However, they are often not scalable, because solid forms embedded in physical materials are less malleable than pixels.
In an effort to convey users' intentions more intuitively, gesture-based interactive sketch tools have been suggested. Most of these systems are based on pen-stroke gesture input, whose functions are limited to simple instant operations such as changing the plane or erasing objects. Oblong's g-speak is a novel gestural interface platform that supports a variety of gestures and applications, including 3-D drawing. However, it is still a hard task to select a specific coordinate in arbitrary space with these systems.
What is Beyond?
Beyond is a design platform that allows users to employ their gestures and physical tools beyond the screen in 3-D computational design. Collapsible tools are used in Beyond so that they can retract and project themselves onto the screen, letting users perceive them as if they were inserted into the screen. This design enables users to perceive the workspace in the screen as 3-D space within their physical reach, where computational parametric operation and human direct manipulation can occur together. As a result, this interface helps users design and manipulate digital media with the affordances they have with physical tools (figure 1).
Another significant design decision was the use of 3-D gestures. We came to the conclusion that gestures in 3D effectively help users convey intentions that are abstract but at the same time related to spatial or shape-related senses. The types of gestural commands used in Beyond are discussed in more detail in the interaction section.
Figure 1: Direct manipulation of digital information beyond the screen with collapsible tools.
Beyond Prototype:
The current Beyond prototype consists of retractable tools, a table-top display, and an infrared position tracking system. The tools are designed to retract and stretch, with two IR retro-reflective markers attached to their tips. As illustrated in figure 3, a Vicon system composed of IR emitters and cameras is used to track these markers, letting the system obtain the location, length, and tilt of the tools. An additional marker is attached to the user's head for real-time 3D rendering.
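To make the tracking step concrete, here is a minimal sketch of how a tool's position, current (collapsed) length, and tilt could be derived from its two tracked markers. The coordinate convention (screen normal along +z) and the function name are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def tool_pose(marker_top, marker_tip):
        # marker_top, marker_tip: tracked 3D positions of the markers on the
        # tool's two ends (hypothetical names). Returns the tip position, the
        # current length, and the tilt from the assumed screen normal (+z).
        top = np.asarray(marker_top, dtype=float)
        tip = np.asarray(marker_tip, dtype=float)
        axis = tip - top
        length = np.linalg.norm(axis)  # shrinks as the tool collapses
        tilt_deg = np.degrees(np.arccos(abs(axis[2]) / length))
        return tip, length, tilt_deg

    # Example with two marker positions in metres (illustrative values):
    tip, length, tilt = tool_pose([0.10, 0.20, 0.30], [0.10, 0.20, 0.12])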
We implemented two kinds of collapsible tools for the first prototype of Beyond: Pen and Saw (figure 2).
Pen:
The pen serves as a tool for drawing. This passive tool can specify any 3D
coordinate in virtual space within its reach and draw shapes and lines.
Saw:
The saw serves as a tool for cutting and sculpting. It is designed to provide several different forms of physical actuation when users touch virtual objects.
Gestures:
Beyond uses several gestural interaction techniques mediated by gloves tagged with IR reflective markers and tracked by Vicon tracking technologies, developed by Oblong Industries.
3-D rendering based on the user's head position:
In order to render a scene where the physical and digital portions are seamlessly connected, we implemented software to render 3D scenes based on the user's head position in real time. This helps users perceive objects rendered on the flat screen as 3-D objects placed behind the screen.
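Head-coupled rendering of this kind is commonly implemented with an off-axis (asymmetric) view frustum recomputed each frame from the tracked head position. The sketch below follows the standard generalized-perspective construction; the screen-corner inputs and near-plane value are assumptions, since the paper does not detail its renderer.

    import numpy as np

    def off_axis_frustum(eye, screen_ll, screen_lr, screen_ul, near=0.01):
        # eye: tracked head position; screen_ll/lr/ul: lower-left, lower-right,
        # and upper-left corners of the physical screen, all in one world frame.
        # Returns (left, right, bottom, top) frustum bounds at the near plane.
        eye = np.asarray(eye, float)
        ll, lr, ul = (np.asarray(p, float) for p in (screen_ll, screen_lr, screen_ul))
        vr = (lr - ll) / np.linalg.norm(lr - ll)   # screen right axis
        vu = (ul - ll) / np.linalg.norm(ul - ll)   # screen up axis
        vn = np.cross(vr, vu)                      # screen normal, toward viewer
        dist = -np.dot(ll - eye, vn)               # eye-to-screen distance
        scale = near / dist
        left = np.dot(ll - eye, vr) * scale
        right = np.dot(lr - eye, vr) * scale
        bottom = np.dot(ll - eye, vu) * scale
        top = np.dot(ul - eye, vu) * scale
        return left, right, bottom, top            # feed to a glFrustum-style call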
Figure 3: Mechanism for tracking the user's head and tool positions.
Figure 2: Collapsible pen (top) and saw (bottom).
Interaction with Beyond
Direct Selection and Drawing
Beyond allows users to directly select specific 3D coordinates in virtual 3D space, within the physical reach of the tools, without looking at multiple planes or wearing a head-mounted display. This lets users sketch 3D shapes in a straightforward manner, helping them externalize the 3D images in their minds.
Touching and Cutting
Using the Beyond saw, users can cut and trim any surface or shape by simply inserting the saw tool into the virtual space. When virtual objects are touched or cut by a tool, a slide actuator installed inside the saw creates force feedback, preventing the tool from retracting. This gives users a better sense of the volume and material properties of virtual objects.
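As a rough sketch of how such contact-driven feedback could be gated, the loop below locks an assumed slide actuator whenever the tracked blade tip lies inside a virtual solid. The contains() and lock()/release() APIs are hypothetical placeholders, not the authors' hardware interface.

    def update_saw_feedback(blade_tip, objects, actuator):
        # blade_tip: tracked 3D position of the saw tip behind the screen plane.
        # objects: virtual solids with a contains(point) test (hypothetical API).
        # actuator: slide-actuator driver with lock()/release() (hypothetical API).
        touching = any(obj.contains(blade_tip) for obj in objects)
        if touching:
            actuator.lock()     # resist collapse: the tool feels the material
        else:
            actuator.release()  # collapse freely in empty space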
Gestural Interactions with Beyond
Gestural commands effectively complement tool-mediated direct manipulation by conveying the user's intention to the system in an intuitive manner. Users can define several different abstract shapes and operate functions while directly specifying coordinates with the collapsible tools. Figure 4 shows a few examples of the several types of gestures. The current Beyond prototype provides shape-related gestural commands such as straight line, square, and ellipse, and function-related gestural commands including extrude, lock the drawing surface, and move objects. For example, users can make a "straight" gesture to make the lines they draw straight, and an "extrude" gesture to extrude certain surfaces (Figure 5).
New Work flow for 3-D Design
The interaction techniques illustrated in the previous sections can be merged and woven together to create a new workflow for 3-D computer-aided design, as follows. First, users sketch a rough design in 3D with free-line drawing techniques. The next step is to define discrete shapes on top of the quick sketch by specifying locations and other critical parameters with the tools, and to operate functions with gestures. In the middle of the design process, users can always modify the design using other types of tools.
Figure 4: Gestures used in the Beyond prototype.
User Evaluations
Since the project is currently at its initial stage, we are planning to conduct comprehensive user evaluations in the future. However, the overall feedback we have received over several weeks has been that the Beyond platform helps users sketch and model the 3D shapes they have in mind, and that gestural commands complement the directness of tool-based interactions, decreasing ambiguity.
Discussion and Future Work
We introduced Beyond, a design platform that enables users to interact with 3D digital media using physically collapsible tools that seamlessly go into the virtual domain through a simple mechanism of collapse, projection, and actuation. Initial user evaluations showed that applying natural hand gestures to convey the abstract intentions of users greatly complements tool-mediated direct manipulation. We presented the design and implementation of the first Beyond prototype, which used a Vicon location-tracking system and the physically collapsible pen and saw.
Beyond allows users to directly select a specific 3-dimensional coordinate in virtual space. We believe this capability will help a more diverse range of users access the technology. While many AR and TUI approaches to leveraging computational design do not scale up well, or are too application-specific due to the inherent rigidity of physical handles, the Beyond platform shows potential to be a more scalable and generalizable user interface by seamlessly transforming rigid physical parts into flexible pixels.
Figure 5: Rough 3D sketch (top) and abstract shape drawing (bottom) using gestures.
Since Beyond is at its initial stage of development, we foresee several improvements. First, the entire system could be made more portable and lower-cost by using a simple touch screen and camera-based tracking technologies. Secondly, we plan to develop more comprehensive gestural languages applicable to the design process, combined with direct pointing tools. We also plan to improve the force feedback of the active tools by using series elastic actuators, which can create more precise and varied force profiles, allowing the system to express the tactile feedback of various material properties. Finally, we are planning to conduct extensive user evaluations of the system in the near future.
Inventors of Beyond:
Jinha Lee
MIT Media Laboratory 75 Amherst St. Cambridge, MA 02139 USA
And
Hiroshi Ishii
MIT Media Laboratory 75 Amherst St. Cambridge, MA 02139 USA
Figure 6: A more portable version of the Beyond prototype, using camera-based tracking and a touch screen.
Introduction to the Second Tool:
Our two hands still remain outside the screen. How can we reach inside and interact with digital information using the full dexterity of our hands? At Microsoft Applied Sciences, we redesigned the computer and turned the little space above the keyboard into a digital workspace. By combining a transparent display with depth cameras that sense your fingers and face, you can now lift your hands from the keyboard, reach inside this 3D space, and grab pixels with your bare hands.
Because windows and files have a position in real space, selecting them is as easy as grabbing a book off your shelf. You can then flip through a book while highlighting lines and words on the virtual touchpad below each floating window. Architects can stretch or rotate models directly with their two hands. In these examples, we are reaching into the digital world.
This tool is explained below:
The Second Tool:
SpaceTop: Integrating 2D and Spatial 3D Interactions in a See-through
Desktop Environment
ABSTRACT
SpaceTop is a concept that fuses 2D and spatial 3D interactions in a single desktop workspace. It extends the traditional desktop interface with interaction technology and visualization techniques that enable seamless transitions between 2D and 3D manipulations. SpaceTop allows users to type, click, and draw in 2D, and directly manipulate interface elements that float in the 3D space above the keyboard. It makes it possible to easily switch from one modality to another, or to simultaneously use two modalities with different hands. We introduce hardware and software configurations for co-locating these various interaction modalities in a unified workspace using depth cameras and a transparent display. We describe new interaction and visualization techniques that allow users to interact with 2D elements floating in 3D space, and present results from a preliminary user study that indicate the benefits of such hybrid workspaces.
Figure 1: SpaceTop affords a) 3D direct spatial interaction, b) 2D direct touch, c) 2D indirect interaction, and d) typing. It aims to meld the seams between these modalities by accommodating them in the same unified space and enabling fast switching between them.
Author Keywords: 3D UI; Augmented Reality; Desktop Management
ACM Classification Keywords: H.5.m. Information interfaces and presentation (e.g., HCI)
General Terms: Human Factors; Design; Measurement
INTRODUCTION
Desktop computing today is primarily composed of 2D graphical user interfaces (GUIs) based on a 2D screen with input through a mouse or a touchscreen. While GUIs have many advantages, they can constrain the user due to limited screen space and interaction bandwidth, and there exist situations where users can benefit from more expressive spatial interactions. For instance, switching between overlapping windows on a 2D screen adds more cognitive load than arranging a stack of physical papers in 3D space. While there have been advances in sensing and display technologies, 3D spatial interfaces have not been widely employed in everyday computing. Despite the advantages of spatial memory and increased expressiveness, potential issues related to precision and fatigue make 3D desktop computing challenging.
We present SpaceTop, an experimental prototype that brings a 3D spatial interaction space to desktop computing environments.
We address the previously mentioned challenges in three interdependent ways. First, SpaceTop accommodates both
conventional and 3D spatial interactions in the same space. Second, we enable users to switch between 3D I/O and
conventional 2D input, or even use them simultaneously with both hands. Finally, we present new interaction and visualization
techniques to allow users to interact with 2D elements floating in 3D space. These techniques aim to address issues and
confusion that arise from shifting between interactions of different styles and dimensions.
RELATED WORK
Previous work has explored 2.5D and 3D representations to better support spatial memory on the desktop. Augmented Reality systems exploit the cognitive benefits of co-locating 3D visualizations with direct input in a real environment, using optical combiners. This makes it possible to enable unencumbered 3D input for directly interacting with situated 3D graphics in mid-air. SpaceTop extends these concepts with an emphasis on streamlining the switching between input modalities
in a unified I/O space, and the combination of such 3D spatial interaction with other conventional input modalities, to enable
new interaction techniques.
Other related research explores transitions between 2D and 3D I/O by combining multi-touch with 3D direct interaction, or through 2D manipulation of 3D stereoscopic images, with an emphasis on collaborative interaction with 3D data, such as CAD models. Our work focuses on how daily tasks, such as document editing or task management, can be better designed with 3D spatial interactions in existing desktop environments.
SPACETOP IMPLEMENTATION
At first glance, SpaceTop looks similar to a conventional desktop computer, except for the transparent screen with the keyboard and mouse behind it. Users place their hands behind the screen to scroll on the bottom surface or type on the keyboard. Through the transparent screen, users can view graphical interface elements that appear to float, not only on the screen plane, but also in the 3D space behind it or on the bottom surface. Users can lift their hands off the bottom surface to grab and move floating windows or virtual objects using "pinch" gestures.
We accommodate 3D direct interaction, 2D touch, and typing with an optically transparent LCD screen and two depth cameras (Figure 2) in a 50×25×25 cm volume.
Display: Prototype LCD with per-pixel transparency
We use a display prototype by Samsung, designed to show graphics without backlights in contact with its transparent LCD. The 22" transparent LCD displays 1680×1050-pixel images at 60 Hz with 20% light transmission. It provides maximum transparency for white pixels and full opaqueness for black pixels. We use this unique per-pixel transparency to control the opacity of graphical elements, allowing us to design UIs that do not suffer from the limitations of half-silver mirror setups, where pixels are always partially transparent. We ensure that all graphical elements include clearly visible opaque parts, and use additional lights for the physical space behind the screen to improve the visibility of the user's hands and the keyboard.
Head and hand tracking with depth cameras
One depth camera (Microsoft Kinect) faces the user and tracks the head to enable motion parallax. This allows the user to view graphics correctly registered on top of the 3D interaction space wherein the hands are placed. Another depth camera points down towards the interaction space and detects the position and pinch gestures of the user's hands. The setup also detects whether the user's hands are touching the 2D input plane, based on previously described techniques.
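One simple way such pinch detection can be done from a downward-facing depth camera is to look for a hole in the hand's silhouette: when thumb and forefinger close, they enclose a small inner contour. The sketch below illustrates this idea with OpenCV; the depth thresholds and area value are illustrative assumptions, not the paper's parameters.

    import cv2
    import numpy as np

    def detect_pinch(depth_mm, near=600, far=900, min_hole_area=150):
        # Keep only pixels inside the interaction volume (depth values in mm;
        # the thresholds here are illustrative, not the paper's parameters).
        mask = ((depth_mm > near) & (depth_mm < far)).astype(np.uint8) * 255
        # RETR_CCOMP yields a two-level hierarchy: outer blobs and their holes.
        contours, hierarchy = cv2.findContours(
            mask, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
        if hierarchy is None:
            return False
        for i, cnt in enumerate(contours):
            is_hole = hierarchy[0][i][3] != -1      # inner contour has a parent
            if is_hole and cv2.contourArea(cnt) > min_hole_area:
                return True                         # closed thumb-finger loop
        return False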
INTERACTION AND VISUALIZATION
2D in 3D: Stack Interaction
In SpaceTop, graphical UI elements are displayed on the screen or in the 3D space behind it. In our scenarios, details or 2D views of 3D elements are shown on the foreground plane (coinciding with the physical screen). While objects can take various forms in 3D space, we chose to focus on window interaction and 2D content placed in 3D space, such that the system can be used for existing desktop tasks. Another advantage of the window form factor in 3D is that it saves space when documents are stacked. It can, however, become challenging to select a particular window from a dense stack.
We designed various behaviors of stacks and windows to ease retrieval, as illustrated in Figures 3a-f. Users can drag-and-drop a window from one stack to another to cluster it. As the user hovers a finger inside a stack, the layer closest to the finger is enlarged and made more opaque. When the user pinches on the stack twice, the dense stack expands to facilitate selection. The surface area below the stack is used for 2D gestures, such as scrolling: users can, for example, scroll on the bottom surface of the stack to change the order of the documents in the stack. We designed a Grid and Cursor system to simplify the organization of items in 3D. It provides windows and stacks with passive reference cues, which help guide the user's hands. The cursor is represented as two orthogonal lines parallel to the ground plane that intersect at the user's fingertips. These lines penetrate the grid box that represents the interaction volume, illustrated in Figure 3a.
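A minimal sketch of the hover behavior: pick the stack layer whose depth is nearest the fingertip, then enlarge it and raise its opacity. The max_gap cutoff and function name are assumed for illustration.

    import numpy as np

    def hovered_layer(finger_z, layer_zs, max_gap=0.03):
        # finger_z: fingertip depth along the stack axis; layer_zs: depth of
        # each window in the stack; max_gap (metres) is an assumed cutoff.
        layer_zs = np.asarray(layer_zs, float)
        i = int(np.argmin(np.abs(layer_zs - finger_z)))
        return i if abs(layer_zs[i] - finger_z) <= max_gap else None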
Modeless Interaction
Our guiding principle for designing high-level interfaces and visualizations is to create a seamless and modeless workflow. Experiments have shown that when users shift from one interaction mode to another, they have to be visually guided with care, such that they can mentally accommodate the new interaction model. In particular, smooth transitions between 2D and 3D views, and between indirect and direct interactions, are challenging, since each of them is built on largely different mental models of I/O.
Sliding Door: Entering the Virtual 3D space
In the 2D interaction mode, the user can type or use a mouse
or touchpad to interact with SpaceTop, as in any conventional 2D
system. When the user lifts her hands, the foreground window slides
up or fades out to reveal the 3D space behind the main window.
When the hands touch the bottom surface again, the foreground
window slides down again, allowing users to return to 2D-mapped
input. The sliding door metaphor can help users smoothly shift focus from the “main” 2D document to “background” contents
floating behind (See Figures 3b-c).
Shadow Touchpad: One Touchpad per window
Touchpad interaction with 2D windows floating in 3D space introduces interesting challenges. Especially when working with
more than one window, it is not straightforward how to move a cursor from one window to another. Indirect mapping
between the touchpad and the window can conflict with the direct mapping that each window forms with the 3D space.
To address this issue, we propose a novel concept called the Shadow touchpad, which emulates a 2D touchpad below each of the
tilted 2D documents floating in 3D space. When a window is pulled up, a shadow is projected onto the bottom surface, whose
area functions as a touchpad that allows the user to interact with that window. When multiple screens are displayed, each of
them has its own shadow touchpad area.
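The mapping itself can be a simple normalization from the shadow rectangle to the window's own coordinates, as in the sketch below; returning None when the touch falls outside keeps input local to each window's shadow. The names and rectangle convention are assumptions for illustration.

    def shadow_to_window(touch_xy, shadow_rect, window_size):
        # touch_xy: touch point on the bottom surface; shadow_rect: (x, y, w, h)
        # of the window's projected shadow; window_size: (width, height) of the
        # floating window in its own 2D coordinates.
        x, y, w, h = shadow_rect
        u = (touch_xy[0] - x) / w   # normalized position inside the shadow
        v = (touch_xy[1] - y) / h
        if not (0.0 <= u <= 1.0 and 0.0 <= v <= 1.0):
            return None             # touch belongs to some other shadow
        return u * window_size[0], v * window_size[1]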
Inter-shadow translation of 2D elements
Users can move 2D objects (e.g., text and icons) from one window to another by dragging the object between the
corresponding shadow areas. The object will be visualized as a floating 3D object during the transition between the two
shadow touchpads, similarly to the balloon selection technique, as shown in Figure 5.
Task Management Scenario
Effective management of multiple tasks has been a central challenge in everyday desktop computing. In SpaceTop, the
background tasks occupy a fixed position in the 3D space behind the main task, allowing users to rely on their spatial memory
to retrieve them. This spatial persistence mitigates some of the cognitive load associated with conventional task management
systems. Sliding door or stack interaction can be directly applied to categorize, remember, and retrieve tasks (Figure 4).
Bimanual, Multi-fidelity Interaction
Interesting interactions arise when each hand is interacting in different styles and fidelities. The following applications
demonstrate the potential of such bimanual, multi-fidelity interaction.
Document Editing Scenario
When composing a document, the user often needs to copy portions from other documents, such as previous drafts or outside sources. SpaceTop allows the user to use the dominant hand to scroll through a main document, while simultaneously using the other hand to quickly flip through a pile of other documents visualized in 3D space to find a relevant piece of text. The user can then drag that piece of text into the main document through the more precise touchpad interaction. In this way, SpaceTop allows users to quickly switch back and forth between low-bandwidth, high-precision interactions (copying lines) and high-bandwidth, low-precision interactions (rifling through documents), or use them simultaneously.
3D Modeling Scenario
While 3D spatial interactions provide means for the user to materialize a design through spatial expression, much of the interaction in CAD requires precise manipulation and is controlled in 2D. SpaceTop allows natural transitions between these interaction modes. The user can start prototyping a model with free-form manipulation. Once fine control is required, the user can select a surface of the 3D model and pull up an editing console in the foreground of the screen. The user can then precisely modify dimensions by dragging a side or typing a number, or choose material properties by touching a 2D palette on the ground.
PRELIMINARY USER EVALUATION
Ten participants (age 19-29, 2 female) were recruited from a university mailing list, none of whom had previous experience with 3D user interfaces. They were able to familiarize themselves with the system until they performed each action comfortably (3-6 min). The total experiment time per participant was between 70 and 80 minutes.
Switching between indirect 2D vs. direct 3D interaction
Twelve partially overlapping colored windows (red, green, blue, or yellow), each containing a shape (triangle, square, or star), were shown. Participants were given tasks such as "grab the yellow square and point to its corners" or "trace the outline of the blue triangle". They performed four different, randomized tasks for three spatial window configurations, for a total of 12 trials in each of two blocks. The SpaceTop block used spatial window placement with head-tracking, and participants used a combination of gesture, mouse, and keyboard interaction, requiring constant switching between typing, 2D selection, and 3D interaction. In the baseline block, windows were shown in the display's 2D plane and only mouse and keyboard interaction was available. Questionnaire responses (5-point Likert scale) indicate that the SpaceTop interactions were easy to learn (3.9). Participants did, however, find them slower (3.2 vs. 4.2) and less accurate (3.2 vs. 4.6) than the baseline. Users' comments include
"after I repeated this task three times (with the same arrangement), my arm starts moving towards the target even before I see it", and "switching to another window is as simple as grabbing another book on my (physical) desk". Another user commented that the physical setup constrains his arm's movement, which tires him more quickly.
Text editing: Search and copy/paste
Participants skimmed the contents of six different document pages placed in the 3D environment. They were then asked to
find a specific word and pick-and-drop it into the document on the foreground screen (see Figure 5). Six participants
commented that it felt compelling to be able to quickly rifle through a pile of documents with one hand while the other hand is
interacting with the main active task. One user commented: “it feels like I have a desktop computer and a physical book next
to it”. “This feels like a natural role division of right/left hand in the physical world”. Three users reported that they had a
hard time switching their mental models from 2D indirect mapping (touchpad) to 3D direct mapping (spatial interaction),
which occurs when the user tries to drag a word out of a shadow.
DISCUSSION
Users' comments suggest that fast switching and bimanual interaction provide compelling experiences, and that users can benefit from spatial memory (task 1). We also gained some insights for future improvements. A few users commented that they might perform better with a stereoscopic display, in addition to the aid of the grid and cursor. Although previous work indicates that stereoscopy has limited benefit over a monoscopic display with motion parallax, we plan to also explore a stereoscopic version of SpaceTop. We think that the visual representation could be better designed to provide users with clearer guidance. While the current configuration allows us to rapidly prototype and explore interactions, we plan to improve ergonomics and general usability with careful design of the physical setup.
CONCLUSIONS AND FUTURE WORK
SpaceTop is a concept that accommodates 3D and conventional 2D (indirect/direct) interactions in a single workspace. We designed interaction and visualization techniques for melding the seams between different interaction modalities and integrating them into modeless workflows. Our application scenarios showcase the power of such integrated workflows, with fast switching between interactions of multiple fidelities and bimanual interactions. We believe that SpaceTop is the beginning of an exploration of a larger field of spatial desktop computing interactions, and that our design principles can be applied to a variety of current and future technologies. We hope that this exploration offers guidelines for future interaction designers, allowing better insight into the evolution of the everyday desktop experience.
Inventors of SpaceTop:
Jinha Lee
MIT Media Laboratory and Microsoft Applied Sciences Group
Hiroshi Ishii
MIT Media Laboratory
Alex Olwal
MIT Media Laboratory
Cati Boulanger
Microsoft Applied Sciences Group
Introduction to the Third Tool:
How about reversing the role of the previous tools and having the digital information reach us instead? I'm sure many of us have had the experience of buying and returning items online. But now you don't have to worry about it. What I have here is an online augmented fitting room. This is the view you get from a head-mounted or see-through display when the system understands the geometry of your body.
This is an upcoming project, so only limited information is available; it is given below:
The Third Tool:
WYSIWYF – "what you see is what you feel"
ABSTRACT
How about having the digital information reach us instead? I'm sure many of us have had the experience of buying and returning items online. But now you don't have to worry about it. What I have here is an online augmented fitting room. This is the view you get from a head-mounted or see-through display when the system understands the geometry of your body.
Keywords
3D Interaction, Augmented Reality and Tangible UI, Screen-based and Mobile Input, Tactile & Haptic UIs, Transient-based UIs, Tangible UIs, online shopping, buying and returning items, online augmented fitting room, fitting room.
General Terms
Online augmented fitting room, Human Factors, Experimentation.
INTRODUCTION
I'm sure many of us have had the experience of buying and returning items online. But now you don't have to worry about it. What I have here is an online augmented fitting room. This is the view you get from a head-mounted or see-through display.
The concept of WYSIWYF – "what you see is what you feel" – has been suggested in the domain of Augmented Reality in an attempt to integrate haptic and visual feedback, using technologies like stereoscopic glasses for 3-D display.
RELATED WORK
Various approaches have been taken to enable users to design in 3D in a more straightforward and intuitive manner by integrating input and output, and by providing users with tangible representations of digital media. Beyond is a design platform that allows users to employ their gestures and physical tools beyond the screen in 3-D computational design. Collapsible tools are used in Beyond so that they can retract and project themselves onto the screen, letting users perceive them as if they were inserted into the screen. This design enables users to perceive the workspace in the screen as 3-D space within their physical reach, where computational parametric operation and human direct manipulation can occur together. As a result, this interface helps users design and manipulate digital media with the affordances they have with physical tools.
SpaceTop's see-through 3D desktop is a 3D spatial operating environment that allows the user to directly interact with his or her virtual desktop. The user can reach into the projected 3D output space with his or her hands to directly manipulate the windows. Users can casually open up the see-through 3D desktop and type on the keyboard or use the trackpad as in a traditional 2D operating environment. Windows and files are perceived to be placed in a 3D space between the screen and the input plane. The user can lift his hands to reach the displayed windows and arrange them in this 3D space. A unique combination of a transparent display and a 3D gesture detection algorithm collocates the input space and 3D rendering without tethering or encumbering users with wearable devices. "See-through 3D desktop" is a term for the entire ensemble of software, hardware, and design components necessary for realizing this volumetric operating environment.
What is WYSIWYF?
The concept of WYSIWYF – "what you see is what you feel" – has been suggested in the domain of Augmented Reality in an attempt to integrate haptic and visual feedback, using technologies like stereoscopic glasses for 3-D display. I'm sure many of us have had the experience of buying and returning items online. But now you don't have to worry about it. What I have here is an online augmented fitting room. This is the view you get from a head-mounted or see-through display.
WYSIWYF IMPLEMENTATION
Display: Prototype LCD with per-pixel transparency
We use a display prototype by Samsung, designed to show graphics without backlights in contact with its transparent LCD. The 22" transparent LCD displays 1680×1050-pixel images at 60 Hz with 20% light transmission. It provides maximum transparency for white pixels and full opaqueness for black pixels. We use this unique per-pixel transparency to control the opacity of graphical elements, allowing us to design UIs that do not suffer from the limitations of half-silver mirror setups, where pixels are always partially transparent. We ensure that all graphical elements include clearly visible opaque parts, and use additional lights for the physical space behind the screen to improve the visibility of the user's hands and the keyboard.
Hand tracking with depth cameras
One depth camera (Microsoft Kinect) faces the user and tracks the hands, as well as the head to enable motion parallax. This allows the user to view graphics correctly registered on top of the 3D interaction space wherein the hands are placed.
3D-model-enabled shopping website:
The shopping website provides 3D models of the items it sells.
DISCUSSION
Users' comments suggest that fast switching and bimanual interaction provide compelling experiences, and that users can benefit from spatial memory (task 1). We also gained some insights for future improvements. A few users commented that they might perform better with a stereoscopic display, in addition to the aid of the grid and cursor, although previous work indicates that stereoscopy has limited benefit over a monoscopic display with motion parallax. We think that the visual representation could be better designed to provide users with clearer guidance. While the current configuration allows us to rapidly prototype and explore interactions, we plan to improve ergonomics and general usability with careful design of the physical setup.
CONCLUSIONS AND FUTURE WORK
WYSIWYF is a concept that accommodates 3D interaction in a single workspace. We believe that WYSIWYF is the beginning of an exploration of a larger field of buyer and customer interactions, and that our design principles can be applied to a variety of current and future technologies. We hope that this exploration offers guidelines for future interaction designers, allowing better insight into the evolution of the everyday online shopping experience.
Inventors of WYSIWYF:
Jinha Lee
MIT Media Laboratory 75 Amherst St. Cambridge, MA 02139 USA
And
Daewung Kim (Collaborator)
MIT Media Laboratory 75 Amherst St. Cambridge, MA 02139 USA
Introduction to the Fourth Tool:
Taking this idea further, I started to think: instead of just seeing these pixels in our space, how can we make them physical so that we can touch and feel them? What would such a future look like? At the MIT Media Lab, we created this one physical pixel. In this case, a spherical magnet acts like a 3D pixel in our space, which means that both computers and people can move this object anywhere within this little 3D space. What we did was essentially cancel gravity and control the movement by combining magnetic levitation with mechanical actuation and sensing technologies. By digitally programming the object, we are liberating it from the constraints of time and space, which means that human motions can now be recorded, played back, and left permanently in the physical world. So choreography can be taught physically over distance, and Michael Jordan's famous shot can be replicated over and over as a physical reality. Students can use this as a tool to learn about complex concepts such as planetary motion and physics, and unlike computer screens or textbooks, this is a real, tangible experience that you can touch and feel, and it's very powerful. What's more exciting than just making what's currently in the computer physical is to start imagining how programming the world will alter even our daily physical activities.
The Fourth Tool:
ZeroN: Mid-Air Tangible Interaction Enabled by Computer Controlled
Magnetic Levitation
ABSTRACT
ZeroN is a new tangible interface element that can be levitated and moved freely by a computer in a three-dimensional space. ZeroN serves as a tangible representation of a 3D coordinate of the virtual world through which users can see, feel, and control computation. To accomplish this, we developed a magnetic control system that can levitate and actuate a permanent magnet in a predefined 3D volume. This is combined with an optical tracking and display system that projects images on the levitating object. We present applications that explore this new interaction modality. Users are invited to place or move the ZeroN object just as they can place objects on surfaces. For example, users can place the sun above physical objects to cast digital shadows, or place a planet that will start revolving based on simulated physical conditions. We describe the technology, interaction scenarios and challenges, discuss initial observations, and outline future development.
ACM Classification:
H5.2 [Informationinterfaces and presentation]: User Interfaces.
General terms:
Design, Human Factors
Keywords:
Tangible Interfaces, 3D UI.
INTRODUCTION
Tangible interfaces attempt to bridge the gap between virtual and physical spaces by embodying the digital in the physical world. Tabletop tangible interfaces have demonstrated a wide range of interaction possibilities and utilities. Despite their compelling qualities, tabletop tangible interfaces share a common constraint: interaction with physical objects is inherently constrained to 2D planar surfaces due to gravity. This limitation might not appear to be a constraint for many tabletop interfaces, where content is mapped to surface components, but we argue that there are exciting possibilities enabled by supporting true 3D manipulation. There has been some movement in this direction already; researchers are starting to explore interactions with three-dimensional content using the space above tabletop surfaces. In these scenarios input can be sensed in the 3D physical space, but the objects and rendered graphics are still bound to the surfaces. Imagine a physical object that can float, seemingly unconstrained by gravity, and move freely in the air. What would it be like to leave this physical object at a spot in the air, representing a light that casts the virtual shadow of an architectural model, or a planet that will start orbiting? Our motivation is to create such a 3D space, where the computer can control the 3D position and movement of gravitationally unconstrained physical objects that represent digital information. We present a system for tangible interaction in mid-air 3D space. At its core, our goal is to allow users to take the physical components of tabletop tangible interfaces off the surface and place them in the air. To investigate these interaction techniques, we created our first prototype with magnetic levitation technology. We call this new tangible interaction element ZeroN: a magnetically actuated object that can hover and move in an open volume, representing digital objects moving through 3D coordinates of the virtual world. Users can place or move this object in the air to simulate or affect the 3D computational process, represented as actuation of the object as well as accompanying graphical projection. We contribute a technical implementation of magnetic levitation. The technology includes stable long-range magnetic levitation combined with interactive projection, optical and magnetic sensing, and mechanical actuation that realizes a small 'anti-gravity space'. In the following sections, we describe our engineering approach and its current limitations, as well as a road map of the development necessary to scale the current interface.
We investigate novel interaction techniques through a set of applications we developed with ZeroN. Based on reflections from our user observations, we identify design issues and technical challenges unique to interaction with this untethered levitated object. In the following discussion, we will refer to the levitated object simply as ZeroN, and to the entire ensemble as the ZeroN system.
RELATED WORK
Our work draws upon the literature of tangible interfaces and 3D display and interaction techniques. As we touch upon the evolution of tabletop tangible interfaces, we review movements towards employing actuation and 3D space in human-computer interaction.
Tabletop Tangible Interfaces
Underkoffler and Patten have shown how the collaborative manipulation of tangible input elements by multiple users can enhance task performance and creativity in spatial applications, such as architecture simulation and supply-chain optimization. Reactable, AudioPad, and DataTiles show the compelling qualities of bimanual interaction in dynamically arranging visual and audio information.
In previous tabletop tangible interfaces, while users can provide input by manipulating physical objects, output occurs only
through graphical projection. This can cause inconsistency between physical objects and digital information when the state of
the underlying digital system changes. Adding actuation to an interface, such that states of physical objects are coupled with
dynamically changing digital states will allow the computer to maintain consistency between the physical and digital states of
objects.
In the Actuated Workbench, an array of computer-controlled electromagnets actuates physical objects on the surface, which represent the dynamic status of computation. The Planar Manipulator and Augmented Coliseum achieved similar technical capabilities using robotic modules. Recent examples of such actuated tabletop interfaces include Madgets, a system that has the capability of actuating complex tangibles composed of multiple parts. Patten's PICO has demonstrated how physical actuation can enable users to improvise mechanical constraints to add computational constraints to the system.
Going Higher
One approach to the transition from 2D modalities to 3D has been using deformable surfaces as input and output. Illuminating Clay employs deformable physical material as a medium of input where users can directly manipulate the state of the system. In Lumino, stackable tangible pucks are used to express discrete height as another input modality. While in these systems the computer cannot modify the physical representation, there has been research into adding height as another output component to RGB pixels using computer-controlled actuation. Poupyrev et al. provide an excellent overview of shape displays. To actuate deformable surfaces, Lumen and FEELEX employ an array of motorized sticks that can be raised. ART+COM's kinetic sculpture actuates multiple spheres tethered with strings to create the silhouettes of cars. Despite their compelling qualities as shape displays, these systems share two common limitations as interfaces. First, input is limited to the push and pull of objects, whereas more degrees of freedom may be desired in many applications; users might also want to push or drag the displayed object laterally. More importantly, because the objects are physically tethered, it is difficult for users to reach under or above the deformable surface in the interactive space.
Using Space above the Tabletop surface
Hilliges et al. show that 3D mid-air input can be used to manipulate virtual objects on a tabletop surface using the SecondLight infrastructure. Grossman et al. introduced interaction techniques for a 3D volumetric display. While these demonstrate a potential approach to exploiting real 3D space as an input area, the separation of the user's input from the rendered graphics does not afford direct control as in the physical world, and may lead to ambiguities in the interface. A remedy for this issue of I/O inconsistency may come from technologies that display free-standing volumetric images, such as digital holography. However, these technologies are not yet mature, and even when they can be fully implemented, direct manipulation of these media would be challenging due to the lack of a persistent tangible representation.
Haptic and Magnetic Technologies for 3D Interaction
Studies with haptic devices, such as Phantom, have shown that accurate force feedback can increase task performance in the
context of medical training and 3D modeling. While most of these systems were used with a single monitor or head-mounted
display, Plesniak’s system lets users directly touch a 3D holographic display to obtain input and output coincidences. Despite
their compelling practical qualities, tethered devices constrain the degree of freedom in user input. In addition, constraining
the view angle often isolates the user from real world context and restricts multi-user scenarios.
Magnetic levitation has been researched in the realms of haptic interfaces and robotics to achieve increased degrees of freedom. Berkelman et al. developed a high-performance magnetic levitation haptic interface to enable the user to better interact with simulated virtual environments. Since their system was designed to be used as a haptic controller for graphical displays, the emphasis was on creating accurate force feedback with a stable magnetic field in a semi-enclosed hemispherical space. Our focus, on the other hand, was on achieving collocated I/O by actuating an I/O object along 3D paths through absolute coordinates of the physical space. Consequently, more engineering effort went into actuating a levitated object in an open 3D space in a reasonably stable manner.
3D and Tangible Interaction
Grossman and Wigdor present an excellent taxonomy and
framework of 3D tabletop interfaces based on the dimensions of display and input space. Our work aims to explore a realm
where both display and input occur in 3D space, mediated by a computer-controlled tangible object, and therefore enabling
users’ direct manipulation. In the taxonomy, physical proxy was considered an important 2D I/O element that defines user
interaction. However, our work employs a tangible proxy as an active display component to convey 3D information. Therefore,
to fully understand the implication of the work, it is necessary to create a new framework based on spatial properties of
physical proxies in tabletop interfaces. We plotted existing tabletop interfaces in figure 3 based on the dimension of the I/O
space and whether the tangible elements can be actuated.
ZeroN explores this novel design space of tangible interaction in the mid-air space above the surface. While currently limited in resolution and practical quality, we look to study what is possible by using mid-air 3D space for tangible interaction. We aim to create a system where users can interact with 3D information through direct manipulation, without tethering by mechanical armatures or requiring users to wear an optical device such as a head-mounted display.
OVERVIEW
Our system operates over a volume of 38 cm × 38 cm × 9 cm, in which it can levitate, sense, and control the 3D position of ZeroN, a spherical magnet 3.17 cm in diameter covered with a plastic shell onto which digital imagery can be projected. As a result, the digital information bound to the physical object can be seen, felt, and manipulated in the operating volume without requiring users to be tethered by mechanical armatures or to wear optical devices. Due to the current limitation of the levitation range, we made the entire interactive space larger than this 'anti-gravity' space, such that users can interact with ZeroN with reasonable freedom of movement.
TECHNICAL IMPLEMENTATION
The current prototype comprises five key elements as illustrated in figure 4.
• A magnetic levitator (a coil driven by PWM signals) that suspends a magnetic object and is capable of changing the object's vertical suspension distance on command.
• A 2-axis linear actuation stage that laterally positions the magnetic levitator, plus one additional linear actuator for moving the coil vertically.
• Stereo cameras that track ZeroN's 3D position.
• A depth camera to detect users’ hand poses.
• A tabletop interface displaying a scene coordinated with the position of the
suspended object and other objects placed on the table.
Untethered 3D Actuation
The ZeroN system implements untethered 3D actuation of a physical object with magnetic control and mechanical actuation. Vertical motion is achieved by combining magnetic position control, which can levitate and move a magnet relative to the coil, with mechanical actuation that can move the entire coil relative to the system. The two approaches complement each other. Although the magnetic approach can control position with lower latency and implies a promising direction for scalable magnetic propulsion technology, the prototype with purely magnetic control demonstrated limits in its range: when the permanent magnet gets too close to the coil, it becomes attached to the coil even when the coil is not energized. 2D lateral motion is achieved with a plotter using two stepper motors. Given a 3D path as input, the system first projects the path onto each dimension and linearly interpolates the dots to create a smooth trajectory. The system then calculates the velocity and acceleration of each axis of actuation as a function of time. With this data, the system can actuate the object along a 3D path approximately identical to the input path.
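A sketch of this path-resampling step, assuming numpy and a constant traversal speed (both illustrative choices, since the paper does not specify the velocity profile): project the waypoints onto each axis, interpolate by arc length, then differentiate to get per-axis velocity and acceleration commands.

    import numpy as np

    def actuation_profile(path, dt=0.01, speed=0.05):
        # path: list of 3D waypoints (metres); dt (s) and speed (m/s) are
        # illustrative. Returns time stamps plus per-axis position, velocity,
        # and acceleration commands for the actuation stages.
        pts = np.asarray(path, float)
        seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
        s = np.concatenate([[0.0], np.cumsum(seg)])   # arc length at waypoints
        t = np.arange(0.0, s[-1] / speed, dt)
        pos = np.column_stack(
            [np.interp(t * speed, s, pts[:, k]) for k in range(3)])
        vel = np.gradient(pos, dt, axis=0)
        acc = np.gradient(vel, dt, axis=0)
        return t, pos, vel, acc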
Magnetic Levitation and Vertical Control
We have developed a custom electromagnetic suspension system to provide robust sensing, levitation, and vertical control. It includes a microcontroller implementing a proportional-integral-derivative (PID) control loop with parameters that can be set through a serial interface. In particular, ZeroN's suspension distance is set through this interface by the UI coordinator. The PID controller drives the electromagnet through a coil driver using pulse-width modulation (PWM). The field generated by the electromagnet imposes an attractive (or repulsive) force on the suspended magnetic object. By dynamically exerting a magnetic force on ZeroN to cancel gravity, the control loop keeps it suspended at a given distance from the electromagnet. This distance is determined by measuring the magnetic field immediately beneath the solenoid.
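For illustration, here is a minimal software PID loop of the kind described, with the setpoint acting as the suspension distance updated over the serial interface. The gains and the clamped duty-cycle convention are placeholders to be tuned, not the authors' firmware.

    class LevitationPID:
        # setpoint: desired suspension distance below the coil, updated over
        # the serial interface by the UI coordinator; kp/ki/kd are placeholder
        # gains for illustration.
        def __init__(self, kp, ki, kd, setpoint):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.setpoint = setpoint
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, measured, dt):
            # measured: distance estimated from the Hall-effect sensor reading.
            error = self.setpoint - measured
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            duty = self.kp * error + self.ki * self.integral + self.kd * derivative
            # Clamp to the coil driver's PWM range; the sign selects attraction
            # versus repulsion (assumed H-bridge hardware detail).
            return max(-1.0, min(1.0, duty))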
Magnetic Range Sensing with Hall-effect sensor
Properly measuring the distance of the magnet is the key component of stable levitation and vertical control. Since the magnetic field drops off as the cube of the distance from the source, it is challenging to convert the strength of the magnetic field to the vertical position of the magnet. To linearize the signals sensed by the Hall-effect sensor, we developed a two-step-gain logarithmic amplifier. It logarithmically amplifies the signal with two different gains, based on whether the signal exceeds a threshold voltage.
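In code form, the two ideas read roughly as follows: invert the cubic falloff to recover distance, and compress the sensor signal with one of two gains around a threshold. All constants are illustrative, not the circuit's actual values.

    import math

    def field_to_distance(b_field, k=1.0e-6):
        # Along the sensing axis the field falls off roughly as the cube of
        # distance, B = k / d**3, so d = (k / B) ** (1/3). k is a per-magnet
        # calibration constant; the default is a placeholder.
        return (k / b_field) ** (1.0 / 3.0)

    def log_amplify(signal, threshold=0.5, gain_far=1.0, gain_near=0.25):
        # Software analogue of the two-step-gain logarithmic amplifier: a
        # smaller gain above the threshold keeps strong near-field signals in
        # range, while the larger gain preserves resolution for weak far-field
        # signals. Constants are illustrative.
        gain = gain_near if signal > threshold else gain_far
        return gain * math.log1p(signal)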
Designing the ZeroN Object
We used a spherical dipole magnet as the levitating object. Due to the geometry of the magnetic field, users can move the spherical dipole magnet while still keeping it suspended, but it falls when they tilt it. To enable input of a user's desired orientation, a loose plastic layer is added to cover the magnet, as illustrated in figure 7.
Stereo Tracking of 3D position and 1D orientation
We used two modified Sony PS3 Eye cameras to track the 3D position of ZeroN using computer vision techniques with a pair of infrared images, as in figure 8. To measure orientation, we applied a stripe of retro-reflective tape to the surface of ZeroN. We chose this approach because it was both technically simple and robust, and didn't add significant weight to ZeroN: an important factor in a levitating object.
Determining Modes
A challenge in emulating the 'anti-gravity space' is to determine whether ZeroN is being moved by a user or is naturally wobbling. Currently, ZeroN sways laterally when actuated, and the system can misinterpret this movement as user input and continue to update a new stable point of suspension. This causes ZeroN to drift around. To resolve this issue, we classify three modes of operation (idle, grabbed, grabbed for long) based on whether, and for how long, the user is holding the object. In idle mode, when ZeroN is not grabbed by the user, the control system acts to keep the position or trajectory of the levitating object as programmed by the computer. When grabbed by the user, the system updates the stable position based on the current position specified by the user, such that the user can release their hands without dropping the object. If the user grabs the object for longer than 2.5 s, it starts specific functions such as record and playback.
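A sketch of this mode classification as a small state machine, using the 2.5 s long-hold threshold from the text; the grab signal itself is assumed to come from the depth-camera hand detection described below.

    import time

    IDLE, GRABBED, GRABBED_LONG = range(3)

    class ModeClassifier:
        # Debounces the depth camera's "hand on object" signal into the three
        # modes described in the text; long_hold_s matches the 2.5 s threshold.
        def __init__(self, long_hold_s=2.5):
            self.long_hold_s = long_hold_s
            self.grab_start = None

        def update(self, hand_on_object, now=None):
            now = time.monotonic() if now is None else now
            if not hand_on_object:
                self.grab_start = None
                return IDLE           # keep the programmed position/trajectory
            if self.grab_start is None:
                self.grab_start = now # start tracking the user's placement
            if now - self.grab_start >= self.long_hold_s:
                return GRABBED_LONG   # trigger functions such as record/playback
            return GRABBED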
While the stereo IR cameras were useful in obtaining the accurate position and orientation of the object using retro-reflective tape, it was challenging to distinguish the user's hands from background objects. We chose to use an additional depth camera (Microsoft Kinect) to detect the user's hand pose, with computer vision techniques built on top of open-source libraries. Our software extracts binary contours of objects at a predefined depth range and finds the blob created between the user's hands and the levitated object.
Calibration of 3D Sensing, Projection, and Actuation
To ensure real-time interaction, careful calibration between the cameras, the projectors, and the 3D actuation system is essential in our implementation. After finding correspondences between the two cameras using checkerboard patterns, we register the cameras to the coordinate frame of the interactive space by positioning the ZeroN object at four fixed non-coplanar points. Similarly, to register each projector to real-world coordinates, we place ZeroN at the four non-coplanar calibration points and move a projected circle towards it; when the circle is overlaid on ZeroN, we increase or decrease its size until it matches the size of ZeroN. This data is used to find two homogeneous matrices: one transforming raw camera coordinates to the real-world coordinates of the interactive space, and one transforming real-world coordinates to the x, y position and diameter of the projected circle. We have not made much effort to optimally determine the focal plane of the projected image; focusing the projectors roughly in the middle of the interactive space is sufficient.
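The camera-to-world registration can be sketched as a least-squares affine fit over the four non-coplanar correspondences. This is an illustrative reconstruction, not the authors' calibration code.

```python
import numpy as np

def fit_affine_3d(cam_pts, world_pts):
    """Fit M (4x3) so that [x y z 1] @ M maps camera to world coordinates.

    Needs at least four non-coplanar correspondences, matching the
    calibration procedure described above.
    """
    cam = np.asarray(cam_pts, dtype=float)        # shape (N, 3)
    world = np.asarray(world_pts, dtype=float)    # shape (N, 3)
    A = np.hstack([cam, np.ones((len(cam), 1))])  # homogeneous coordinates
    M, *_ = np.linalg.lstsq(A, world, rcond=None)
    return M

def cam_to_world(M, p):
    """Apply the fitted transform to a single camera-space point."""
    return np.append(np.asarray(p, dtype=float), 1.0) @ M
```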
Engineering the 'Anti-Gravity' Space
These various sensing and actuation techniques coordinate to create a seamless 'anti-gravity' I/O space. When the user grabs ZeroN and places it within the defined space of the system, the system tracks the 3D position of the object and determines whether the user's hand is grabbing it. The electromagnet is then carried to the 2D position of ZeroN by the 2-axis actuators and programmed to reset a new stable point of suspension at the sensed vertical position. As a result, the system creates what we will call a small 'anti-gravity' space, wherein people can place an object in a volume seemingly unconstrained by gravity. The user's hands and other non-magnetic materials do not affect levitation.
Since the levitation controller acts to keep the floating object at a given height, users experience the sensation of an invisible but very tangible mechanical connection between the levitated magnet and a fixed point in space that can be continually updated.
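Putting the pieces together, one update step of this space might look like the following; all four interface functions are hypothetical names standing in for the subsystems described above.

```python
def antigravity_step(track_zeron, hand_is_grabbing,
                     move_actuators_xy, set_suspension_height):
    """One iteration of the 'anti-gravity' loop (illustrative sketch).

    track_zeron(): stereo-IR tracking, returns the (x, y, z) of ZeroN.
    hand_is_grabbing(): depth-camera hand detection near ZeroN.
    move_actuators_xy(): carries the electromagnet over ZeroN.
    set_suspension_height(): resets the stable point of suspension.
    """
    x, y, z = track_zeron()
    if hand_is_grabbing():
        move_actuators_xy(x, y)       # follow the user's placement in 2D
        set_suspension_height(z)      # hold the new height when released
    # In idle mode the system instead drives ZeroN along the position or
    # trajectory programmed by the computer.
```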
3D POINT AND PATH DISPLAY
ZeroN serves as a dynamic tangible representation of a 3D coordinate, untethered by mechanical armature. The 3D position of ZeroN can be updated by computer commands to present dynamic movements or curved paths in 3D space, such as the flight path of an airplane or the orbit of a planet. Graphical images or icons, such as a camera icon or the texture of a planet, can be projected on the white surface of the levitating ZeroN. These graphical images can be animated or 'tilted' to display a change of orientation. This compensates for a limitation of the current magnetic actuation system, which can control the 3D position of a magnet but has little control over its orientation.
INTERACTION
We have developed a 3D tangible interaction language that closely resembles how people interact with physical objects on a 2D surface – put, move, rotate, and drag – which now serves as a standard metaphor, widely used in many interaction design domains including GUIs and tabletop interfaces. We list the vocabulary of our interaction language below (figure 12).
Place
One can place ZeroN in the air, suspending it at an arbitrary 3D position within the interactive space.
Translate
Users can also move ZeroN to another position in the anti-gravity space without disturbing its ability to levitate.
Rotate
When users rotate the plastic shell covering the spherical magnet, digital images projected on ZeroN rotate accordingly.
Hold
Users can hold or block ZeroN to impede computer actuation. This can be interpreted as a computational constraint, as also shown in PICO.
Long Hold
We implemented a long-hold gesture that can be used to initiate a specific function. For example, in a video recording application, users could hold ZeroN for longer than 2.5 seconds to initiate recording, and release it to enter "play-back" mode.
Attaching / Detaching Digital Information to ZeroN
We borrowed a gesture for attaching and detaching digital items from tabletop interfaces. It is challenging to interact with multiple information clusters, since the current system can only levitate one object. For instance, in the urban planning simulation application, users might first want to use ZeroN as the Sun to control lighting, and then as a camera to render the scene. Users can attach ZeroN to a digital item projected on the tabletop surface simply by moving ZeroN close to the digital item to be bound. To unbind a digital item from ZeroN, users can use a shaking gesture or remove ZeroN from the interactive space.
Interaction with Digital Shadows
We aim to seamlessly incorporate ZeroN into existing tabletop tangible interfaces. One of the challenges is to provide users with a semantic link between the levitated object and the tabletop tangible interfaces on the 2D surface. Since ZeroN is not physically in contact with the tabletop system, it is hard to recognize its position relative to the other objects placed on the surface. We designed an interactive digital shadow to provide users with visible links between ZeroN and the other parts of the tabletop tangible interface. For instance, the levitating ZeroN casts a digital shadow whose size is mapped to the height of the object (see figure 13). For the time being, however, this feature is not yet incorporated in the application scenarios.
APPLICATIONS AND USER REFLECTION
We explore the previously described interaction techniques in the context of several categories of applications described below. While the physics and architecture simulations let users apply ZeroN to a practical problem, the motion-prototyping and Zero-pong applications are proofs of concept demonstrating the interactions one might have with ZeroN.
Physics Simulation and Education
ZeroN can serve as a tangible physics simulator by displaying and actuating physical objects under computationally controlled physical conditions. As a result, dynamic computer simulations that were previously possible only in the virtual world can turn into tangible reality. More importantly, users can interrupt or affect the simulation process by blocking actuation with their hands or by introducing other physical objects into the ZeroN space.
Understanding Kepler’s Law
In this application, users can simulate a planet's movement in the solar system by placing, at the simulation's center, a static object that represents the center of mass as the Sun, around which the ZeroN revolves like a planet. Users can change the distance between the Sun and the planet, which makes the ZeroN snap to another orbit. The resulting changes can be observed and felt in its motion and speed. A digital projection shows the area that a line joining ZeroN and the Sun sweeps out during a certain period of time, confirming Kepler's second law (see figure 15).
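Kepler's second law states that this swept area is constant over equal time intervals. A small sketch of the check the projection visualizes, computed from sampled 2D orbit positions (illustrative, not the system's code):

```python
import numpy as np

def swept_area(sun, positions):
    """Area swept by the Sun-ZeroN line over a sequence of 2D samples.

    Approximates the orbit segment as triangles between consecutive
    position samples; equal time windows should give equal areas.
    """
    sun = np.asarray(sun, dtype=float)
    area = 0.0
    for p, q in zip(positions, positions[1:]):
        v1 = np.asarray(p, dtype=float) - sun
        v2 = np.asarray(q, dtype=float) - sun
        area += 0.5 * abs(v1[0] * v2[1] - v1[1] * v2[0])  # triangle area
    return area
```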
Three-Body Problem
In this application, users can generate a gravity field by introducing multiple passive objects that represent fixed centers of gravity. A ZeroN placed next to these objects will orbit them according to the result of the three-body simulation. Users can add to or change the gravitational field simply by placing more passive objects, which are identified by the tabletop interface setup (see figure 15).
Architectural Planning
While there has been much research exploring tangible interfaces in the space of architectural planning, some of the essential components, such as lights or cameras, cannot be represented as tangible objects that can be directly manipulated. For instance, the Urp system allows users to directly control the arrangement of physical buildings, but lighting can only be controlled by rotating a separate time dial. While it is not our goal to argue that direct manipulation outperforms indirect manipulation, there are certainly various scenarios where direct manipulation of a tangible representation is important. We developed two applications for gathering users' feedback.
Lighting Control
We developed an application for controlling external architectural lighting, in which users can grab and place a Sun in the air to control the digital shadows cast by physical models on the tabletop surface. The computer can simulate changes in the position of the lighting, such as changes over the course of a day, and the representative Sun is actuated to reflect these changes.
Camera Path Control
Users can create 3D camera paths for rendering virtual scenes using ZeroN as a camera. Attaching ZeroN to the camera icon displayed on the surface turns it into a camera object. Users can then hold ZeroN in one position for a few seconds to initiate a recording interaction. When users draw a 3D path in the air and release ZeroN, the camera is sent back to its initial position and then moved along the previously recorded 3D trajectory. On an additional screen, users can watch the virtual scene of their model from the camera's perspective in real time. If users want to edit this path, they can intervene in the camera's path and redraw another path starting from the camera's exact current position.
3D Motion Prototyping
Creating and editing 3D motion for animation is a long and complex process with conventional interfaces, requiring expert knowledge of the software even for simple prototyping. With the record and play-back interaction, users can easily prototype the 3D movement of an object and watch it play back in the real world. The motion can also be mapped to a 3D digital character moving accordingly on the screen within a dynamic virtual environment. As a result, users can not only see but also feel the 3D motion of the object they created. They can accomplish this through a simple series of gestures: long-hold and release.
Entertainment: Tangible 3D Pong in Physical Space
Because the movement of a physical object can be arbitrarily programmed, ZeroN can be used for digital entertainment. We partially implemented and demonstrated a Tangible 3D Pong application with ZeroN as the ping-pong ball. In this scenario, users play a computer-enhanced pong game with a floating ball whose physical behavior is computationally programmed. Users can hit or block the movement of ZeroN to change the trajectory of the ball, and they can add computational constraints to the game by placing a physical object in the interactive space, as in figure 18. While this partially implemented application reveals interesting challenges, it suggests a new potential infrastructure for computer entertainment, where humans and computation, embodied in the motion of physical objects, are in a tight loop of interaction.
INITIAL REFLECTION AND DISCUSSION
We demonstrated our prototype to users to gather initial feedback and recruited several participants to try out each application. The purpose of this study was to evaluate our design, rather than to demonstrate the practicality of each application. Below we discuss several unique issues that we discovered through this observation.
Leaving a Physical Object in the Air
In the camera path control application, users appreciated the fact that they could leave a physical camera object in the air and review and edit the trajectory in a tangible way. Some commented that the latency in the electromagnet's stability update (between the user's displacement of the object and the electromagnet's update of the stable position) creates confusion. In the lighting control application, a user commented that a system that enables the object to be held in a position in the air would support better discussion with a collaborator. Many participants also pointed out the issue of lateral oscillation, which we are working to improve.
Interaction Legibility
In the physics education application, several users commented that not being able to see the physical relationships between 'planets' made it harder to anticipate how to interact with the system, or what would happen if they touched and moved its parts. Being able to actuate an object in free space without mechanical linkages allows more degrees of freedom of movement and allows access from all orientations. On the other hand, it decreases the legibility of the interaction by making the mechanical linkages invisible. In contrast, with a historical orrery (figure 19), where the movement of the 'planets' is constrained by mechanical connections, users can immediately understand the freedom of movement that the mechanical structure affords.
One possible way to compensate for this loss of legibility is to rely on graphical projection or subtle movements of the objects to indicate the constraints on movement. Carefully choosing applications where the gain in freedom outweighs the loss of legibility was our criterion for selecting application scenarios.
TECHNICAL EVALUATION
Maximum Levitation Range
The maximum range of magnetic levitation is limited by several factors. While our circuits can handle higher currents than currently used, an increased maximum range is limited by the heat generated in the coils. We used a 24 V power supply, from which we drew 2 A. Above that power, the heat generated by the electromagnet begins to melt its form core. The current prototype can levitate ZeroN up to 7.4 cm, measured from the bottom of the hall-effect sensor to the center of our spherical magnet. To scale up the system, a cooling system would need to be added on top of the coil.
Speed of Actuation
The motors used in the system can carry the electromagnet with a maximum velocity of 30.5 cm/s and a top acceleration of 6.1 m/s². The dynamic response of ZeroN's inertia is the main limit on acceleration: because of the response properties of this second-order system (the electromagnet and ZeroN), larger accelerations fail to overcome ZeroN's inertia and lead to ZeroN being dropped. Experiments measuring this limit show that a lateral acceleration of 3.9 m/s² can drop the ZeroN.
Resolution and Oscillation
If we frame our system as a 3D volumetric (physical) display in which only one cluster of voxels can be turned on at a time, we need to define the resolution of the system. Our 2D linear actuators can position the electromagnet at 250,000 different positions on each axis, and there is no theoretical limit to the resolution of vertical control. However, vertical and horizontal oscillation of the levitated object makes it difficult to call this the true system resolution. In the current prototype, ZeroN oscillates within 1.4 cm horizontally and 0.2 cm vertically around the set position when moved. We call the regions swept by oscillation "blurry," with a "focused" area at the center.
Robustness of Magnetic Levitation
Robust levitation is a key factor for providing users with the sensation of an invisible mechanical connection to a fixed point in the air. We conducted a series of experiments to measure how much force can be exerted on ZeroN without displacing it from a stable point of suspension. For these experiments, we attached the levitated magnet to a linear spring scale that can measure up to 1.2 N and pulled it in the directions of 0° (horizontal), 15°, 30°, 45°, 60°, 75°, and 90° (vertical). The average of five measurements is plotted in figure 20.
TECHNICAL LIMITATIONS AND FUTURE WORK
Lateral oscillation was reported as the biggest issue to correct in our application scenarios. We plan to implement satellite coils around the main electromagnet that can impose a magnetic force in a lateral direction, to eliminate lateral wiggling and provide better haptic feedback. Another limitation of the current prototype is the limited vertical actuation range. This can be addressed by carefully designing the magnetic controller with better range-sensing capabilities and choosing a geometry for the electromagnet that increases the range without overheating the coil. A desirable extension is to use an array of hall-effect sensors for 3D tracking, which would provide more robust, low-latency object tracking without occlusion. We encountered difficulties using hall-effect sensor arrays in conjunction with our magnetic levitation system because of the strong magnetic field distortions caused by our electromagnets. We believe this problem can be overcome in the future by subtracting the field generated by the electromagnets through precise calibration of the dynamic magnetic field. To avoid these difficulties in the short term, we added vision tracking to our prototype, even though this limits hand input to areas that do not occlude the camera's view.
Levitating Multiple Objects
While the current research focused on identifying challenges in interacting with one levitated object, it is natural to imagine interaction with multiple objects in mid-air. A scalable solution would be to use an array of solenoids. Under such a setup, a magnet can be positioned at, or moved to, an arbitrary position between the centers of two or more solenoids by passing the appropriate amount of current to each solenoid. This is analogous to pulling and hanging a ball with multiple invisible magnetic strings connected to the centers of the solenoids. However, it will be challenging to position two or more magnets in close proximity, or at similar x, y coordinates, due to magnetic field interference. One approach to this issue might be to levitate switchable magnets, turning them on and off to time-multiplex the influence that each object receives from the solenoids. We leave this concept for future research.
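As a rough sketch of the 'invisible strings' idea, the holding current could be split bilinearly among the four solenoids nearest the target position; the regular grid pitch and the linear-superposition assumption are simplifications for illustration only.

```python
def coil_currents(x, y, pitch, total_current):
    """Split total_current bilinearly among the four nearest grid coils.

    Assumes solenoids on a regular grid with spacing `pitch` and that
    forces superpose roughly linearly between neighboring coils.
    """
    i, j = int(x // pitch), int(y // pitch)        # lower-left coil index
    fx, fy = (x % pitch) / pitch, (y % pitch) / pitch
    return {
        (i,     j):     total_current * (1 - fx) * (1 - fy),
        (i + 1, j):     total_current * fx * (1 - fy),
        (i,     j + 1): total_current * (1 - fx) * fy,
        (i + 1, j + 1): total_current * fx * fy,
    }
```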
CONCLUSION
This paper presents the concept of 3D mid-air tangible interaction. To explore this concept, we developed a magnetic control system that can levitate and actuate a permanent magnet in three-dimensional space, combined with an optical tracking and display system that projects images on the levitating object. We extended interaction scenarios that were previously constrained to 2D tabletop interaction into mid-air space and developed novel interaction techniques. Raising tabletop tangible interfaces into the 3D space above the surface opens up many opportunities and leaves many interaction design challenges. Our focus has been to explore these interaction modalities, and although the current applications expose many challenges, we are encouraged by what the current system enables and will continue to develop scalable mid-air tangible interfaces. We also envision that ZeroN could be extended to the manipulation of holographic displays: when 3D display technologies mature, levitated objects could be directly coupled with holographic images projected in the air. We believe that ZeroN is the beginning of an exploration of this space within the larger field of future interaction design. One could imagine interfaces where discrete objects become like 3D pixels, allowing users to create and manipulate forms with their hands.
Inventor of ZeroN
Jinha Lee and Rehmi Post
MIT Media Laboratory 75 Amherst St. Cambridge, MA 02139 USA
Advisor: Hiroshi Ishii
MIT Media Laboratory 75 Amherst St. Cambridge, MA 02139 USA
RECAP: All Tools in Brief
The tools are:
 Beyond – Collapsible Tools and Gestures for Computational Design
 SpaceTop: Integrating 2D and Spatial 3D Interactions in a See-through Desktop Environment
 WYSIWYF – "what you see is what you feel"
 ZeroN: Mid-Air Tangible Interaction Enabled by Computer Controlled Magnetic Levitation
Disadvantages of the tools
I. Cost: once released, the high price of these products will be a major barrier to purchase.
II. Single-user interface: multiple users cannot access the tools simultaneously.
III. Portability: the tools are not portable; only Beyond is relatively lightweight, and even it requires a heavy camera.
IV. Usability: the tools are not user-friendly and can be operated only by trained people.
V. Environment: a proper environment is required; the tools have only been tested in the lab.
VI. Age restriction: only people aged 18 or above may use these products [age limit specified by the lab].
“Today, we started by talking about the boundary, but if we remove
this boundary, the only boundary left is our imagination.”
REFERENCES
Reference of Beyond – Collapsible Tools and Gestures for Computational Design
[1] Ishii, H. and Ullmer, B. Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms. Proceedings of
CHI’97, ACM Press, 1997. 234-241.
[2] Bae, S., Balakrishnan, R. and Singh, K. "ILoveSketch: As-natural-as-possible sketching system for creating 3D curve models," ACM Symposium on User Interface Software and Technology, 2008.
[3] Oblong G-speak. http://www.oblong.com
[4] Koike, H., Xinlei, C., Nakanishi, Y., Oka, K., and Sato, Y. Two-handed drawing on augmented desk. In CHI '02 Extended Abstracts on Human Factors in Computing Systems, New York, NY, USA, 2002, 760–761.
[5] P. Mistry, K. Sekiya, A. Bradshaw. Inktuitive: An Intuitive Physical Design Workspace. In the Proceedings of 4th
International Conference on Intelligent Environments (IE08). 2008.
[6] Ishii, H., Underkoffler, J., Chak, D., Piper, B., Ben-Joseph, E., Yeung, L., Kanji, Z. Augmented urban planning workbench: overlaying drawings, physical models and digital simulation. In Proceedings of ISMAR, 2002.
[7] Yokokohji, Y., Hollis, R.L., Kanade, T. Vision-based visual/haptic registration for WYSIWYF display. Intelligent Robots and Systems, 1996.
[8] Kamuro, S., Minamizawa, K., Kawakami, N., Tachi, S., Pen De Touch, International Conference on Computer Graphics and
Interactive Techniques archive SIGGRAPH '09: Posters, 2009.
[9] Inami, M., Kawakami, N., Sekiguchi, D., Yanagida, Y., Maeda, T., Tachi, S. Visuo-haptic display using head-mounted projector. Virtual Reality 2000 Proceedings, IEEE, 2000. 233–240.
[10] Plesniak, W.J.; Pappu, R.S.; Benton, S.A. Haptic holography: a primitive computational plastic. Proceedings of the IEEE
Volume 91, Issue 9, Sept. 2003. 1443 – 1456.
[11] FRONTDESIGN Sketch Furniture http://www.youtube.com/watch?v=8zP1em1dg5k
[12] Igarashi, T., Matsuoka, S., and Tanaka, H. Teddy: a sketching interface for 3D freeform design. In Proc. of the 26th Annual Conference on Computer Graphics and Interactive Techniques, ACM Press/Addison-Wesley Publishing Co., New York, NY, 1999. 409–416.
[13] Igarashi, T. and Hughes, J. F. A suggestive interface for 3D drawing. UIST 2001. 173–181.
[14] Sensable Technologies PhanTOM. http://www.sensable.com/haptic-phantom-desktop.htm
[15] Lee, J., Head Tracking Method Using Wii Remote. http://johnnylee.net/projects/wii/
[16] Aliakseyeu, D., Martens, J., Rauterberg, M. A computer support tool for the early stages of architectural design. Interacting with Computers, v.18 n.4, July 2006. 528–555.
[17] Wang, Y., Biderman, A., Piper, B., Ratti, C., Ishii, H. Tangible User Interfaces (TUIs): A Novel Paradigm for GIS. Transactions in GIS, 2004. 407–421.
[18] Leithinger, D., Kumpf, A., Ishii, H. Relief. http://tangible.media.mit.edu/project.php?recid=132
Reference of SpaceTop: Integrating 2D and Spatial 3D Interactions in a See-through
Desktop Environment
[1] Agarawala, A., and Balakrishnan, R., 2006. Keepin' it real: pushing the desktop metaphor with physics, piles and the pen.
CHI '06, 1283–1292.
[2] Benko, H., and Feiner, S. 2006. Balloon selection: A multi-finger technique for accurate low-fatigue 3D selections. 3DUI '06, 79–86.
[3] Benko, H., Ishak, E., and Feiner, S. 2005. Cross-dimensional gestural interaction techniques for hybrid immersive environments. VR '05, 209–216.
[4] Hachet, M., Bossavit, B., Cohé, A., and Rivière, J., 2011. Toucheo: multitouch and stereo combined in a seamless
workspace. UIST '11, 587– 592.
[5] Hilliges, O., Kim, D., Izadi, S., Weiss, M., and Wilson, A. 2012. HoloDesk: direct 3d interactions with a situated see-
through display. CHI '12, 1283–1292.
[6] Olwal, A., Lindfors, C., Gustafsson, J., Kjellberg, T., and Mattsson, L. 2005. ASTOR: An autostereoscopic optical see-through augmented reality system. ISMAR '05, 24–27.
[7] Robertson, G., Czerwinski, M., Larson, K., Robbins, D., Thiel, D., and Dantzich, M. 1998. Data mountain: using spatial memory for document management. UIST '98, 153–162.
[8] Schmandt, C. 1983. Spatial input/display correspondence in a stereoscopic computer graphic work station. SIGGRAPH
'83, 253–261.
[9] Treskunov, A., Kim, S. W., Marti, S. 2011. Range Camera for Simple behind Display Interaction. IAPR MVA '11, 160–163.
[10] Wilson, A. Using a depth camera as a touch sensor. ITS '10, 69–72.
[11] Wilson, A. 2006. Robust computer vision-based detection of pinching for one and two-handed gesture input. UIST '06,
255–258.
Reference of WYSIWYF –“what you see is what you feel”
[1] http://www.ted.com/speakers/jinha_lee
[2] http://tangible.media.mit.edu/
[3] http://vimeo.com/60619666
[4] http://www.asdfnews.com/#/
[5] http://leejinha.com/WYCIWYW
[6] http://leejinha.com/ABOUT
[7] Benko, H., Ishak, E., and Feiner, S. 2005. Cross-dimensional gestural interaction techniques for hybrid immersive environments. VR '05, 209–216.
[8] Hachet, M., Bossavit, B., Cohé, A., and Rivière, J. 2011. Toucheo: multitouch and stereo combined in a seamless workspace. UIST '11, 587–592.
Reference of ZeroN: Mid-Air Tangible Interaction Enabled by Computer Controlled Magnetic
Levitation
1. Baudisch, P., Becker, T., and Rudeck, F. 2010. Lumino: tangible building blocks based on glass fiber bundles. In ACM SIGGRAPH 2010 Emerging Technologies (SIGGRAPH '10). ACM, New York, NY, USA, Article 16, 1 page.
2. Berkelman, P. J., Butler, Z. J., and Hollis, R. L. "Design of a Hemispherical Magnetic Levitation Haptic Interface Device," 1996 ASME IMECE, Atlanta, DSC-Vol. 58, pp. 483–488.
3. Grossman, T. and Balakrishnan, R. 2006. The design and evaluation of selection techniques for 3D volumetric displays. In ACM UIST '06. 3–12.
4. Grossman, T. and Wigdor, D. Going deeper: a taxonomy of 3D on the tabletop. In IEEE Tabletop '07. 2007. p. 137-144.
5. Hilliges, O., Izadi, S., Wilson, A. D., Hodges, S., Garcia-Mendoza, A., and Butz, A. 2009. Interactions in the air: adding further depth to interactive tabletops. In Proceedings of the 22nd Annual ACM Symposium on User Interface Software and Technology (UIST '09). ACM, New York, NY, 139–148.
6. Hollis, R. L. and Salcudean, S. E. 1993. Lorentz levitation technology: a new approach to fine motion robotics,
teleoperation, haptic interfaces, and vibration isolation, In Proc. 6th Int’l Symposium on Robotics Research, October 2-5 1993.
7. Ishii, H. and Ullmer, B. 1997. Tangible bits: towards seamless interfaces between people, bits and atoms. In Proceedings
of the CHI'97. ACM, New York, NY, 234-241.
8. Iwata, H., Yano, H., Nakaizumi, F., and Kawamura, R. 2001. Project FEELEX: adding haptic surface to graphics. In Proceedings of SIGGRAPH '01.
9. Jorda, S. 2010. The reactable: tangible and tabletop music performance. In Proceedings of the 28th International Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '10). ACM, New York, NY, USA, 2989–2994.
10. Massie, T. H. and Salisbury, K. "The PHANTOM Haptic Interface: A Device for Probing Virtual Objects." Proceedings of the ASME Winter Annual Meeting, Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, 1994.
11. Pangaro, G., Maynes-Aminzade, D., and Ishii, H. 2002. The actuated workbench: computer-controlled actuation in tabletop tangible interfaces. In Proceedings of the 15th Annual ACM Symposium on User Interface Software and Technology (UIST '02). ACM, New York, NY, USA, 181–190.
12. Patten, J., Ishii, H., Hines, J., and Pangaro, G. 2001. Sensetable: a wireless object tracking platform for tangible user interfaces. In CHI '01. ACM, New York, NY, 253–260.
13. Patten, J., Recht, B., and Ishii, H. 2006. Interaction techniques for musical performance with tabletop tangible interfaces. In Proceedings of the 2006 ACM SIGCHI International Conference on Advances in Computer Entertainment Technology (ACE '06). ACM, New York, NY, USA, Article 27.
14. Patten, J. and Ishii, H. 2007. Mechanical constraints as computational constraints in tabletop tangible interfaces. In
Proceedings of the SIGCHI conference on Human factors in computing systems (CHI '07). ACM, New York, NY, USA, 809-818.
15. Piper, B., Ratti, C., and Ishii, H. Illuminating Clay: A 3-D Tangible Interface for Landscape Analysis. Proceedings of CHI 2002, 355–364.
16. Plesniak, W. J., "Haptic holography: an early computational plastic", Ph.D. Thesis, Program in Media Arts and Sciences,
Massachusetts Institute of Technology, June 2001.
17. Poupyrev, I., Nashida, T., Maruyama, S., Rekimoto, J., and Yamaji, Y. 2004. Lumen: interactive visual and shape display for calm computing. In ACM SIGGRAPH 2004 Emerging Technologies (SIGGRAPH '04), Heather Elliott-Famularo (Ed.). ACM, New York, NY, USA, 17.
18. Poupyrev, I., Nashida, T., and Okabe, M. 2007. Actuation and tangible user interfaces: the Vaucanson duck, robots, and shape displays. In Proceedings of the 1st International Conference on Tangible and Embedded Interaction (TEI '07). ACM, New York, NY.
19. Rekimoto, J., Ullmer, B., and Oba, H. 2001. DataTiles: a modular platform for mixed physical and graphical interactions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '01). ACM, New York, NY, USA, 269–276.
20. Rosenfeld, D., Zawadzki, M., Sudol, J., and Perlin, K. Physical objects as bidirectional user interface elements. IEEE
Computer Graphics and Applications, 24(1):44–49, 2004.
21. Sugimoto, M., Kagotani, G., Kojima, M., Nii, H., Nakamura, A., and Inami, M. 2005. Augmented coliseum: display-based computing for augmented reality inspiration computing robot. In ACM SIGGRAPH 2005 Emerging Technologies (SIGGRAPH '05), Donna Cox (Ed.). ACM, New York, NY, USA, Article 1.
22. Underkoffler, J. and Ishii, H. 1999. Urp: a luminous-tangible workbench for urban planning and design. In CHI '99. ACM,
New York, NY, 386-393.
23. Weiss, M., Schwarz, F., Jakubowski, S., and Borchers, J. 2010. Madgets: actuating widgets on interactive tabletops. In Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology (UIST '10). ACM, New York, NY, 293–302.
24. Art+Com's Kinetic Sculpture: http://www.artcom.de/en/projects/project/detail/kinetic-sculpture/

More Related Content

What's hot (20)

How AR/VRcan be used in urban planning and interior designing
How AR/VRcan be used in urban planning and interior designingHow AR/VRcan be used in urban planning and interior designing
How AR/VRcan be used in urban planning and interior designing
 
Virtual and additional reality
Virtual and additional realityVirtual and additional reality
Virtual and additional reality
 
3 d internet final
3 d internet final3 d internet final
3 d internet final
 
towards 3d internet
towards 3d internettowards 3d internet
towards 3d internet
 
3d internet
3d internet3d internet
3d internet
 
Digital design
Digital designDigital design
Digital design
 
3d Internet(mmk)
3d Internet(mmk)3d Internet(mmk)
3d Internet(mmk)
 
3d internet ppt
3d internet ppt3d internet ppt
3d internet ppt
 
Human Depth Perception
Human Depth PerceptionHuman Depth Perception
Human Depth Perception
 
Augmented reality
Augmented realityAugmented reality
Augmented reality
 
3 D Internet
3 D Internet3 D Internet
3 D Internet
 
3D Internet Seminar PPT - OECLIB
3D Internet Seminar PPT - OECLIB3D Internet Seminar PPT - OECLIB
3D Internet Seminar PPT - OECLIB
 
Virtual Reality
Virtual RealityVirtual Reality
Virtual Reality
 
augmented reality
augmented realityaugmented reality
augmented reality
 
3d internet
3d internet3d internet
3d internet
 
Virtual Reality(VR)
Virtual Reality(VR)Virtual Reality(VR)
Virtual Reality(VR)
 
3D INTERNET
3D INTERNET3D INTERNET
3D INTERNET
 
3D Virtual Worlds
3D Virtual Worlds3D Virtual Worlds
3D Virtual Worlds
 
Augmented reality
Augmented realityAugmented reality
Augmented reality
 
Ms2
Ms2Ms2
Ms2
 

Similar to Grab and Manipulate Pixels with Collapsible Tools

An analysis of desktop control and information retrieval from the internet us...
An analysis of desktop control and information retrieval from the internet us...An analysis of desktop control and information retrieval from the internet us...
An analysis of desktop control and information retrieval from the internet us...eSAT Journals
 
An analysis of desktop control and information retrieval from the internet us...
An analysis of desktop control and information retrieval from the internet us...An analysis of desktop control and information retrieval from the internet us...
An analysis of desktop control and information retrieval from the internet us...eSAT Publishing House
 
Human Computer Interaction Based HEMD Using Hand Gesture
Human Computer Interaction Based HEMD Using Hand GestureHuman Computer Interaction Based HEMD Using Hand Gesture
Human Computer Interaction Based HEMD Using Hand GestureIJAEMSJORNAL
 
Imagining a Physical Future for Digital Journalism
Imagining a Physical Future for Digital JournalismImagining a Physical Future for Digital Journalism
Imagining a Physical Future for Digital JournalismDataJournalismUK
 
COMUTER GRAPHICS NOTES
COMUTER GRAPHICS NOTESCOMUTER GRAPHICS NOTES
COMUTER GRAPHICS NOTESho58
 
IRJET- 3D Drawing with Augmented Reality
IRJET- 3D Drawing with Augmented RealityIRJET- 3D Drawing with Augmented Reality
IRJET- 3D Drawing with Augmented RealityIRJET Journal
 
Virtual Smart Phones
Virtual Smart PhonesVirtual Smart Phones
Virtual Smart PhonesIRJET Journal
 
Surfacecomputerppt 130813063644-phpapp02
Surfacecomputerppt 130813063644-phpapp02Surfacecomputerppt 130813063644-phpapp02
Surfacecomputerppt 130813063644-phpapp02Ankit Singh
 
sixth sense presentation
sixth sense presentationsixth sense presentation
sixth sense presentationAayush Agrawal
 
augmented reality paper presentation
augmented reality paper presentationaugmented reality paper presentation
augmented reality paper presentationVaibhav Mehta
 
Implementation of Interactive Augmented Reality in 3D Assembly Design Present...
Implementation of Interactive Augmented Reality in 3D Assembly Design Present...Implementation of Interactive Augmented Reality in 3D Assembly Design Present...
Implementation of Interactive Augmented Reality in 3D Assembly Design Present...AIRCC Publishing Corporation
 
IMPLEMENTATION OF INTERACTIVE AUGMENTED REALITY IN 3D ASSEMBLY DESIGN PRESENT...
IMPLEMENTATION OF INTERACTIVE AUGMENTED REALITY IN 3D ASSEMBLY DESIGN PRESENT...IMPLEMENTATION OF INTERACTIVE AUGMENTED REALITY IN 3D ASSEMBLY DESIGN PRESENT...
IMPLEMENTATION OF INTERACTIVE AUGMENTED REALITY IN 3D ASSEMBLY DESIGN PRESENT...ijcsit
 
IMPLEMENTATION OF INTERACTIVE AUGMENTED REALITY IN 3D ASSEMBLY DESIGN PRESENT...
IMPLEMENTATION OF INTERACTIVE AUGMENTED REALITY IN 3D ASSEMBLY DESIGN PRESENT...IMPLEMENTATION OF INTERACTIVE AUGMENTED REALITY IN 3D ASSEMBLY DESIGN PRESENT...
IMPLEMENTATION OF INTERACTIVE AUGMENTED REALITY IN 3D ASSEMBLY DESIGN PRESENT...AIRCC Publishing Corporation
 
sixth sense technology 2014 ,by Richard Des Nieves,Bengaluru,kar,India.
sixth sense technology 2014 ,by Richard Des Nieves,Bengaluru,kar,India.sixth sense technology 2014 ,by Richard Des Nieves,Bengaluru,kar,India.
sixth sense technology 2014 ,by Richard Des Nieves,Bengaluru,kar,India.Richard Des Nieves M
 
microsoft Surface computer
microsoft Surface computer microsoft Surface computer
microsoft Surface computer Ashish Singh
 
microsoft Surface computer
microsoft Surface computermicrosoft Surface computer
microsoft Surface computerAshish Singh
 

Similar to Grab and Manipulate Pixels with Collapsible Tools (20)

An analysis of desktop control and information retrieval from the internet us...
An analysis of desktop control and information retrieval from the internet us...An analysis of desktop control and information retrieval from the internet us...
An analysis of desktop control and information retrieval from the internet us...
 
An analysis of desktop control and information retrieval from the internet us...
An analysis of desktop control and information retrieval from the internet us...An analysis of desktop control and information retrieval from the internet us...
An analysis of desktop control and information retrieval from the internet us...
 
Human Computer Interaction Based HEMD Using Hand Gesture
Human Computer Interaction Based HEMD Using Hand GestureHuman Computer Interaction Based HEMD Using Hand Gesture
Human Computer Interaction Based HEMD Using Hand Gesture
 
Imagining a Physical Future for Digital Journalism
Imagining a Physical Future for Digital JournalismImagining a Physical Future for Digital Journalism
Imagining a Physical Future for Digital Journalism
 
Cg notes
Cg notesCg notes
Cg notes
 
COMUTER GRAPHICS NOTES
COMUTER GRAPHICS NOTESCOMUTER GRAPHICS NOTES
COMUTER GRAPHICS NOTES
 
IRJET- 3D Drawing with Augmented Reality
IRJET- 3D Drawing with Augmented RealityIRJET- 3D Drawing with Augmented Reality
IRJET- 3D Drawing with Augmented Reality
 
14 585
14 58514 585
14 585
 
Virtual Smart Phones
Virtual Smart PhonesVirtual Smart Phones
Virtual Smart Phones
 
Surfacecomputerppt 130813063644-phpapp02
Surfacecomputerppt 130813063644-phpapp02Surfacecomputerppt 130813063644-phpapp02
Surfacecomputerppt 130813063644-phpapp02
 
sixth sense presentation
sixth sense presentationsixth sense presentation
sixth sense presentation
 
augmented reality paper presentation
augmented reality paper presentationaugmented reality paper presentation
augmented reality paper presentation
 
Implementation of Interactive Augmented Reality in 3D Assembly Design Present...
Implementation of Interactive Augmented Reality in 3D Assembly Design Present...Implementation of Interactive Augmented Reality in 3D Assembly Design Present...
Implementation of Interactive Augmented Reality in 3D Assembly Design Present...
 
IMPLEMENTATION OF INTERACTIVE AUGMENTED REALITY IN 3D ASSEMBLY DESIGN PRESENT...
IMPLEMENTATION OF INTERACTIVE AUGMENTED REALITY IN 3D ASSEMBLY DESIGN PRESENT...IMPLEMENTATION OF INTERACTIVE AUGMENTED REALITY IN 3D ASSEMBLY DESIGN PRESENT...
IMPLEMENTATION OF INTERACTIVE AUGMENTED REALITY IN 3D ASSEMBLY DESIGN PRESENT...
 
IMPLEMENTATION OF INTERACTIVE AUGMENTED REALITY IN 3D ASSEMBLY DESIGN PRESENT...
IMPLEMENTATION OF INTERACTIVE AUGMENTED REALITY IN 3D ASSEMBLY DESIGN PRESENT...IMPLEMENTATION OF INTERACTIVE AUGMENTED REALITY IN 3D ASSEMBLY DESIGN PRESENT...
IMPLEMENTATION OF INTERACTIVE AUGMENTED REALITY IN 3D ASSEMBLY DESIGN PRESENT...
 
sixth sense technology 2014 ,by Richard Des Nieves,Bengaluru,kar,India.
sixth sense technology 2014 ,by Richard Des Nieves,Bengaluru,kar,India.sixth sense technology 2014 ,by Richard Des Nieves,Bengaluru,kar,India.
sixth sense technology 2014 ,by Richard Des Nieves,Bengaluru,kar,India.
 
Show me
Show meShow me
Show me
 
Sixth Sense Technology
Sixth Sense Technology Sixth Sense Technology
Sixth Sense Technology
 
microsoft Surface computer
microsoft Surface computer microsoft Surface computer
microsoft Surface computer
 
microsoft Surface computer
microsoft Surface computermicrosoft Surface computer
microsoft Surface computer
 

Recently uploaded

08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking MenDelhi Call girls
 
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphSIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphNeo4j
 
Maximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxMaximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxOnBoard
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure servicePooja Nehwal
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking MenDelhi Call girls
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking MenDelhi Call girls
 
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersEnhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersThousandEyes
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonetsnaman860154
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesSinan KOZAK
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsEnterprise Knowledge
 
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | DelhiFULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhisoniya singh
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...shyamraj55
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 3652toLead Limited
 
How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?XfilesPro
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slidespraypatel2
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountPuma Security, LLC
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationMichael W. Hawkins
 

Recently uploaded (20)

08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
Vulnerability_Management_GRC_by Sohang Sengupta.pptx
Vulnerability_Management_GRC_by Sohang Sengupta.pptxVulnerability_Management_GRC_by Sohang Sengupta.pptx
Vulnerability_Management_GRC_by Sohang Sengupta.pptx
 
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphSIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
 
Maximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxMaximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptx
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
 
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersEnhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonets
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen Frames
 
The transition to renewables in India.pdf
The transition to renewables in India.pdfThe transition to renewables in India.pdf
The transition to renewables in India.pdf
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI Solutions
 
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | DelhiFULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
 
How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slides
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day Presentation
 

Grab and Manipulate Pixels with Collapsible Tools

  • 1. Page | 1 Reach into the computer & Grab a pixel Introduction: hroughout the history of computers we've been striving to shorten the gap between us and digital information, the gap between our physical world and the world in the screen where our imagination can go wild. And this gap has become shorter, shorter, and even shorter, and now this gap is shortened down to less than a millimeter, the thickness of a touch- screen glass, and the power of computing has become accessible to everyone. But I wondered, what if there could be no boundary at all? I started to imagine what this would look like. First, I introduce this tool in below which penetrates into the digital space, so when you press it hard on the screen, it transfers its physical body into pixels. Designers can materialize their ideas directly in 3D, and surgeons canpractice on virtual organs underneath the screen. So with this tool, this boundary has been broken. T
  • 2. Page | 2 Introduce first Tools:- Beyond – Collapsible Tools and Gestures for Computational Design Abstract ince the invention of the personal computer, digital media has remained separate from the physical world, blocked by a rigid screen. We present Beyond, an interface for 3-D design where users can directly manipulate digital media with physically retractable tools and hand gestures. When pushed onto the screen, these tools physically collapse and project themselves onto the screen, letting users perceive as if they were inserting the tools into the digital space beyond the screen. The aim of Beyond is to make the digital 3-D design process straightforward, and more accessible to general users by extending physical affordances to the digital space beyond the computer screen. Keywords 3D Interaction, Augmented Reality and Tangible UI, Pen and Tactile Input, Tactile & Haptic UIs, Pen-based UIs, Tangible UIs . ACM Classification Keywords H5.m. Information interfaces and presentation (e.g., HCI): H.5.2. Input Devices and Strategies. General Terms Design, Human Factors, Experimentation Introduction ecent developments in computer technologies have made the design process much more precise, and scalable. Despite this powerful role the computation plays in design, many designers and architects prefer to build physical models using physical tools and hands, employing their versatile senses and bodily expressions in their early stage of design. There has been no straightforward way to sketch and model 3D forms on the computer screen. S R
  • 3. Page | 3 Tangible User Interfaces have appeared as a strong concept to leverage the traditional ways of design with digital power, while preserving physical affordances,by blurring the boundary between the physical environment and cyberspace. In an attempt to diminish the separations between visual and tactile senses, which are critical for the design process, researchers of Augmented Reality (AR) have suggested several input devices and ways of displaying digital information in more realistic ways. Despite these efforts,a flat monitorand a mouse remain ourstandard interfaces fordesign, leaving digital media apart fromthe physical world blocked by a rigid screen. It is extremely hard for users to select a certain 3-dimensional coordinate in virtual space and sense the volume without wearing special display glasses and using complicated mechanical equipment’s. A parallel trend in CAD, development in gestural interfaces,has allowedusers to employ bodily expressions in data manipulation. However, simple combinations of mouse and gestures on 2D surface does not take full advantage of gestures as bodily expressions and can hardly cover the large number of the commands necessary in design. We present Beyond, an interface for3 dimensional design where users candirectly manipulate 3-D digital media with physically retractable tools and natural hand gestures. When pushed onto the screen, these tools can physically retract and project them- selves ontothe screen, letting users perceive as if they were inserting tools into the digital space beyond the screen. Ourresearch goal is to enable users to design by simply sketching and cutting digital medium using tools, supported by gestures, without having to look at multiple planes at the same time. We believe this effort will make digital 3-D design process straightforward, scalable and more accessible to general users. Related Work arious approaches have been taken to enable users to design in 3D in with more straightforward and intuitive manners by integrating input and output, and by providing users with tangible representations of digital media. The concept of WYSIWYF – “what you see is what you feel” – has been suggested in the domain of Augmented Reality in an attempt to integrate haptic and visual feedback. Technologies like stereoscopic glasses for 3-D display, holograms and wearable mechanical devices such as phantom that provide precise haptic feedbacks are invented and experimented within this context . However, many of these systems require users to wear devices, which are often heavy and cumbersome, intervening natural views and limiting behaviors of the users. Sand- like beads or actuated pin-displays are examples of deformable physical v
  • 4. Page | 4 materials built in anattempt to diminish the separation between input andoutput. However, they are oftennot scalable because solid forms embedded in physical materials are less malleable than pixels. In an effort to convey users’ intentions more intuitively, gesture based interactive sketch tools have been suggested. Most of the systems are based on pen-stroke gesture input, whose functions are limited to simple instant ones such as changing the plane and erasing objects. Oblong’s g-speak is a novel gestural interface platform that supports varieties of gestures and applications including 3-D drawings. However it is still a hard task to select a certaincoordinate inarbitrary space with these systems. What is beyond? eyond is a design platform that allows users to employ their gestures and physical tools beyond the screen in 3- D computational design. Collapsible tools are used in Beyond so that those can retract and project itself onto the screen, letting users perceive asif they were inserting the tools into the screen. This design enables users to perceive workspace in the screen as 3-D space within their physical reach, where computational parametric operationand human direct manipulation can occur together. As a result, this interface helps users design and manipulate digital media, with affordances they have with physical tools (figure 1). Another significant design decision was use of 3-D gestures. We came to the conclusion that gestures in 3D effectively helps users to convey their intentions that are abstract but at the same time related to spatial orshape-related senses. Types of gestural commands used in the Beyond are discussed more in details in the interaction section. B Figure 1: Direct manipulation of digital information beyond the screen with collapsible tools
  • 5. Page | 5 Beyond Prototype: he current Beyond prototype consists of retractable tools, a table-top display and an infrared position tracking system. The tools are designed to retract and stretch with two IR retro-reflective markers attached on both tips. As illustrated in figure 3, Vicon system composed of IR emitters and cameras is used to track these markers, letting the system obtain information about the location, length and tilt of the tools. An additional marker is attached to the users’ head for real time 3D rendering. We implemented two kinds of collapsible tools for the first prototype of Beyond: Pen and Saw (figure 2). Pen: The pen serves as a tool for drawing. This passive tool can specify any 3D coordinate in virtual space within its reach and draw shapes and lines. Saw: The saw serves as a tool for cutting and sculpting. It is designed to provide several different forms of physical actuation when users touched virtual objects. Gestures Beyond uses several gestural interaction techniques mediated by gloves tagged with IR reflective markers tracked by Vicon tracking technologies, which are developed by oblong industries. 3-D rendering techniques based on users’ head position In order to render the scene where physical and digital portions are seamlessly connected, we implemented a software to rear-ender 3D scenes based on users’ head position in real-time. This helps users perceive the objects rendered on the flat screen as 3-D object put behind the screen. T Figure 3: Mechanism for tracking users ' head and tools' positions. Figure 2:Collapsible pen (Top) and saw (bottom).
  • 6. Page | 6 Interaction with Beyond Direct Selection and Drawing Beyond allows users to directly select certain 3D specific coordinates within its physical reachof physical tools in virtual 3D space without looking at multiple planes or wearing head mounted display. By doing so it allows users to sketch in 3D shapes in a straightforward manner, help them externalizing their 3D images in their minds. Touching and Cutting Using Beyond-Saw, users can cut and trim any surface or shape by simply inserting the saw tool into the virtual space. When virtual objects are touched or cut by a tool, a slide actuator installed inside the saw tool creates force feedbacks, preventing the tool fromretracting. By doing so users can interact with digital media with better sense of volume and material properties of virtual objects. Gestural Interactions with Beyond Gestural commands effectively complement tools- mediated direct manipulation by conveying users’ intention to the system in intuitive manners. Users can define several different abstract shapes and operate functions while directly specifying coordinates with the collapsible tools. Figure 4 shows a few examples among several types of gestures. The current Beyond prototype provides shape-related gestural commands such as straight line, square, ellipse and function- related gestural commands including extrude, lock the drawing surface and move objects. For example users can do “straight” gestures to make lines they draw straight, and “extrude” gestures to extrude certain surfaces (Figure 5). New Work flow for 3-D Design Several interaction techniques illustrated in the previous sections can be merged and weaved together to create a new workflow for 3-D computer-aided design as followings. First, users can sketch rough design in 3D with free-line drawing techniques. The next step is to define discreet shape on top of its quick- sketch by specifying locations and Figure 4: Gestures used in Beyond Prototype.
Page | 7

In the middle of the design process, users can always modify their design by switching to other types of tools.

User Evaluations
Since the project is currently at its initial stage, we are planning to conduct comprehensive user evaluations in the future. However, the overall feedback we have received over several weeks has been that the Beyond platform helps users sketch and model the 3D shapes they have in mind, and that gestural commands complement the directness of tool-based interactions by decreasing ambiguity.

Discussion and Future Work
We introduced Beyond, a design platform that enables users to interact with 3D digital media using physically collapsible tools that seamlessly enter the virtual domain through a simple mechanism of collapse, projection, and actuation. Initial user evaluations showed that applying natural hand gestures to convey users' abstract intentions greatly complements tool-mediated direct manipulation. We presented the design and implementation of the first Beyond prototype, which used a Vicon location tracking system and the physically collapsible pen and saw. Beyond allows users to directly select a 3-dimensional coordinate in virtual space. We believe this will help a more diverse range of users access the technology. While many AR or TUI approaches to computational design do not scale well, or are too application-specific, due to the inherent rigidity of physical handles, the Beyond platform shows potential to be a more scalable and generalizable user interface by seamlessly transforming rigid physical parts into flexible pixels.

Figure 5: Rough 3D sketch (top) and abstract shape drawing (bottom) using gestures.
Page | 8

Since Beyond is at an early stage of development, we foresee several improvements. First, the entire system could be made more portable and lower-cost by using a simple touch screen and camera-based tracking (figure 6). Second, we plan to develop a more comprehensive gestural language for the design process, combined with direct pointing tools. We also plan to improve the force feedback of the active tools by using a series elastic actuator, which can create more precise and varied force profiles, allowing the system to express the tactile feedback of various material properties. Finally, we are planning to conduct extensive user evaluations of the system in the near future.

Figure 6: A more portable version of the Beyond prototype, using camera-based tracking and a touch screen.

Inventors of Beyond:
Jinha Lee, MIT Media Laboratory, 75 Amherst St., Cambridge, MA 02139 USA
Hiroshi Ishii, MIT Media Laboratory, 75 Amherst St., Cambridge, MA 02139 USA
Page | 9

Introduction about second tool:
Our two hands still remain outside the screen. How can we reach inside and interact with digital information using the full dexterity of our hands? At Microsoft Applied Sciences, we redesigned the computer and turned the little space above the keyboard into a digital workspace. By combining a transparent display with depth cameras that sense your fingers and face, you can now lift your hands from the keyboard, reach inside this 3D space, and grab pixels with your bare hands. Because windows and files have a position in real space, selecting them is as easy as grabbing a book off your shelf. You can then flip through a book while highlighting lines and words on the virtual touchpad below each floating window. Architects can stretch or rotate models directly with their two hands. In these examples, we are reaching into the digital world. It is explained below:
Page | 10

Introduce second Tool:-
SpaceTop: Integrating 2D and Spatial 3D Interactions in a See-through Desktop Environment

ABSTRACT
SpaceTop is a concept that fuses 2D and spatial 3D interactions in a single desktop workspace. It extends the traditional desktop interface with interaction technology and visualization techniques that enable seamless transitions between 2D and 3D manipulation. SpaceTop allows users to type, click, and draw in 2D, and to directly manipulate interface elements that float in the 3D space above the keyboard. It makes it possible to easily switch from one modality to another, or to use two modalities simultaneously with different hands. We introduce hardware and software configurations for co-locating these various interaction modalities in a unified workspace using depth cameras and a transparent display.

Figure 1: SpaceTop affords a) 3D direct spatial interaction, b) 2D direct touch, c) 2D indirect interaction, and d) typing. It aims to meld the seams between these modalities by accommodating them in the same unified space and enabling fast switching between them.
Page | 11

We describe new interaction and visualization techniques that allow users to interact with 2D elements floating in 3D space, and present results from a preliminary user study that indicate the benefits of such hybrid workspaces.

Author Keywords
3D UI; Augmented Reality; Desktop Management

ACM Classification Keywords
H.5.m. Information interfaces and presentation (e.g., HCI)

General Terms
Human Factors; Design; Measurement.

INTRODUCTION
Desktop computing today is primarily composed of 2D graphical user interfaces (GUIs) based on a 2D screen with input through a mouse or a touchscreen. While GUIs have many advantages, they can constrain the user due to limited screen space and interaction bandwidth, and there exist situations where users can benefit from more expressive spatial interactions. For instance, switching between overlapping windows on a 2D screen adds more cognitive load than arranging a stack of physical papers in 3D space. While there have been advances in sensing and display technologies, 3D spatial interfaces have not been widely employed in everyday computing. Despite the advantages of spatial memory and increased expressiveness, potential issues related to precision and fatigue make 3D desktop computing challenging. We present SpaceTop, an experimental prototype that brings a 3D spatial interaction space to desktop computing environments. We address the previously mentioned challenges in three interdependent ways. First, SpaceTop accommodates both conventional and 3D spatial interactions in the same space. Second, we enable users to switch between 3D I/O and conventional 2D input, or even use them simultaneously with both hands. Finally, we present new interaction and visualization techniques that allow users to interact with 2D elements floating in 3D space. These techniques aim to address the issues and confusion that arise from shifting between interactions of different styles and dimensions.

RELATED WORK
Previous work has explored 2.5D and 3D representations to better support spatial memory on the desktop. Augmented Reality systems exploit the cognitive benefits of co-locating 3D visualizations with direct input in a real environment, using optical combiners. This makes it possible to enable unencumbered 3D input to directly interact with situated 3D graphics in mid-air. SpaceTop extends these concepts with an emphasis on streamlining the switching between input modalities
Page | 12

in a unified I/O space, and on combining such 3D spatial interaction with other conventional input modalities to enable new interaction techniques. Other related research explores transitions between 2D and 3D I/O by combining multi-touch with 3D direct interaction, or through 2D manipulation of 3D stereoscopic images, with an emphasis on collaborative interaction with 3D data such as CAD models. Our work focuses on how daily tasks, such as document editing or task management, can be better designed with 3D spatial interactions in existing desktop environments.

SPACETOP IMPLEMENTATION
At first glance, SpaceTop looks similar to a conventional desktop computer, except for the transparent screen with the keyboard and mouse behind it. Users place their hands behind the screen to scroll on the bottom surface or type on the keyboard. Through the transparent screen, users can view graphical interface elements that appear to float, not only on the screen plane, but also in the 3D space behind it or on the bottom surface. Users can lift their hands off the bottom surface to grab and move floating windows or virtual objects using "pinch" gestures. We accommodate 3D direct interaction, 2D touch and typing with an optically transparent LCD screen and two depth cameras (Figure 2) in a 50×25×25 cm volume.

Display: Prototype LCD with per-pixel transparency
We use a display prototype by Samsung, designed to show graphics on a transparent LCD without a backlight. The 22" transparent LCD displays 1680×1050-pixel images at 60 Hz with 20% light transmission. It provides maximum transparency for white pixels and full opaqueness for black pixels. We use this unique per-pixel transparency to control the opacity of
Page | 13

graphical elements, allowing us to design UIs that do not suffer from the limitations of half-silver mirror setups, where pixels are always partially transparent. We ensure that all graphical elements include clearly visible opaque parts, and use additional lights in the physical space behind the screen to improve the visibility of the user's hands and the keyboard.

Head and hand tracking with depth cameras
One depth camera (Microsoft Kinect) faces the user and tracks the head to enable motion parallax. This allows the user to view graphics correctly registered on top of the 3D interaction space in which the hands are placed. Another depth camera points down towards the interaction space and detects the positions and pinch gestures of the user's hands. The setup also detects whether the user's hands are touching the 2D input plane, based on a previously published technique.
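Head-coupled motion parallax of this kind is typically implemented by recomputing an off-axis (asymmetric) view frustum each frame from the tracked eye position. The sketch below shows one standard formulation, assuming screen-centred coordinates in metres with the display in the z = 0 plane; the dimensions and function name are illustrative and not SpaceTop's actual code.

```python
import numpy as np

def off_axis_projection(head, screen_w, screen_h, near=0.05, far=10.0):
    """OpenGL-style asymmetric frustum for a head-coupled display.

    head: (x, y, z) eye position in screen-centred coordinates, metres,
    with the screen in the z = 0 plane and z > 0 towards the viewer.
    Pair with a view matrix that translates the scene by -head.
    """
    x, y, z = head
    # Frustum edges at the near plane, scaled from the screen rectangle.
    left   = (-screen_w / 2 - x) * near / z
    right  = ( screen_w / 2 - x) * near / z
    bottom = (-screen_h / 2 - y) * near / z
    top    = ( screen_h / 2 - y) * near / z
    m = np.zeros((4, 4))
    m[0, 0] = 2 * near / (right - left)
    m[1, 1] = 2 * near / (top - bottom)
    m[0, 2] = (right + left) / (right - left)
    m[1, 2] = (top + bottom) / (top - bottom)
    m[2, 2] = -(far + near) / (far - near)
    m[2, 3] = -2 * far * near / (far - near)
    m[3, 2] = -1.0
    return m

# Example: viewer 60 cm in front of a 47 cm x 30 cm screen, head off-centre.
proj = off_axis_projection((0.05, 0.02, 0.60), 0.47, 0.30)
```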
Page | 14

INTERACTION AND VISUALIZATION

2D in 3D: Stack Interaction
In SpaceTop, graphical UI elements are displayed on the screen or in the 3D space behind it. In our scenarios, details or 2D views of 3D elements are shown on the foreground plane (coinciding with the physical screen). While objects can take various forms in 3D space, we chose to focus on window interaction and 2D content placed in 3D space, so that the system can be used for existing desktop tasks. Another advantage of the window form factor in 3D is that it saves space when documents are stacked. It can, however, become challenging to select a particular window from a dense stack. We designed various behaviors of stacks and windows to ease retrieval, as illustrated in Figures 3a–f. Users can drag-and-drop a window from one stack to another to cluster it. As the user hovers a finger inside a stack, the layer closest to the finger is enlarged and becomes more opaque (see the sketch after this section). When the user pinches on the stack twice, the dense stack expands to facilitate selection. The surface area below the stack is used for 2D gestures, such as scrolling. Users can, for example, scroll on the bottom surface of the stack to change the order of the documents in it. We designed a Grid and Cursor system to simplify the organization of items in 3D. It provides windows and stacks with passive reference cues, which help guide the user's hands. The cursor is represented as two orthogonal lines parallel to the ground plane that intersect at the user's fingertips. These lines penetrate the grid box that represents the interaction volume, as illustrated in Figure 3a.

Modeless Interaction
Our guiding principle for designing the high-level interfaces and visualizations is to create a seamless, modeless workflow. Experiments have shown that when users shift from one interaction mode to another, they have to be visually guided with care, so that they can mentally accommodate the new interaction model. In particular, smooth transitions between 2D and 3D views, and between indirect and direct interactions, are challenging, since each is built on a largely different mental model of I/O.
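As a rough illustration of the hover behavior described above, this sketch picks the stack layer nearest the tracked fingertip and fades scale and opacity with distance. The falloff constants and function name are assumptions for illustration only.

```python
import numpy as np

def highlight_layers(finger_z, layer_zs, falloff=0.04, max_scale=1.3):
    """Return per-layer (scale, opacity) so that the layer nearest the
    fingertip is enlarged and most opaque; falloff is in metres."""
    layer_zs = np.asarray(layer_zs, dtype=float)
    dist = np.abs(layer_zs - finger_z)
    weight = np.exp(-(dist / falloff) ** 2)  # 1 at the finger, ~0 far away
    scales = 1.0 + (max_scale - 1.0) * weight
    opacities = 0.3 + 0.7 * weight           # background layers stay faint
    return list(zip(scales, opacities))

# Example: finger hovering 2 cm into a stack of five windows.
print(highlight_layers(0.02, [0.00, 0.01, 0.02, 0.03, 0.04]))
```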
Page | 15

Sliding Door: Entering the Virtual 3D Space
In the 2D interaction mode, the user can type or use a mouse or touchpad to interact with SpaceTop, as in any conventional 2D system. When the user lifts her hands, the foreground window slides up or fades out to reveal the 3D space behind the main window. When the hands touch the bottom surface again, the foreground window slides back down, returning the user to 2D-mapped input. The sliding-door metaphor helps users smoothly shift focus from the "main" 2D document to "background" content floating behind it (see Figures 3b–c).

Shadow Touchpad: One Touchpad per Window
Touchpad interaction with 2D windows floating in 3D space introduces interesting challenges. Especially when working with more than one window, it is not straightforward how to move a cursor from one window to another. An indirect mapping between the touchpad and a window can conflict with the direct mapping that each window forms with the 3D space. To address this issue, we propose a novel concept called the Shadow Touchpad, which emulates a 2D touchpad below each of the tilted 2D documents floating in 3D space. When a window is pulled up, a shadow is projected onto the bottom surface, and its area functions as a touchpad that allows the user to interact with that window. When multiple windows are displayed, each has its own shadow touchpad area.
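One plausible way to realize this mapping is to normalize the touch point within a window's shadow rectangle and scale it to that window's pixel coordinates, as in the hypothetical sketch below; the rectangle representation and names are assumptions, not SpaceTop's actual code.

```python
def shadow_to_window(touch_xy, shadow_rect, window_size):
    """Map a touch on a window's shadow area to that window's 2D coords.

    touch_xy:    (x, y) touch point on the bottom surface
    shadow_rect: (x0, y0, width, height) of the window's shadow
    window_size: (width_px, height_px) of the window's content
    """
    x0, y0, w, h = shadow_rect
    u = (touch_xy[0] - x0) / w  # normalized 0..1 across the shadow
    v = (touch_xy[1] - y0) / h
    if not (0.0 <= u <= 1.0 and 0.0 <= v <= 1.0):
        return None             # touch landed outside this window's shadow
    return (u * window_size[0], v * window_size[1])
```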
Page | 16

Inter-shadow translation of 2D elements
Users can move 2D objects (e.g., text and icons) from one window to another by dragging the object between the corresponding shadow areas. The object is visualized as a floating 3D object during the transition between the two shadow touchpads, similar to the balloon selection technique, as shown in Figure 5.

Task Management Scenario
Effective management of multiple tasks has been a central challenge in everyday desktop computing. In SpaceTop, background tasks occupy a fixed position in the 3D space behind the main task, allowing users to rely on their spatial memory to retrieve them. This spatial persistence mitigates some of the cognitive load associated with conventional task management systems. The sliding-door and stack interactions can be directly applied to categorize, remember, and retrieve tasks (Figure 4).

Bimanual, Multi-fidelity Interaction
Interesting interactions arise when each hand interacts in a different style and fidelity. The following applications demonstrate the potential of such bimanual, multi-fidelity interaction.

Document Editing Scenario
When composing a document, the user often needs to copy portions from other documents, such as previous drafts or outside sources. SpaceTop allows the user to use the dominant hand to scroll through the main document while simultaneously using the other hand to quickly flip through a pile of other documents, visualized in 3D space, to find a relevant piece of text. The user can then drag that text into the main document through the more precise touchpad interaction. In this way, SpaceTop lets users quickly switch back and forth between low-bandwidth, high-precision interactions (copying lines) and high-bandwidth, low-precision interactions (rifling through documents), or use them simultaneously.
Page | 17

3D Modeling Scenario
While 3D spatial interactions give the user a means to materialize a design through spatial expression, much of the interaction in CAD requires precise manipulation and is controlled in 2D. SpaceTop allows natural transitions between these interaction modes. The user can start prototyping a model with free-form manipulation. Once fine control is required, the user can select a surface of the 3D model and pull up an editing console in the foreground of the screen. The user can then precisely modify dimensions by dragging a side or typing a number, or choose material properties by touching a 2D palette on the ground.

PRELIMINARY USER EVALUATION
Ten participants (ages 19–29, 2 female) were recruited from a university mailing list; none had previous experience with 3D user interfaces. They were allowed to familiarize themselves with the system until they could perform each action comfortably (3–6 min). The total experiment time per participant was between 70 and 80 min.

Switching between indirect 2D and direct 3D interaction
Twelve partially overlapping colored windows (red, green, blue, or yellow), each containing a shape (triangle, square or star), were shown. Participants were given tasks such as "grab the yellow square and point to its corners" or "trace the outline of the blue triangle". They performed four different, randomized tasks for three spatial window configurations, for a total of 12 trials in each of two blocks. The SpaceTop block used spatial window placement with head-tracking, and participants used a combination of gesture, mouse and keyboard interaction, requiring constant switching between typing, 2D selection and 3D interaction. In the baseline block, windows were shown in the display's 2D plane and only mouse and keyboard interaction was available. Questionnaire responses (5-point Likert scale) indicate that the SpaceTop interactions were easy to learn (3.9). Participants did, however, find them slower (3.2 vs. 4.2) and less accurate (3.2 vs. 4.6) than the baseline. Users' comments include
Page | 18

"after I repeated this task three times (with the same arrangement), my arm starts moving towards the target even before I see it" and "switching to another window is as simple as grabbing another book on my (physical) desk". Another user commented that the physical setup constrained his arm's movement, which tired him more quickly.

Text editing: Search and copy/paste
Participants skimmed the contents of six different document pages placed in the 3D environment. They were then asked to find a specific word and pick-and-drop it into the document on the foreground screen (see Figure 5). Six participants commented that it felt compelling to be able to quickly rifle through a pile of documents with one hand while the other hand interacted with the main active task. One user commented: "it feels like I have a desktop computer and a physical book next to it"; another: "this feels like a natural role division of right/left hand in the physical world". Three users reported that they had a hard time switching their mental models from 2D indirect mapping (touchpad) to 3D direct mapping (spatial interaction), which occurs when the user tries to drag a word out of a shadow.

DISCUSSION
Users' comments suggest that fast switching and bimanual interaction provide compelling experiences, and that users can benefit from spatial memory (task 1). We also gained insights for future improvements. A few users commented that they might perform better with a stereoscopic display, in addition to the aid of the grid and cursor. Although previous work indicates that stereoscopy has limited benefit over a monoscopic display with motion parallax, we plan to also explore a stereoscopic version of SpaceTop. We think the visual representation could be better designed to give users clearer guidance. While the current configuration allows us to rapidly prototype and explore interactions, we plan to improve ergonomics and general usability with a careful design of the physical setup.

CONCLUSIONS AND FUTURE WORK
Page | 19

SpaceTop is a concept that accommodates 3D and conventional 2D (indirect and direct) interactions in a single workspace. We designed interaction and visualization techniques for melding the seams between different interaction modalities and integrating them into modeless workflows. Our application scenarios showcase the power of such integrated workflows, with fast switching between interactions of multiple fidelities and bimanual interactions. We believe that SpaceTop is the beginning of an exploration of a larger field of spatial desktop computing interactions, and that our design principles can be applied to a variety of current and future technologies. We hope this exploration offers guidelines for future interaction designers, allowing better insight into the evolution of the everyday desktop experience.

Inventors of SpaceTop:
Jinha Lee, MIT Media Laboratory and Microsoft Applied Sciences Group
Hiroshi Ishii, MIT Media Laboratory
Alex Olwal, MIT Media Laboratory
Cati Boulanger, Microsoft Applied Sciences Group
Page | 20

Introduction about third tool:
How about reversing the roles and having the digital information reach us instead? I'm sure many of us have had the experience of buying and returning items online. Now you don't have to worry about it. What I have here is an online augmented fitting room: the view you get from a head-mounted or see-through display once the system understands the geometry of your body. This is an upcoming project, and the information I have gathered about it is given below:

Introduce third Tool:-
WYSIWYF – "What You See Is What You Feel"

ABSTRACT
What if the digital information reached us instead? Many of us have had the experience of buying and returning items online. With an online augmented fitting room, you no longer have to worry about it: through a head-mounted or see-through display, the system understands the geometry of your body and lets you try items on virtually.

Keywords
3D Interaction, Augmented Reality and Tangible UI, Screen-based and mobile input, Tactile & Haptic UIs, Transient-based UIs, Tangible UIs, online shopping, buying and returning items, online augmented fitting room, fitting room.

General Terms
Online augmented fitting room, Human Factors, Experimentation.
Page | 21

INTRODUCTION
I'm sure many of us have had the experience of buying and returning items online. Now you don't have to worry about it: what I have here is an online augmented fitting room. This is the view you get from a head-mounted or see-through display. The concept of WYSIWYF – "what you see is what you feel" – has been suggested in the domain of Augmented Reality in an attempt to integrate haptic and visual feedback, using technologies like stereoscopic glasses for 3-D display.

RELATED WORK
Various approaches have been taken to enable users to design in 3D in a more straightforward and intuitive manner, by integrating input and output and by providing users with tangible representations of digital media. Beyond is a design platform that allows users to employ their gestures and physical tools beyond the screen in 3-D computational design. Collapsible tools are used in Beyond so that they can retract and project themselves onto the screen, letting users perceive as if they were inserting the tools into the screen. This design enables users to perceive the workspace in the screen as a 3-D space
Page | 22

within their physical reach, where computational parametric operations and human direct manipulation can occur together. As a result, this interface helps users design and manipulate digital media with the affordances they have with physical tools.

SpaceTop, the see-through 3D desktop, is a 3D spatial operating environment that allows the user to directly interact with his or her virtual desktop. The user can reach into the projected 3D output space with his or her hands to directly manipulate windows. Users can casually open up the see-through 3D desktop and type on the keyboard or use the trackpad as in a traditional 2D operating environment. Windows and files are perceived to be placed in a 3D space between the screen and the input plane. The user can lift his hands to reach the displayed windows and arrange them in this 3D space. A unique combination of a transparent display and a 3D gesture detection algorithm collocates the input space and 3D rendering without tethering or encumbering users with wearable devices. "See-through 3D desktop" is a term for the entire ensemble of software, hardware, and design components necessary to realize this volumetric operating environment.

What is WYSIWYF?
The concept of WYSIWYF – "what you see is what you feel" – has been suggested in the domain of Augmented Reality in an attempt to integrate haptic and visual feedback, using technologies like stereoscopic glasses for 3-D display. Applied to online shopping, it yields an augmented fitting room: a view from a head-mounted or see-through display in which the system understands the geometry of your body.
Page | 23

WYSIWYF IMPLEMENTATION

Display: Prototype LCD with per-pixel transparency
As in SpaceTop, we use a display prototype by Samsung, designed to show graphics on a transparent LCD without a backlight. The 22" transparent LCD displays 1680×1050-pixel images at 60 Hz with 20% light transmission. It provides maximum transparency for white pixels and full opaqueness for black pixels. We use this unique per-pixel transparency to control the opacity of graphical elements, allowing us to design UIs that do not suffer from the limitations of half-silver mirror setups, where pixels are always partially transparent. We ensure that all graphical elements include clearly visible opaque parts, and use additional lights in the physical space behind the screen to improve the visibility of the user's hands and the keyboard.

Hand tracking with depth cameras
One depth camera (Microsoft Kinect) faces the user and tracks the user's hands and face to enable motion parallax. This allows the user to view graphics correctly registered on top of the 3D interaction space in which the hands are placed.
Page | 24

3D model-enabled shopping website:
The shopping website provides a 3D model of each item it sells, which the fitting room can overlay on the user's body.

DISCUSSION
Users' comments suggest that fast switching and bimanual interaction provide compelling experiences, and that users can benefit from spatial memory (task 1). We also gained insights for future improvements. A few users commented that they might perform better with a stereoscopic display, in addition to the aid of the grid and cursor, although previous work indicates that stereoscopy has limited benefit over a monoscopic display with motion parallax. We think the visual representation could be better designed to give users clearer guidance. While the current configuration allows us to rapidly prototype and explore interactions, we plan to improve ergonomics and general usability with a careful design of the physical setup.
Page | 25

CONCLUSIONS AND FUTURE WORK
WYSIWYF is a concept that accommodates 3D interaction in a single workspace. We believe it is the beginning of an exploration of a larger field of buyer and customer interactions, and that our design principles can be applied to a variety of current and future technologies. We hope this exploration offers guidelines for future interaction designers, allowing better insight into the evolution of the everyday online shopping experience.

Inventors of WYSIWYF:
Jinha Lee, MIT Media Laboratory, 75 Amherst St., Cambridge, MA 02139 USA
Daewung Kim (collaboration), MIT Media Laboratory, 75 Amherst St., Cambridge, MA 02139 USA
Page | 26

Introduction about fourth tool:
Taking this idea further, I started to think: instead of just seeing these pixels in our space, how can we make them physical, so that we can touch and feel them? What would such a future look like? At the MIT Media Lab, we created one such physical pixel. In this case, a spherical magnet acts like a 3D pixel in our space, which means that both computers and people can move this object anywhere within this little 3D space. What we did was essentially cancel gravity and control the movement by combining magnetic levitation with mechanical actuation and sensing technologies. By digitally programming the object, we are liberating it from the constraints of time and space: human motions can now be recorded, played back, and left permanently in the physical world. So choreography can be taught physically over distance, and Michael Jordan's famous shooting can be replicated over and over as a physical reality. Students can use this as a tool to learn about complex concepts such as planetary motion and physics, and unlike computer screens or textbooks, this is a real, tangible experience that you can touch and feel, and it's very powerful. What's more exciting than just making what's currently in the computer physical is to start imagining how programming the world will alter even our daily physical activities.
Page | 27

Introduce fourth Tool:-
ZeroN: Mid-Air Tangible Interaction Enabled by Computer-Controlled Magnetic Levitation

ABSTRACT
ZeroN is a new tangible interface element that can be levitated and moved freely by a computer in a three-dimensional space. ZeroN serves as a tangible representation of a 3D coordinate of the virtual world, through which users can see, feel, and control computation. To accomplish this we developed a magnetic control system that can levitate and actuate a permanent magnet in a predefined 3D volume. This is combined with an optical tracking and display system that projects images onto the levitating object. We present applications that explore this new interaction modality. Users are invited to place or move the ZeroN object just as they can place objects on surfaces. For example, users can place the sun above physical
Page | 28

objects to cast digital shadows, or place a planet that will start revolving based on simulated physical conditions. We describe the technology, interaction scenarios and challenges, discuss initial observations, and outline future development.

ACM Classification: H5.2 [Information interfaces and presentation]: User Interfaces.

General terms: Design, Human Factors

Keywords: Tangible Interfaces, 3D UI.

INTRODUCTION
Tangible interfaces attempt to bridge the gap between virtual and physical spaces by embodying the digital in the physical world. Tabletop tangible interfaces have demonstrated a wide range of interaction possibilities and utilities. Despite their compelling qualities, tabletop tangible interfaces share a common constraint: interaction with physical objects is inherently confined to 2D planar surfaces by gravity. This limitation might not appear to be a constraint for many tabletop interfaces, where content is mapped to surface components, but we argue that there are exciting possibilities enabled by supporting true 3D manipulation. There has been some movement in this direction already; researchers are starting to explore interactions with three-dimensional content using the space above tabletop surfaces. In these scenarios input can be sensed in the 3D physical space, but the objects and rendered graphics are still bound to the surfaces. Imagine a physical object that can float, seemingly unconstrained by gravity, and move freely in the air. What would it be like to leave this physical object at a spot in the air, representing a light that casts the virtual shadow of an architectural model, or a planet that will start orbiting? Our motivation is to create such a 3D space, where the computer can control the
Page | 29

3D position and movement of gravitationally unconstrained physical objects that represent digital information. We present a system for tangible interaction in mid-air 3D space. At its core, our goal is to allow users to take the physical components of tabletop tangible interfaces off the surface and place them in the air. To investigate these interaction techniques, we created our first prototype with magnetic levitation technology. We call this new tangible interaction element ZeroN: a magnetically actuated object that can hover and move in an open volume, representing digital objects moving through the 3D coordinates of the virtual world. Users can place or move this object in the air to simulate or affect a 3D computational process, represented as actuation of the object along with accompanying graphical projection. We contribute a technical implementation of magnetic levitation. The technology includes stable long-range magnetic levitation combined with interactive projection, optical and magnetic sensing, and mechanical actuation that realizes a small 'anti-gravity space'. In the following sections, we describe our engineering approach and the current limitations, as well as a roadmap of the development necessary to scale the current interface. We investigate novel interaction techniques through a set of applications we developed with ZeroN. Based on reflections from our user observation, we identify design issues and technical challenges unique to interaction with this untethered levitated object. In the following discussion, we will refer to the levitated object simply as ZeroN and to the entire ensemble as the ZeroN system.
Page | 30

RELATED WORK
Our work draws upon the literature of Tangible Interfaces and of 3D display and interaction techniques. As we touch upon the evolution of tabletop tangible interfaces, we review movements towards employing actuation and 3D space in human-computer interaction.

Tabletop Tangible Interfaces
Underkoffler and Patten have shown how the collaborative manipulation of tangible input elements by multiple users can enhance task performance and creativity in spatial applications, such as architectural simulation and supply chain optimization. Reactable, AudioPad, and DataTiles show the compelling qualities of bimanual interaction in dynamically arranging visual and audio information. In previous tabletop tangible interfaces, while users can provide input by manipulating physical objects, output occurs only through graphical projection. This can cause inconsistency between physical objects and digital information when the state of the underlying digital system changes. Adding actuation to an interface, such that the states of physical objects are coupled with dynamically changing digital states, allows the computer to maintain consistency between the physical and digital states of objects. In Actuated Workbench, an array of computer-controlled electromagnets actuates physical objects on the surface to represent the dynamic status of computation. Planar Manipulator and Augmented Coliseum achieved similar technical capabilities using robotic modules. Recent examples of such actuated tabletop interfaces include Madgets, a system that can actuate complex tangibles composed of multiple parts. Patten's PICO has demonstrated how physical actuation can enable users to improvise mechanical constraints to add computational constraints to the system.
Page | 31

Going Higher
One approach to transitioning 2D modalities to 3D has been using deformable surfaces as input and output. Illuminating Clay employs deformable physical material as an input medium where users can directly manipulate the state of the system. In Lumino, stackable tangible pucks are used to express discrete height as another input modality. While in these systems the computer cannot modify the physical representation, there has been research on adding height as another output component to RGB pixels using computer-controlled actuation. Poupyrev et al. provide an excellent overview of shape displays. To actuate deformable surfaces, Lumen and FEELEX employ arrays of motorized rods that can be raised. Art+Com's kinetic sculpture actuates multiple spheres tethered with strings to create the silhouettes of cars. Despite their compelling qualities as shape displays, these systems share two common limitations as interfaces. First, input is limited to the push and pull of objects, whereas more degrees of freedom of input may be desired in many applications; users might also want to push or drag the displayed object laterally. More importantly, because the objects are physically tethered, it is difficult for users to reach under or above the deformable surface in the interactive space.

Using Space above the Tabletop Surface
Hilliges et al. show that 3D mid-air input can be used to manipulate virtual objects on a tabletop surface using the SecondLight infrastructure. Grossman et al. introduced interaction techniques with a 3D volumetric display. While they demonstrate a potential approach to exploiting real 3D space as an input area, the separation of a user's input from the rendered graphics does not afford direct control as in the physical world, and may lead to ambiguities in the interface. A remedy for this issue of I/O inconsistency may come from technologies that display free-standing volumetric images, such as digital holography. However, these technologies are not yet mature, and even when they can be fully implemented, direct manipulation of these media would be challenging due to the lack of a persistent tangible representation.
Page | 32

Haptic and Magnetic Technologies for 3D Interaction
Studies with haptic devices, such as the Phantom, have shown that accurate force feedback can increase task performance in the context of medical training and 3D modeling. While most of these systems were used with a single monitor or a head-mounted display, Plesniak's system lets users directly touch a 3D holographic display, giving coincident input and output. Despite their compelling practical qualities, tethered devices constrain the degrees of freedom of user input. In addition, constraining the view angle often isolates the user from the real-world context and restricts multi-user scenarios. Magnetic levitation has been researched in the realms of haptic interfaces and robotics to achieve increased degrees of freedom. Berkelman et al. developed high-performance magnetic levitation haptic interfaces that enable the user to better interact with simulated virtual environments. Since their system was designed to be used as a haptic controller for graphical displays, the emphasis was on creating accurate force feedback with a stable magnetic field in a semi-enclosed hemispherical space. Our focus, by contrast, is on achieving collocated I/O by actuating an I/O object along 3D paths through absolute coordinates of the physical space. Consequently, more engineering effort went into actuating a levitated object in an open 3D space in a reasonably stable manner.

3D and Tangible Interaction
Grossman and Wigdor present an excellent taxonomy and framework of 3D tabletop interfaces based on the dimensions of display and input space. Our work aims to explore a realm where both display and input occur in 3D space, mediated by a computer-controlled tangible object, thereby enabling users' direct manipulation. In the taxonomy, the physical proxy was considered an important 2D I/O element that defines user
Page | 33

interaction. However, our work employs a tangible proxy as an active display component to convey 3D information. Therefore, to fully understand the implications of the work, it is necessary to create a new framework based on the spatial properties of physical proxies in tabletop interfaces. We plotted existing tabletop interfaces in figure 3 based on the dimension of the I/O space and whether the tangible elements can be actuated. ZeroN explores this novel design space of tangible interaction in the mid-air space above the surface. While currently limited in resolution and practical quality, we look to study what is possible by using mid-air 3D space for tangible interaction. We aim to create a system where users can interact with 3D information by manipulating a levitated object, without tethering by mechanical armatures or requiring users to wear an optical device such as a head-mounted display.

OVERVIEW
Our system operates over a volume of 38 cm × 38 cm × 9 cm, in which it can levitate, sense, and control the 3D position of ZeroN, a spherical magnet 3.17 cm in diameter covered with a plastic shell onto which digital imagery can be projected. As a result, the digital information bound to the physical object can be seen, felt, and manipulated in the operating volume without requiring users to be tethered by mechanical armatures or to wear optical devices. Due to the current limitation of the levitation range, we made the entire interactive space larger than this 'anti-gravity' space, so that users can interact with ZeroN with reasonable freedom of movement.
Page | 34

TECHNICAL IMPLEMENTATION
The current prototype comprises five key elements, as illustrated in figure 4:

• A magnetic levitator (a coil driven by PWM signals) that suspends a magnetic object and can change the object's vertical suspension distance on command.
• A 2-axis linear actuation stage that laterally positions the magnetic levitator, plus one additional linear actuator for moving the coil vertically.
• Stereo cameras that track ZeroN's 3D position.
• A depth camera that detects users' hand poses.
• A tabletop interface displaying a scene coordinated with the position of the suspended object and of other objects placed on the table.

Untethered 3D Actuation
The ZeroN system implements untethered 3D actuation of a physical object with magnetic control and mechanical actuation. Vertical motion is achieved by combining magnetic position control, which can levitate and move a magnet relative to the coil, with mechanical actuation that can move the entire coil relative to the system. The two approaches complement each other. Although the magnetic approach can control the position with lower latency and points towards scalable magnetic propulsion technology, a prototype with purely magnetic control showed a limited range: when the permanent magnet gets too close to the coil, it sticks to the coil even when the coil is not energized. 2D lateral motion is achieved with a plotter driven by two stepper motors. Given a 3D path as input, the system first projects the path onto each dimension and linearly interpolates the points to create a smooth trajectory. It then calculates the velocity and acceleration of each axis of actuation as a function of time. With this data, the system can actuate the object along a 3D path approximately identical to the input path.
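The interpolation step just described can be sketched as below, under the simple assumptions of a constant travel speed and a fixed control period; the function name, speed, and time step are illustrative, not values from the actual ZeroN implementation.

```python
import numpy as np

def resample_path(waypoints, speed=0.10, dt=0.01):
    """Linearly interpolate 3D waypoints into a timed trajectory and
    differentiate it to get per-axis velocity and acceleration.

    waypoints: (N, 3) array in metres; speed in m/s; dt in seconds.
    """
    wp = np.asarray(waypoints, dtype=float)
    seg_len = np.linalg.norm(np.diff(wp, axis=0), axis=1)
    t_knots = np.concatenate(([0.0], np.cumsum(seg_len) / speed))
    t = np.arange(0.0, t_knots[-1], dt)
    # Project the path onto each axis and interpolate independently.
    pos = np.column_stack([np.interp(t, t_knots, wp[:, i]) for i in range(3)])
    vel = np.gradient(pos, dt, axis=0)  # per-axis velocity over time
    acc = np.gradient(vel, dt, axis=0)  # per-axis acceleration over time
    return t, pos, vel, acc

# Example: a short L-shaped path through the interactive volume.
t, pos, vel, acc = resample_path([[0.0, 0.0, 0.05],
                                  [0.1, 0.0, 0.05],
                                  [0.1, 0.1, 0.08]])
```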
Page | 35

Magnetic Levitation and Vertical Control
We have developed a custom electromagnetic suspension system to provide robust sensing, levitation, and vertical control. It includes a microcontroller implementing a proportional-integral-derivative (PID) control loop whose parameters can be set through a serial interface. In particular, ZeroN's suspension distance is set through this interface by the UI coordinator. The PID controller drives the electromagnet through a coil driver using pulse-width modulation (PWM). The field generated by the electromagnet imposes an attractive (or repulsive) force on the suspended magnetic object. By dynamically canceling gravity through the magnetic force exerted on ZeroN, the control loop keeps it suspended at a given distance from the electromagnet. This distance is determined by measuring the magnetic field immediately beneath the solenoid.
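A minimal software model of such a loop is sketched below, assuming the setpoint and measurement are suspension distances and the output is a PWM duty cycle for the coil driver; the gains and loop rate are placeholder assumptions that would need tuning on real hardware.

```python
class LevitationPID:
    """Minimal PID loop for vertical suspension control (a sketch)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        """One control step: distances in, PWM duty cycle (0..1) out."""
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        duty = self.kp * err + self.ki * self.integral + self.kd * deriv
        return min(max(duty, 0.0), 1.0)  # clamp to a valid PWM range

# A 1 kHz loop with assumed gains; the real controller runs on the
# microcontroller and its parameters are set over the serial interface.
pid = LevitationPID(kp=8.0, ki=0.5, kd=0.9, dt=0.001)
```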
Page | 36

Magnetic Range Sensing with a Hall-effect Sensor
Properly measuring the distance of the magnet is the key to stable levitation and vertical control. Since the magnetic field drops off as the cube of the distance from the source, it is challenging to convert the strength of the magnetic field into the vertical position of the magnet. To linearize the signals sensed by the hall-effect sensor, we developed a two-step-gain logarithmic amplifier. It logarithmically amplifies the signal with one of two different gains, depending on whether the signal exceeds a threshold voltage.

Designing the ZeroN Object
We used a spherical dipole magnet as the levitating object. Due to the geometry of the magnetic field, users can move the spherical dipole magnet while keeping it suspended, but it falls when they tilt it. To enable input of a user's desired orientation, a loose plastic layer covers the magnet, as illustrated in figure 7.

Stereo Tracking of 3D Position and 1D Orientation
We used two modified Sony PS3 Eye cameras to track the 3D position of ZeroN, using computer vision techniques on a pair of infrared images as in figure 8. To measure orientation, we applied a stripe of retro-reflective tape to the surface of ZeroN. We chose this approach because it was both technically simple and robust, and didn't add significant weight to ZeroN: an important factor for a levitating object.
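Since a dipole field falls off roughly as the inverse cube of distance, the sensed field can be inverted to a distance estimate once a calibration constant is known. The sketch below shows this inversion, together with a crude two-gain stage in the spirit of the amplifier described earlier on this page; all constants and names are illustrative assumptions.

```python
import math

K_FIELD = 1200.0      # assumed field reading at 1 cm separation (calibration)
GAIN_THRESHOLD = 2.5  # assumed threshold voltage of the two-gain stage
LOW_GAIN, HIGH_GAIN = 1.0, 8.0

def amplify(v):
    """Two-step-gain logarithmic amplification of the raw sensor voltage:
    weak (distant) signals get the high gain, strong signals the low one."""
    gain = HIGH_GAIN if v < GAIN_THRESHOLD else LOW_GAIN
    return gain * math.log1p(v)

def field_to_distance(b_reading):
    """Invert the ~1/d^3 falloff of a dipole field to a distance in cm."""
    return (K_FIELD / b_reading) ** (1.0 / 3.0)

# Example: a reading of 150 units implies a separation of 2 cm.
print(field_to_distance(150.0))
```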
Page | 37

Determining Modes
A challenge in emulating the 'anti-gravity space' is determining whether ZeroN is being moved by a user or is naturally wobbling. Currently, ZeroN sways laterally when actuated, and the system can misinterpret this movement as user input and keep updating the stable point of suspension, causing ZeroN to drift around. To resolve this issue, we classify three modes of operation (idle, grabbed, grabbed-for-long) based on whether, and for how long, the user is holding the object. In idle mode, when ZeroN is not grabbed by the user, the control system acts to keep the position or trajectory of the levitating object as programmed by the computer. When ZeroN is grabbed, the system updates the stable position to the current position specified by the user, so that the user can release the object without dropping it. If the user grabs the object for longer than 2.5 s, specific functions such as record and playback are triggered.
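The mode logic above can be modelled as a small state machine keyed to the hand-detection result and a 2.5-second timer, as in the hypothetical sketch below; the class and names are illustrative, not the system's actual code.

```python
import time

IDLE, GRABBED, LONG_HOLD = range(3)
LONG_HOLD_SECS = 2.5  # threshold used by the prototype

class ModeClassifier:
    """Classify ZeroN's interaction mode from hand-detection events."""

    def __init__(self):
        self.mode = IDLE
        self.grab_start = None

    def update(self, hand_on_object, now=None):
        """Call once per frame with the hand-detection result."""
        now = time.monotonic() if now is None else now
        if not hand_on_object:
            self.mode, self.grab_start = IDLE, None   # back to programmed motion
        elif self.grab_start is None:
            self.mode, self.grab_start = GRABBED, now  # user just took hold
        elif now - self.grab_start >= LONG_HOLD_SECS:
            self.mode = LONG_HOLD                      # trigger record/playback
        return self.mode
```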
Page | 38

While the stereo IR cameras were useful for obtaining the accurate position and orientation of the object via the retro-reflective tape, it was challenging to distinguish the user's hands from the background or other objects. We therefore use an additional depth camera (Microsoft Kinect) to detect the user's hand pose, with computer vision techniques built on top of open-source libraries. Our software extracts binary contours of objects within a predefined depth range and finds the blob created between the user's hands and the levitated object.

Calibration of 3D Sensing, Projection, and Actuation
To ensure real-time interaction, careful calibration between the cameras, the projectors and the 3D actuation system is essential in our implementation. After finding correspondences between the two cameras with checkerboard patterns, we register the cameras to the coordinate frame of the interactive space by positioning the ZeroN object at each of four fixed, non-coplanar points. Similarly, to register each projector to real-world coordinates, we position ZeroN at the four non-coplanar calibration points and move a projected image of a circle towards it. When the circular image is overlaid on ZeroN, we increase or decrease the size of the circle until it matches the size of ZeroN. This data is used to find two homogeneous matrices: one that transforms raw camera coordinates into real-world coordinates of the interactive space, and one that maps real-world coordinates to the x, y position and diameter of the projected circle.
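With four or more non-coplanar correspondences, a camera-to-world transform of this kind can be recovered as a least-squares affine map, as sketched below. This mirrors the spirit of the calibration above but is an assumed formulation, not the system's actual code.

```python
import numpy as np

def fit_affine(cam_pts, world_pts):
    """Least-squares affine map from camera to world coordinates.

    cam_pts, world_pts: (N, 3) arrays of corresponding points, with
    N >= 4 and non-coplanar (as with the four calibration points).
    Returns a 3x4 matrix A such that world ~= A @ [x, y, z, 1].
    """
    cam = np.asarray(cam_pts, dtype=float)
    world = np.asarray(world_pts, dtype=float)
    homog = np.hstack([cam, np.ones((len(cam), 1))])    # (N, 4)
    coeffs, *_ = np.linalg.lstsq(homog, world, rcond=None)  # (4, 3)
    return coeffs.T                                      # (3, 4)

def cam_to_world(A, p):
    """Apply the fitted map to one camera-space point."""
    return A @ np.append(np.asarray(p, dtype=float), 1.0)
```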
Page | 39

We have not made much effort to optimally determine the focal plane of the projected image; focusing the projectors roughly in the middle of the interactive space is sufficient.

Engineering the 'Anti-Gravity' Space
These various sensing and actuation techniques coordinate to create a seamless 'anti-gravity' I/O space. When the user grabs ZeroN and places it within the defined space of the system, the system tracks the 3D position of the object and determines whether the user's hand is grabbing it. The electromagnet is then carried to the 2D position of ZeroN by the 2-axis actuators, and is programmed to set a new stable point of suspension at the sensed vertical position. As a result, this system creates what we will call a small 'anti-gravity' space, wherein people can place an object in a volume seemingly
Page | 40

unconstrained by gravity. The user's hands and other non-magnetic materials do not affect levitation. Since the levitation controller acts to keep the floating object at a given height, users experience the sensation of an invisible but very tangible mechanical connection between the levitated magnet and a fixed point in space that can be continually updated.

3D POINT AND PATH DISPLAY
ZeroN serves as a dynamic tangible representation of a 3D coordinate, without being tethered by a mechanical armature. The 3D position of ZeroN may be updated by computer commands to present dynamic movements or curved lines in 3D space, such as the flight path of an airplane or the orbits of planets. Graphical images or icons, such as a camera or the pattern of a planet, may be projected onto the white surface of the levitating ZeroN. These graphical images can be animated or 'tilted' to display a change of orientation. This compensates for a limitation of the current magnetic actuation system, which can only control the 3D position of the magnet and has little control over its orientation.
Page | 41

INTERACTION
We have developed a 3D tangible interaction language that closely resembles how people interact with physical objects on a 2D surface – put, move, rotate, and drag – which now serves as a standard metaphor, widely used in many interaction design domains including GUIs and tabletop interfaces. We list the vocabulary of our interaction language (figure 12).

Place
One can place ZeroN in the air, suspending it at an arbitrary 3D position within the interactive space.

Translate
Users can also move ZeroN to another position in the anti-gravity space without disturbing its ability to levitate.

Rotate
When users rotate the plastic shell covering the spherical magnet, digital images projected on ZeroN rotate accordingly.

Hold
Users can hold or block ZeroN to impede computer actuation. This can be interpreted as a computational constraint, as also shown in PICO.

Long Hold
We implemented a long-hold gesture that can be used to initiate a specific function. For example, in a video recording application, users could hold ZeroN for longer than 2.5 seconds to initiate recording, and release it to enter "play-back" mode.

Attaching / Detaching Digital Information to ZeroN
We borrowed a gesture for attaching / detaching digital items from tabletop interfaces. It is challenging to interact with multiple information clusters, since the current system can only levitate one object. For instance, in the urban planning simulation application, users might first want to use ZeroN as the Sun to control lighting, and then as a camera to render the scene. Users
Page | 42

can attach ZeroN to a digital item projected on the tabletop surface just by moving ZeroN close to the digital item to be bound. To unbind a digital item from ZeroN, users can use a shaking gesture or remove ZeroN from the interactive space.

Interaction with Digital Shadows
We aim to seamlessly incorporate ZeroN into existing tabletop tangible interfaces. One of the challenges is to provide users with a semantic link between the levitated object and the tabletop tangible interface on the 2D surface. Since ZeroN is not physically in contact with the tabletop system, it is hard to recognize the relative position of ZeroN with respect to the other objects placed on the ground. We designed an interactive digital shadow to provide users with a visible link between ZeroN and the other parts of the tabletop tangible interface. For instance, the levitating ZeroN itself can cast a digital shadow whose size is mapped to the height of the object (see figure 13). For the time being, however, this feature is not yet incorporated in the application scenarios.

APPLICATIONS AND USER REFLECTION
We explore the previously described interaction techniques in the context of several categories of applications, described below. While the physics and architecture simulations allow users to begin using ZeroN to address practical problems, the motion prototyping and ZeroN pong applications are proofs of concept that demonstrate the interactions one might have with ZeroN.
Page | 43

Physics Simulation and Education
ZeroN can serve as a tangible physics simulator by displaying and actuating physical objects under computationally controlled physical conditions. Dynamic computer simulations can thus become tangible reality, which had previously been possible only in the virtual world. More importantly, users can interrupt or affect the simulation process by blocking the actuation with their hands or by introducing other physical objects into the ZeroN space.

Understanding Kepler's Laws
In this application, users can simulate a planet's movement in the solar system by placing, at the simulation's center, a static object that represents the center of mass as the Sun, around which ZeroN will revolve like a planet. Users can change the distance between the Sun and the planet, which makes ZeroN snap to another orbit. The resulting changes in motion and speed can be observed and felt. A digital projection shows the area swept out by the line joining ZeroN and the Sun during a certain period of time, confirming Kepler's 2nd law (see figure 15).
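The underlying simulation can be as simple as integrating Newtonian gravity around a fixed Sun and reporting the swept-area rate, which stays constant under Kepler's second law. The sketch below assumes normalized units and a symplectic-Euler integrator; it is an illustration, not the application's actual code.

```python
import numpy as np

def step_orbit(pos, vel, sun_pos, mu=1.0, dt=1e-3):
    """One symplectic-Euler step of a planet orbiting a fixed Sun.

    pos, vel: planet state (3-vectors); mu: gravitational parameter.
    The swept-area rate |r x v| / 2 is constant along the orbit, which
    is exactly the Kepler's-2nd-law quantity the projection displays.
    """
    r = pos - sun_pos
    acc = -mu * r / np.linalg.norm(r) ** 3  # inverse-square attraction
    vel = vel + acc * dt                    # update velocity first...
    pos = pos + vel * dt                    # ...then position (symplectic)
    areal_rate = 0.5 * np.linalg.norm(np.cross(r, vel))
    return pos, vel, areal_rate

# Example: a circular orbit of radius 1 around a Sun at the origin.
pos, vel = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
pos, vel, areal_rate = step_orbit(pos, vel, np.zeros(3))
```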
Page | 44

Three-Body Problem
In this application, users can generate a gravity field by introducing multiple passive objects that represent fixed centers of gravity. A ZeroN placed next to the objects will orbit around them based on the result of the 3-body simulation. Users can add to or change the gravitational field simply by placing more passive objects, which are identified by the tabletop interface setup (see figure 15).

Architectural Planning
While there has been much research exploring tangible interfaces in the space of architectural planning, some of the essential components, such as lights or cameras, cannot be represented as tangible objects that can be directly manipulated. For instance, the Urp system allows users to directly control the arrangement of physical buildings, but lighting can only be controlled by rotating a separate time dial. While it is not our goal to argue that direct manipulation outperforms indirect manipulation, there are certainly various scenarios where direct manipulation of a tangible representation is important. We developed two applications for gathering users' feedback.

Lighting Control
We developed an application for controlling external architectural lighting, in which users can grab and place a Sun in the air to control the digital shadows cast by physical models
Page | 45

on the tabletop surface. The computer can simulate changes in the position of the lighting, such as changes over the course of a day, and the Sun object will be actuated to reflect these changes.

Camera Path Control
Users can create 3D camera paths for rendering virtual scenes, using ZeroN as a camera. Attaching ZeroN to the camera icon displayed on the surface turns the Sun into a camera object. Users can then hold ZeroN for a number of seconds in one position to initiate recording. When users draw a 3D path in the air and release ZeroN, the camera is sent back to its initial position and then moved along the previously recorded 3D trajectory. On an additional screen, users can see the virtual scene of their model from the camera's perspective in real time. If users want to edit the path, they can intervene in the camera's motion and redraw another path starting from the camera's exact current position.

3D Motion Prototyping
Creating and editing 3D motion for animation is a long and complex process with conventional interfaces, requiring expert knowledge of the software even for simple prototyping. With record-and-playback interaction, users can easily prototype the 3D movement of an object and watch it play back in the real world. The motion can also be mapped to a 3D digital character that moves accordingly on the screen within a dynamic virtual environment. As a result, users can not only see but also feel the 3D motion of the object they created. They can go through this interaction with a simple series of gestures: long-hold and release.
Page | 46

Entertainment: Tangible 3D Pong in Physical Space
Since the movement of a physical object can be arbitrarily programmed, ZeroN can be used for digital entertainment. We partially implemented and demonstrated a Tangible 3D Pong application with ZeroN as the ping-pong ball. In this scenario, users play a computer-enhanced pong game with a floating ball whose physical behavior is computationally programmed. Users can hit or block the movement of ZeroN to change the trajectory of the ball, and they can add computational constraints to the game by placing a physical object in the interactive space, as in figure 18. While this partially implemented application exposes interesting challenges, it suggests a new potential infrastructure for computer entertainment, in which humans and computation, embodied in the motion of physical objects, are in a tight interaction loop.

INITIAL REFLECTION AND DISCUSSION
We demonstrated our prototype to users to gather initial feedback and recruited several participants to try out each application. The purpose of this study was to evaluate our design rather than to demonstrate the practicality of each application. Below we discuss several interesting issues unique to this system that we discovered through observation.

Leaving a Physical Object in the Air
In the camera path control application, users appreciated that they could leave a physical camera object in the air and review and edit the trajectory in a tangible way. Some commented that latency in the electromagnet's stability update (between the user's displacement of the object and the electromagnet's update of the stable position) creates confusion. In
INITIAL REFLECTION AND DISCUSSION
We demonstrated our prototype to users to gather initial feedback, recruiting several participants to try out each application. The purpose of this study was to evaluate our design rather than to establish the practicality of each application. Below we discuss several interesting issues that we discovered through this observation.

Leaving a Physical Object in the Air
In the camera path control application, users appreciated the fact that they could leave a physical camera object in the air and review and edit the trajectory in a tangible way. Some commented that latency in the electromagnet's stability update (between a user's displacement of the object and the electromagnet's update of the stable position) created confusion. In the lighting control application, a user commented that a system that lets the object be held in a position in the air would make discussion with a collaborator easier. Many participants also pointed out the issue of lateral oscillation, which we are working to improve.

Interaction Legibility
In the physics education application, several users commented that not being able to see the physical relationship between the "planets" made it harder to anticipate how to interact with the system, or what would happen if they touched and moved its parts. Being able to actuate an object without mechanical linkages in free space allows more degrees of freedom of movement and allows access from all orientations; on the other hand, it decreases the legibility of interaction by making the mechanical linkages invisible. In contrast, with a historical orrery (figure 19), where the movement of the "planets" is constrained by mechanical connections, users can immediately understand the freedom of movement that the mechanical structure affords. One possible way to compensate for this loss of legibility is to rely on graphical projection or subtle movements of the objects to indicate the constraints on movement. Carefully choosing applications where the gain in freedom outweighs the loss of legibility was our criterion for selecting application scenarios.

TECHNICAL EVALUATION
Maximum Levitation Range
The maximum range of magnetic levitation is limited by several factors. While our circuits can handle higher currents than those currently used, an increased maximum range is limited by the heat generated in the coils.
We used a 24 V power supply from which we drew 2 A (about 48 W). Above that power, the heat generated by the electromagnet begins to melt its form core. The current prototype can levitate objects up to 7.4 cm, measured from the bottom of the hall-effect sensor to the center of our spherical magnet. To scale up the system, a cooling system would need to be added on top of the coil.

Speed of Actuation
The motors in the system can carry the electromagnet with a maximum velocity of 30.5 cm/s and a top acceleration of 6.1 m/s². The dynamic response of ZeroN's inertia is the main limit on acceleration: because of the response properties of this second-order system (the electromagnet and ZeroN), larger accelerations fail to overcome ZeroN's inertia and lead to ZeroN being dropped. Experiments measuring this limit show that a lateral acceleration of 3.9 m/s² will drop the ZeroN.

Resolution and Oscillation
If we frame our system as a 3D volumetric (physical) display in which only one cluster of voxels can be turned on at a time, we need to define the resolution of the system. Our 2D linear actuators can position the electromagnet at 250,000 different positions on each axis, and there is no theoretical limit to the resolution of vertical control. However, vertical and horizontal oscillation of the levitated object makes it difficult to state this as the true system resolution. In the current prototype, ZeroN oscillates within 1.4 cm horizontally and 0.2 cm vertically around the set position when moved. We call the regions swept by oscillation "blurry," with a "focused" area at the center.

Robustness of Magnetic Levitation
Robust levitation is a key factor in giving users the sensation of an invisible mechanical connection with a fixed point in the air. We conducted a series of experiments to measure how much force can be applied to ZeroN without displacing it from a stable point of suspension. For these experiments, we attached the levitated magnet to a linear spring scale that can measure up to 1.2 N and pulled it in the directions of 0° (horizontal), 15°, 30°, 45°, 60°, 75°, and 90° (vertical). The average of five measurements is plotted in figure 20.
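The oscillation and robustness figures above are ultimately properties of the feedback loop that holds the magnet in place: magnetic suspension is unstable without active control, and the controller's stiffness and damping set how far the object wanders around its set position. As a rough illustration (not the actual ZeroN controller), the sketch below shows a 1D levitation loop that reads a hall-effect sensor and adjusts coil current with PD control; the sensor/coil interfaces and the gains are hypothetical.

```python
import time

# Illustrative PD gains; the real controller's gains are not published here.
KP = 120.0    # proportional gain (A per metre of gap error), sets "stiffness"
KD = 8.0      # derivative gain (A per m/s), provides damping against oscillation
BIAS = 1.2    # coil current (A) that roughly cancels gravity at the setpoint

def levitation_loop(sensor, coil, target_gap, hz=1000):
    """Hold the magnet at target_gap below the coil (1D vertical control)."""
    prev_err, prev_t = 0.0, time.time()
    while True:
        gap = sensor.read_gap()      # hypothetical hall-sensor reading (m)
        err = gap - target_gap       # magnet too low -> err > 0 -> more current
        now = time.time()
        d_err = (err - prev_err) / max(now - prev_t, 1e-4)
        # More current pulls the magnet up; the PD terms stabilize the
        # otherwise unstable equilibrium.
        coil.set_current(BIAS + KP * err + KD * d_err)
        prev_err, prev_t = err, now
        time.sleep(1.0 / hz)
```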
TECHNICAL LIMITATIONS AND FUTURE WORK
Lateral oscillation was reported as the biggest issue to correct in our application scenarios. We plan to implement satellite coils around the main electromagnet that can impose a magnetic force in a lateral direction, both to eliminate lateral wiggling and to provide better haptic feedback. Another limitation of the current prototype is its limited vertical actuation range. This can be addressed by carefully designing the magnetic controller with better range-sensing capabilities and choosing a geometry for the electromagnet that increases the range without overheating the coil.

A desirable extension is to use magnetic sensing with an array of hall-effect sensors for 3D tracking, which would provide more robust, lower-latency object tracking without occlusion. We encountered difficulties using hall-effect sensor arrays in conjunction with our magnetic levitation system because of the strong magnetic field distortions caused by our electromagnets. We believe this problem can be overcome in the future by subtracting the field generated by the electromagnets through precise calibration of the dynamic magnetic field (a sketch of this subtraction appears at the end of this section). To avoid these difficulties in the short term, we added vision tracking to our prototype, even though this limits hand input to areas that do not occlude the camera's view.

Levitating Multiple Objects
While the current research focused on identifying the challenges of interacting with one levitated object, it is natural to imagine interaction with multiple objects in mid-air. A scalable solution would be to use an array of solenoids. In such a setup, a magnet can be positioned at, or moved to, an arbitrary position between the centers of two or more solenoids by passing the appropriate amount of current through each solenoid; it is analogous to pulling and hanging a ball with multiple invisible magnetic strings connected to the centers of the solenoids. However, it will be challenging to position two or more magnets in close proximity, or at similar x, y coordinates, because of magnetic field interference. One approach to this issue might be to levitate switchable magnets, turning them on and off to time-multiplex the influence that each object receives from the solenoids. We leave this concept for future research.
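The hall-array tracking proposed above hinges on removing the electromagnet's own contribution from each sensor reading. Here is a minimal sketch of that subtraction step, assuming a calibrated per-sensor field model; the dipole-like falloff, the constant k, and the function interfaces are placeholders for illustration, not the calibration procedure the authors describe.

```python
import numpy as np

def coil_field_model(sensor_pos, coil_pos, coil_current, k=1e-7):
    """Calibrated estimate of the electromagnet's field magnitude at a sensor.
    A dipole-like 1/d^3 falloff is assumed here purely for illustration."""
    d = np.linalg.norm(sensor_pos - coil_pos)
    return k * coil_current / d**3

def levitated_magnet_field(readings, sensor_positions, coil_pos, coil_current):
    """Subtract the electromagnet's predicted field from each raw reading,
    leaving (approximately) only the levitated magnet's contribution,
    which a tracker could then fit for a 3D position estimate."""
    return np.array([
        b - coil_field_model(p, coil_pos, coil_current)
        for b, p in zip(readings, sensor_positions)
    ])
```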
CONCLUSION
This work presents the concept of 3D mid-air tangible interaction. To explore this concept, we developed a magnetic control system that can levitate and actuate a permanent magnet in three-dimensional space, combined with an optical tracking and display system that projects images onto the levitating object. We extended interaction scenarios previously constrained to the 2D tabletop into mid-air space and developed novel interaction techniques.

Raising tabletop tangible interfaces into the 3D space above the surface opens up many opportunities and leaves many interaction design challenges. The focus of this work was to explore these interaction modalities, and although the current applications present many challenges, we are encouraged by what the current system enables and will continue to develop scalable mid-air tangible interfaces. We also envision that ZeroN could be extended to the manipulation of holographic displays: when 3D display technologies mature, levitated objects could be directly coupled with holographic images projected in the air. We believe that ZeroN is the beginning of an exploration of this space within the larger field of future interaction design. One could imagine interfaces where discrete objects become like 3D pixels, allowing users to create and manipulate forms with their hands.

Inventors of ZeroN
Jinha Lee and Rehmi Post, MIT Media Laboratory, 75 Amherst St., Cambridge, MA 02139 USA
Advisor: Hiroshi Ishii, MIT Media Laboratory, 75 Amherst St., Cambridge, MA 02139 USA
RECAP: All Tools in Brief
The tools are:
• Beyond – Collapsible Tools and Gestures for Computational Design
• SpaceTop: Integrating 2D and Spatial 3D Interactions in a See-through Desktop Environment
• WYSIWYF – "What you see is what you feel"
• ZeroN: Mid-Air Tangible Interaction Enabled by Computer-Controlled Magnetic Levitation

Disadvantages of All Tools
I. Once released, cost will be a major barrier to purchasing these products.
II. They are single-user interfaces; multiple users cannot access a tool at the same time.
III. The tools are not portable. Only Beyond is relatively lightweight, and even it requires a heavy camera.
IV. They are not yet user-friendly and can be operated only by trained people.
V. They require a properly prepared environment; so far they have been tested only in the lab.
VI. Only people aged 18 or above may use these products [age limit specified by the lab].

"Today, we started by talking about the boundary, but if we remove this boundary, the only boundary left is our imagination."
REFERENCES

References for Beyond – Collapsible Tools and Gestures for Computational Design
[1] Ishii, H. and Ullmer, B. Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms. Proceedings of CHI '97, ACM Press, 1997, 234-241.
[2] Bae, S., Balakrishnan, R., and Singh, K. ILoveSketch: As-natural-as-possible sketching system for creating 3D curve models. ACM Symposium on User Interface Software and Technology, 2008.
[3] Oblong g-speak. http://www.oblong.com
[4] Koike, H., Xinlei, C., Nakanishi, Y., Oka, K., and Sato, Y. Two-handed drawing on augmented desk. In CHI '02 Extended Abstracts on Human Factors in Computing Systems, New York, NY, USA, 2002, 760-761.
[5] Mistry, P., Sekiya, K., and Bradshaw, A. Inktuitive: An Intuitive Physical Design Workspace. In Proceedings of the 4th International Conference on Intelligent Environments (IE08), 2008.
[6] Ishii, H., Underkoffler, J., Chak, D., Piper, B., Ben-Joseph, E., Yeung, L., and Kanji, Z. Augmented urban planning workbench: overlaying drawings, physical models and digital simulation. In Proceedings of ISMAR, 2002.
[7] Yokokohji, Y., Hollis, R. L., and Kanade, T. Vision-based visual/haptic registration for WYSIWYF display. Intelligent Robots and Systems, 1996.
[8] Kamuro, S., Minamizawa, K., Kawakami, N., and Tachi, S. Pen de Touch. SIGGRAPH '09: Posters, 2009.
[9] Inami, M., Kawakami, N., Sekiguchi, D., Yanagida, Y., Maeda, T., and Tachi, S. Visuo-haptic display using head-mounted projector. Proceedings of IEEE Virtual Reality 2000, 233-240.
[10] Plesniak, W. J., Pappu, R. S., and Benton, S. A. Haptic holography: a primitive computational plastic. Proceedings of the IEEE, 91(9), Sept. 2003, 1443-1456.
[11] FRONT Design, Sketch Furniture. http://www.youtube.com/watch?v=8zP1em1dg5k
[12] Igarashi, T., Matsuoka, S., and Tanaka, H. Teddy: a sketching interface for 3D freeform design. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, ACM Press/Addison-Wesley, New York, NY, 1999, 409-416.
[13] Igarashi, T. and Hughes, J. F. A suggestive interface for 3D drawing. UIST 2001, 173-181.
[14] SensAble Technologies PHANTOM. http://www.sensable.com/haptic-phantom-desktop.htm
[15] Lee, J. Head Tracking Method Using Wii Remote. http://johnnylee.net/projects/wii/
[16] Aliakseyeu, D., Martens, J., and Rauterberg, M. A computer support tool for the early stages of architectural design. Interacting with Computers, 18(4), July 2006, 528-555.
[17] Wang, Y., Biderman, A., Piper, B., Ratti, C., and Ishii, H. Tangible User Interfaces (TUIs): A Novel Paradigm for GIS. Transactions in GIS, 2004, 407-421.
[18] Leithinger, D., Kumpf, A., and Ishii, H. Relief. http://tangible.media.mit.edu/project.php?recid=132

References for SpaceTop: Integrating 2D and Spatial 3D Interactions in a See-through Desktop Environment
[1] Agarawala, A. and Balakrishnan, R. 2006. Keepin' it real: pushing the desktop metaphor with physics, piles and the pen. CHI '06, 1283-1292.
[2] Benko, H. and Feiner, S. 2006. Balloon selection: A multi-finger technique for accurate low-fatigue 3D selections. 3DUI '06, 79-86.
[3] Benko, H., Ishak, E., and Feiner, S. 2005. Cross-dimensional gestural interaction techniques for hybrid immersive environments. VR '05, 209-216.
[4] Hachet, M., Bossavit, B., Cohé, A., and Rivière, J. 2011. Toucheo: multitouch and stereo combined in a seamless workspace. UIST '11, 587-592.
[5] Hilliges, O., Kim, D., Izadi, S., Weiss, M., and Wilson, A. 2012. HoloDesk: direct 3D interactions with a situated see-through display. CHI '12, 1283-1292.
[6] Olwal, A., Lindfors, C., Gustafsson, J., Kjellberg, T., and Mattsson, L. 2005. ASTOR: An autostereoscopic optical see-through augmented reality system. ISMAR '05, 24-27.
[7] Robertson, G., Czerwinski, M., Larson, K., Robbins, D., Thiel, D., and Dantzich, M. 1998. Data Mountain: using spatial memory for document management. UIST '98, 153-162.
[8] Schmandt, C. 1983. Spatial input/display correspondence in a stereoscopic computer graphic workstation. SIGGRAPH '83, 253-261.
[9] Treskunov, A., Kim, S. W., and Marti, S. 2011. Range Camera for Simple Behind-Display Interaction. IAPR MVA '11, 160-163.
[10] Wilson, A. Using a depth camera as a touch sensor. ITS '10, 69-72.
[11] Wilson, A. 2006. Robust computer vision-based detection of pinching for one and two-handed gesture input. UIST '06, 255-258.
References for WYSIWYF – "What you see is what you feel"
[1] http://www.ted.com/speakers/jinha_lee
[2] http://tangible.media.mit.edu/
[3] http://vimeo.com/60619666
[4] http://www.asdfnews.com/#/
[5] http://leejinha.com/WYCIWYW
[6] http://leejinha.com/ABOUT
[7] Benko, H., Ishak, E., and Feiner, S. 2005. Cross-dimensional gestural interaction techniques for hybrid immersive environments. VR '05, 209-216.
[8] Hachet, M., Bossavit, B., Cohé, A., and Rivière, J. 2011. Toucheo: multitouch and stereo combined in a seamless workspace. UIST '11, 587-592.

References for ZeroN: Mid-Air Tangible Interaction Enabled by Computer-Controlled Magnetic Levitation
1. Baudisch, P., Becker, T., and Rudeck, F. 2010. Lumino: tangible building blocks based on glass fiber bundles. In ACM SIGGRAPH 2010 Emerging Technologies (SIGGRAPH '10), ACM, New York, NY, USA, Article 16.
2. Berkelman, P. J., Butler, Z. J., and Hollis, R. L. Design of a Hemispherical Magnetic Levitation Haptic Interface Device. 1996 ASME IMECE, Atlanta, DSC-Vol. 58, 483-488.
3. Grossman, T. and Balakrishnan, R. 2006. The design and evaluation of selection techniques for 3D volumetric displays. In ACM UIST '06, 3-12.
4. Grossman, T. and Wigdor, D. Going deeper: a taxonomy of 3D on the tabletop. In IEEE Tabletop '07, 2007, 137-144.
5. Hilliges, O., Izadi, S., Wilson, A. D., Hodges, S., Garcia-Mendoza, A., and Butz, A. 2009. Interactions in the air: adding further depth to interactive tabletops. In Proceedings of the 22nd Annual ACM Symposium on User Interface Software and Technology (UIST '09), ACM, New York, NY, 139-148.
6. Hollis, R. L. and Salcudean, S. E. 1993. Lorentz levitation technology: a new approach to fine motion robotics, teleoperation, haptic interfaces, and vibration isolation. In Proc. 6th Int'l Symposium on Robotics Research, October 2-5, 1993.
7. Ishii, H. and Ullmer, B. 1997. Tangible Bits: towards seamless interfaces between people, bits and atoms. In Proceedings of CHI '97, ACM, New York, NY, 234-241.
8. Iwata, H., Yano, H., Nakaizumi, F., and Kawamura, R. 2001. Project FEELEX: adding haptic surface to graphics. SIGGRAPH '01.
9. Jordà, S. 2010. The reactable: tangible and tabletop music performance. In Proceedings of CHI EA '10, ACM, New York, NY, USA, 2989-2994.
10. Massie, T. H. and Salisbury, K. The PHANTOM Haptic Interface: A Device for Probing Virtual Objects. In Proceedings of the ASME Winter Annual Meeting, Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, 1994.
11. Pangaro, G., Maynes-Aminzade, D., and Ishii, H. 2002. The Actuated Workbench: computer-controlled actuation in tabletop tangible interfaces. In Proceedings of the 15th Annual ACM Symposium on User Interface Software and Technology (UIST '02), ACM, New York, NY, USA, 181-190.
12. Patten, J., Ishii, H., Hines, J., and Pangaro, G. 2001. Sensetable: a wireless object tracking platform for tangible user interfaces. In CHI '01, ACM, New York, NY, 253-260.
13. Patten, J., Recht, B., and Ishii, H. 2006. Interaction techniques for musical performance with tabletop tangible interfaces. In Proceedings of the 2006 ACM SIGCHI International Conference on Advances in Computer Entertainment Technology (ACE '06), ACM, New York, NY, USA, Article 27.
14. Patten, J. and Ishii, H. 2007. Mechanical constraints as computational constraints in tabletop tangible interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '07), ACM, New York, NY, USA, 809-818.
15. Piper, B., Ratti, C., and Ishii, H. Illuminating Clay: A 3-D Tangible Interface for Landscape Analysis. Proceedings of CHI 2002, 355-364.
16. Plesniak, W. J. Haptic holography: an early computational plastic. Ph.D. Thesis, Program in Media Arts and Sciences, Massachusetts Institute of Technology, June 2001.
17. Poupyrev, I., Nashida, T., Maruyama, S., Rekimoto, J., and Yamaji, Y. 2004. Lumen: interactive visual and shape display for calm computing. In ACM SIGGRAPH 2004 Emerging Technologies (SIGGRAPH '04), Heather Elliott-Famularo (Ed.), ACM, New York, NY, USA, 17.
18. Poupyrev, I., Nashida, T., and Okabe, M. 2007. Actuation and tangible user interfaces: the Vaucanson duck, robots, and shape displays. In Proceedings of the 1st International Conference on Tangible and Embedded Interaction (TEI '07), ACM, New York, NY.
19. Rekimoto, J., Ullmer, B., and Oba, H. 2001. DataTiles: a modular platform for mixed physical and graphical interactions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '01), ACM, New York, NY, USA, 269-276.
20. Rosenfeld, D., Zawadzki, M., Sudol, J., and Perlin, K. Physical objects as bidirectional user interface elements. IEEE Computer Graphics and Applications, 24(1), 2004, 44-49.
21. Sugimoto, M., Kagotani, G., Kojima, M., Nii, H., Nakamura, A., and Inami, M. 2005. Augmented Coliseum: display-based computing for augmented reality inspiration computing robot. In ACM SIGGRAPH 2005 Emerging Technologies (SIGGRAPH '05), Donna Cox (Ed.), ACM, New York, NY, USA, Article 1.
22. Underkoffler, J. and Ishii, H. 1999. Urp: a luminous-tangible workbench for urban planning and design. In CHI '99, ACM, New York, NY, 386-393.
23. Weiss, M., Schwarz, F., Jakubowski, S., and Borchers, J. 2010. Madgets: actuating widgets on interactive tabletops. In Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology (UIST '10), ACM, New York, NY, 293-302.
24. ART+COM, Kinetic Sculpture. http://www.artcom.de/en/projects/project/detail/kinetic-sculpture/