1. INTRODUCTION
Motion capture, motion tracking, or mocap are terms used to describe the process of recording movement and translating that movement onto a digital model. It is used in military, entertainment, and robotics applications. In filmmaking it refers to recording the actions of human actors and using that information to animate digital character models in 2D or 3D computer animation. When it includes the face and fingers and captures subtle expressions, it is often referred to as performance capture. In motion capture sessions, the movements of one or more actors are sampled many times per second, although with most techniques (recent developments from Weta use images for 2D motion capture and project into 3D) motion capture records only the movements of the actor, not his or her visual appearance. This animation data is mapped to a 3D model so that the model performs the same actions as the actor. This is comparable to the older technique of rotoscoping, as in the 1978 animated film The Lord of the Rings, where the visual appearance of the motion of an actor was filmed and then used as a guide for the frame-by-frame motion of a hand-drawn animated character.
2. HISTORY OF MOTION CAPTURE
The use of motion capture to animate characters on computers is relatively recent: it started in the 1970s and is only now beginning to spread. Motion capture (the recording of movement) is the recording of the movements of the human body (or any other movement) for immediate analysis or playback. The captured information can be as simple as the position of the body in space or as complex as a capture of the face and the deformation of the muscles. The captured motion can be exported to various formats such as BVH, BIP and FBX, which can be used to animate 3D characters in 3ds Max, Maya, Poser, iClone, Blender and other packages. Motion capture for animation is the superposition of human movement onto a virtual character. This mapping can be direct, such as the movement of a human arm driving the arm of a virtual character, or indirect, such as the pattern of a human hand driving something more abstract, like an effect of light or color. The idea of copying human motion is of course not new. To obtain the most convincing human movement in "Snow White", Disney studios traced the animation over film of real actors. This method, called rotoscoping, has been used successfully ever since. In the 1970s, when it became possible to animate characters by computer, animators adapted traditional techniques, including rotoscoping. Today, capture technology is mature and diverse, and it can be classified into three broad categories:
 Mechanical motion capture
 Optical motion capture
 Magnetic motion capture
Although these techniques are effective, they still have some drawbacks (weight, cost, and so on). There is little doubt, however, that motion capture will become one of the basic tools of animation.
3. TYPES OF MOTION CAPTURE
3.1 MECHANICAL MOTION CAPTURE
This technique of motion capture is achieved through the use of an exoskeleton. Each joint is connected to an angular encoder. The value of movement of each encoder (rotation, etc.) is recorded by a computer which, knowing the relative positions of the encoders (and therefore of the joints), can rebuild these movements on the screen using software. An offset is applied to each encoder, because it is very difficult to match its position exactly with that of the real joint (especially in the case of human movements).
3.1.1 ADVANTAGES AND DISADVANTAGES
This technique offers high precision and has the advantage of not being influenced by external factors (such as the quality or the number of cameras in optical mocap). But the capture is limited by the mechanical constraints of the encoders and the exoskeleton. It should be noted that exoskeletons generally use wired connections to link the encoders to the computer. It is much more difficult to move wearing a fairly heavy exoskeleton connected by a large number of wires than wearing simple small reflective spheres: freedom of movement is rather limited. The accuracy of reproduction of the movement depends on the positions of the encoders and on the modeling of the skeleton, and the exoskeleton must be adjusted to each performer's morphology. The big disadvantage comes from the encoders themselves: although they measure the relative movement between segments with great precision, they cannot locate the captured object absolutely in space, so optical positioning methods must then be used to place the animation in a set. Finally, each object to be animated needs its own exoskeleton, and it is quite complicated to measure the interaction of several exoskeletons, so a scene involving several people is very difficult to implement.
3.2 MAGNETIC MOTION CAPTURE
Magnetic motion capture is done by generating an electromagnetic field in which the sensors, which are electrical coils, are placed. The sensors are located with respect to a reference frame of three axes x, y, z. Their position is determined from the disturbance of the field created by each coil, measured through an antenna, and their orientation can then also be deduced.
3.2.1 ADVANTAGES AND DISADVANTAGES
The advantage of this method is that the captured data is accurate and, apart from the position calculation itself, no further computation is needed before it can be used. But any metal object disturbs the magnetic field and distorts the data.
3.3 OPTICAL MOTION CAPTURE
Optical capture is based on shooting with several synchronized cameras; combining the (x, y) coordinates of the same object seen from different angles allows the (x, y, z) coordinates to be deduced. This method involves the consideration of complex problems such as optical parallax, the distortion of the lenses used, and so on. The signal therefore undergoes many interpolations. However, a correct calibration of these parameters allows a high accuracy of the collected data.
The operating principle is similar to radar: the cameras emit radiation (usually red and/or infrared), which is reflected by the markers (whose surface is made of an ultra-reflective material) and returned to the same cameras. These are sensitive to one wavelength only and see the markers as white spots in the video (or grayscale spots for the latest cameras). Cross-checking the information from each camera (therefore a minimum of two cameras) determines the position of the marker in the virtual space.
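As an illustration of the principle, the following Python sketch (a minimal example assuming ideal pinhole cameras described by 3x4 projection matrices P1 and P2 obtained from calibration; all names and values are hypothetical) triangulates the 3D position of one marker from its 2D coordinates in two synchronized views using the standard linear (DLT) method. Real systems also correct for lens distortion, and cross-checking more than two cameras simply adds more rows to the system, making the fix more robust to noise and occlusion.

import numpy as np

def triangulate(P1, P2, uv1, uv2):
    # Linear (DLT) triangulation of one marker seen by two calibrated cameras.
    # P1, P2: 3x4 projection matrices; uv1, uv2: (u, v) image coordinates.
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the singular vector for the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize to (x, y, z)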
4. METHODS AND SYSTEMS
Motion tracking or motion capture started as a photogrammetric analysis tool in
biomechanics research in the 1970s and 1980s, and expanded into education,
training, sports and recently computer animation for television, cinema and video
games as the technology matured. A performer wears markers near each joint to
identify the motion by the positions or angles between the markers. Acoustic,
inertial, LED, magnetic or reflective markers, or combinations of any of these, are
tracked, optimally at least two times the frequency rate of the desired motion, to submillimeter positions.
4.1 OPTICAL SYSTEMS
Optical systems utilize data captured from image sensors to triangulate the
3D position of a subject between one or more cameras calibrated to provide
overlapping projections. Data acquisition is traditionally implemented using
special markers attached to an actor; however, more recent systems are able to
generate accurate data by tracking surface features identified dynamically for
each particular subject. Tracking a large number of performers or expanding the
capture area is accomplished by the addition of more cameras. These systems
produce data with 3 degrees of freedom for each marker, and rotational information must be inferred from the relative orientation of three or more markers; for instance, shoulder, elbow and wrist markers providing the angle of the elbow.
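For example, a minimal Python sketch (with hypothetical marker coordinates) of how a joint angle such as the elbow angle can be inferred from three positional markers:

import numpy as np

def joint_angle(a, b, c):
    # Angle at joint b (in degrees) formed by markers a-b-c, e.g. shoulder-elbow-wrist.
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cos_angle = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# Hypothetical marker positions in meters: a straight arm gives about 180 degrees.
shoulder, elbow, wrist = (0.0, 1.4, 0.0), (0.3, 1.4, 0.0), (0.6, 1.4, 0.0)
print(joint_angle(shoulder, elbow, wrist))  # -> 180.0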
4.1.1 PASSIVE MARKERS
Figure 4.1: A dancer wearing a suit used in an optical motion capture system.
Figure 4.2: Several markers are placed at specific points on an actor's face during facial optical motion capture.
Passive optical systems use markers coated with a retroreflective material to reflect back light that is generated near the camera's lens. The camera's threshold can be adjusted so that only the bright reflective markers are sampled, ignoring skin and fabric.
The centroid of the marker is estimated as a position within the two-dimensional image that is captured. The grayscale value of each pixel can be used to provide sub-pixel accuracy by finding the centroid of the Gaussian-shaped intensity blob.
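A minimal sketch of this intensity-weighted centroid computation (assuming a small grayscale patch already thresholded around one marker; the pixel values below are made up):

import numpy as np

def marker_centroid(patch):
    # Sub-pixel centroid (x, y) of a bright marker blob in a grayscale patch.
    ys, xs = np.indices(patch.shape)
    total = patch.sum()
    return (xs * patch).sum() / total, (ys * patch).sum() / total

# A marker imaged as a small Gaussian-like blob centered between pixel columns 2 and 3.
patch = np.array([[0, 1, 2, 2, 1, 0],
                  [1, 4, 9, 9, 4, 1],
                  [1, 4, 9, 9, 4, 1],
                  [0, 1, 2, 2, 1, 0]], dtype=float)
print(marker_centroid(patch))  # -> (2.5, 1.5), a sub-pixel position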
An object with markers attached at known positions is used to calibrate the
cameras and obtain their positions and the lens distortion of each camera is
measured. Providing two calibrated cameras see a marker, a 3 dimensional fix
can be obtained. Typically a system will consist of around 6 to 24 cameras.
Systems of over three hundred cameras exist to try to reduce marker swap.
Extra cameras are required for full coverage around the capture subject and
multiple subjects.
Vendors have constraint software to reduce problems from marker swapping
since all markers appear identical. Unlike active marker systems and magnetic
systems, passive systems do not require the user to wear wires or electronic
equipment. Instead, hundreds of rubber balls are attached with reflective tape,
which needs to be replaced periodically. The markers are usually attached
directly to the skin (as in biomechanics), or they are velcroed to a performer
wearing a full body spandex/lycra suit designed specifically for motion capture.
This type of system can capture large numbers of markers at frame rates as high as 2,000 fps. The frame rate for a given system is often balanced between resolution and speed: a 4-megapixel system normally runs at 370 hertz, but can reduce the resolution to 0.3 megapixels and then run at 2,000 hertz. Typical systems are $100,000 for 4-megapixel 360-hertz systems, and $50,000 for 0.3-megapixel 120-hertz systems.
4.1.2 ACTIVE MARKER
Active optical systems triangulate positions by illuminating one LED at a time
very quickly or multiple LEDs with software to identify them by their relative
positions, somewhat akin to celestial navigation. Rather than reflecting light
back that is generated externally, the markers themselves are powered to emit their own light. Since the inverse square law provides one quarter of the power at twice the distance, this can increase the distances and volume for capture.
An episode of the TV series Stargate SG-1 was produced using an active optical system for the VFX. The actor had to walk around props that would make motion capture difficult for other, non-active optical systems.
ILM used active markers in Van Helsing to allow capture of the Harpies on
very large sets. The power to each marker can be provided sequentially in phase
with the capture system providing a unique identification of each marker for a
given capture frame at a cost to the resultant frame rate. The ability to identify
each marker in this manner is useful in real time applications. The alternative
method of identifying markers is to do it algorithmically requiring extra
processing of the data.
4.1.3 TIME MODULATED ACTIVE MARKER
Figure 4.3
A high-resolution active marker system with 3,600 × 3,600 resolution at 480 hertz providing real-time submillimeter positions.
Active marker systems can further be refined by strobing one marker on at a time, or by tracking multiple markers over time and modulating the amplitude or pulse width to provide each marker's ID. 12-megapixel spatial resolution modulated systems show more subtle movements than 4-megapixel optical systems by having both high resolution and high speed. These motion capture systems are typically under $50,000 for an eight-camera, 12-megapixel spatial resolution 480-hertz system with one actor.
Figure 4.4
IR sensors can compute their location when lit by mobile multi-LED emitters, e.g. in a moving car. With an ID per marker, these sensor tags can be worn under clothing and tracked at 500 Hz in broad daylight.
4.1.4 SEMI-PASSIVE IMPERCEPTIBLE MARKER
One can reverse the traditional approach based on high-speed cameras. Such systems use inexpensive multi-LED high-speed projectors. The specially built multi-LED IR projectors optically encode the space. Instead of retro-reflective or active light-emitting diode (LED) markers, the system uses photosensitive marker tags to decode the optical signals. By attaching tags with photo sensors to scene points, the tags can compute not only their own locations, but also their own orientation, incident illumination, and reflectance.
These tracking tags work in natural lighting conditions and can be imperceptibly embedded in attire or other objects. The system supports an
unlimited number of tags in a scene, with each tag uniquely identified to
eliminate marker reacquisition issues. Since the system eliminates a high speed
camera and the corresponding high-speed image stream, it requires significantly
lower data bandwidth. The tags also provide incident illumination data which
can be used to match scene lighting when inserting synthetic elements. The
technique appears ideal for on-set motion capture or real-time broadcasting of
virtual sets but has yet to be proven.
4.1.5 MARKERLESS
Emerging techniques and research in computer vision are leading to the rapid development of the markerless approach to motion capture. Markerless systems, such as those developed at Stanford, the University of Maryland, MIT, and the Max Planck Institute, do not require subjects to wear special equipment for tracking. Special computer algorithms are designed to allow the system to analyze multiple streams of optical input and identify human forms, breaking them down into constituent parts for tracking. Applications of this technology extend deeply into popular imagination about the future of computing technology. Several commercial solutions for markerless motion capture have also been introduced. Products currently under development include Microsoft's Kinect system for PC and console systems.
4.2 NON-OPTICAL SYSTEMS
4.2.1 INERTIAL SYSTEMS
Inertial Motion Capture technology is based on miniature inertial sensors,
biomechanical models and sensor fusion algorithms. The motion data of the
inertial sensors (inertial guidance system) is often transmitted wirelessly to a
computer, where the motion is recorded or viewed. Most inertial systems use
gyroscopes to measure rotational rates. These rotations are translated to a skeleton
in the software. Much like optical markers, the more gyros the more natural the
data. No external cameras, emitters or markers are needed for relative motions.
Inertial mocap systems capture the full six degrees of freedom body motion of a
human in real time. Benefits of using inertial systems include: no solving,
portability, and large capture areas. Disadvantages include lower positional
accuracy and positional drift which can compound over time.
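A minimal sketch of why such drift compounds: integrating a rate-gyro signal that carries a small constant bias (the numbers below are hypothetical) makes the estimated angle walk away from the true one over time, and the effect is worse for position, which requires double integration.

import numpy as np

dt, t_end = 0.01, 60.0                     # 100 Hz samples for one minute
t = np.arange(0.0, t_end, dt)
true_rate = np.zeros_like(t)               # the body segment is actually not rotating
bias = np.deg2rad(0.5)                     # hypothetical 0.5 deg/s gyro bias
noise = np.random.normal(0.0, np.deg2rad(0.2), t.size)
measured_rate = true_rate + bias + noise

angle_estimate = np.cumsum(measured_rate) * dt   # naive integration of the gyro signal
print(np.rad2deg(angle_estimate[-1]))            # roughly 30 degrees of drift after a minute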
These systems are similar to the Wii controllers but are more sensitive and have
greater resolution and update rates. They can accurately measure the direction to
the ground to within a degree. The popularity of inertial systems is rising amongst
independent game developers, mainly because of the quick and easy set up
resulting in a fast pipeline. A range of suits are now available from various
manufacturers and base prices range from $25,000 to $80,000 USD.
4.2.2 MECHANICAL MOTION
Mechanical motion capture systems directly track body joint angles and are often
referred to as exo-skeleton motion capture systems, due to the way the sensors are
attached to the body. A performer attaches the skeletal-like structure to their body
and as they move so do the articulated mechanical parts, measuring the performer’s
relative motion. Mechanical motion capture systems are real-time, relatively low-
cost, free-of-occlusion, and wireless (untethered) systems that have unlimited
capture volume. Typically, they are rigid structures of jointed, straight metal or plastic rods linked together with potentiometers that articulate at the joints of the body. These suits tend to be in the $25,000 to $75,000 range, plus an external absolute positioning system.
4.2.3 MAGNETIC SYSTEMS
Magnetic systems calculate position and orientation by the relative magnetic flux
of three orthogonal coils on both the transmitter and each receiver. The relative
intensity of the voltage or current of the three coils allows these systems to
calculate both range and orientation by meticulously mapping the tracking volume.
The sensor output is 6DOF, which provides useful results obtained with two-thirds
the number of markers required in optical systems; one on upper arm and one on
lower arm for elbow position and angle. The markers are not occluded by
nonmetallic objects but are susceptible to magnetic and electrical interference from
metal objects in the environment, like rebar (steel reinforcing bars in concrete) or
wiring, which affect the magnetic field, and electrical sources such as monitors,
lights, cables and computers. The sensor response is nonlinear, especially toward
edges of the capture area. The wiring from the sensors tends to preclude extreme
performance movements. The capture volumes for magnetic systems are
dramatically smaller than they are for optical systems. With the magnetic systems,
there is a distinction between “AC” and “DC” systems: one uses square pulses, the other uses sine wave pulses.
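As a rough illustration only (not a vendor algorithm): if the transmitter is treated as a magnetic dipole, the field magnitude falls off roughly with the cube of the distance, so a receiver can estimate its range from the measured flux, and orientation then follows from comparing the responses of the three orthogonal coils. A toy Python sketch under that dipole assumption, with a hypothetical calibration constant k:

def estimate_range(field_magnitude, k=1.0):
    # Toy range estimate assuming |B| ~ k / r**3 (a dipole far-field approximation
    # that ignores the angular dependence of the field).
    return (k / field_magnitude) ** (1.0 / 3.0)

# If the calibrated field magnitude is 1.0 at 1 m, a reading of 0.125 implies about 2 m.
print(estimate_range(0.125, k=1.0))  # -> 2.0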
5. HUMAN MOCAP
The science of human motion analysis is fascinating because of its highly
interdisciplinary nature and wide range of applications. Histories of science usually
begin with the ancient Greeks, who first left a record of human inquiry concerning
the nature of the world in relationship to our powers of perception. Aristotle (384-322 B.C.) might be considered the first biomechanician. He wrote the book 'De Motu Animalium' (On the Movement of Animals). He not only saw animals' bodies as mechanical systems, but pursued such questions as the physiological difference between imagining performing an action and actually doing it.
Figure 4.5
Nearly two thousand years later, in his famous anatomic drawings, Leonardo da
Vinci (1452-1519) sought to describe the mechanics of standing, walking up and
down hill, rising from a sitting position, and jumping. Galileo (1564-1642)
followed a hundred years later with some of the earliest attempts to mathematically
analyze physiologic function. Building on the work of Galilei, Borelli (1608-1679)
figured out the forces required for equilibrium in various joints of the human body
well before Newton published the laws of motion. He also determined the position
of the human center of gravity, calculated and measured inspired and expired air
volumes, and showed that inspiration is muscle-driven and expiration is due to
tissue elasticity. The early work of these pioneers of biomechanics was followed
up by Newton (1642-1727),Bernoulli (1700-1782), Euler (1707-1783), Poiseuille
(1799-1869), Young (1773-1829), and others of equal fame. Muybridge (1830-
1904) was the first photographer to dissect human and animal motion (see figure at
heading human motion analysis). This technique was first used scientifically by
Marey (1830-1904), who correlated ground reaction forces with movement and
pioneered modern motion analysis. In the 20th century, many researchers and (biomedical) engineers contributed to an increasing knowledge of human kinematics and kinetics. This chapter gives a short overview of the technologies used in these fields.
5.1 HUMAN MOTION ANALYSIS
Many different disciplines use motion analysis systems to capture movement and
posture of the human body. Basic scientists seek a better understanding of the
mechanisms that are used to translate muscular contractions about articulating
joints into functional accomplishment, e.g. walking. Increasingly, researchers
endeavor to better appreciate the relationship between the human motor control
system and gait dynamics.
Figure 5.1 Figure 5.2
In the realm of clinical gait analysis, medical professionals apply an evolving knowledge base in the interpretation of the walking patterns of impaired ambulators for the planning of treatment protocols, e.g. orthotic prescription and surgical intervention, and to determine the extent to which an individual's gait pattern has been affected by an already diagnosed disorder. With respect to sports, athletes and their coaches use motion analysis techniques in a ceaseless quest for improvements in performance while
avoiding injury. The use of motion capture for computer character animation or
virtual reality (VR) applications is relatively new. The information captured can
be as general as the position of the body in space or as complex as the
deformations of the face and muscle masses. The mapping can be direct, such
as human arm motion controlling a character’s arm motion, or indirect, such as
human hand and finger patterns controlling a character’s skin color or
emotional state. The idea of copying human motion for animated characters is,
of course, not new. To get convincing motion for the human characters in Snow
White, Disney studios traced animation over film footage of live actors playing
out the scenes. This method, called rotoscoping, has been successfully used for
human characters. In the late '70s, when it began to be feasible to animate
characters by computer, animators adapted traditional techniques, including
rotoscoping.
Generally, motion analysis data collection protocols, measurement precision,
and data reduction models have been developed to meet the requirements for
their specific settings. For example, sport assessments generally require higher
data acquisition rates because of increased velocities compared to normal
walking. In VR applications, real-time tracking is essential for a realistic
experience of the user, so the time lag should be kept to a minimum. Years of technological development have resulted in many systems, which can be categorized as mechanical, optical, magnetic, acoustic and inertial trackers. The human
body is often considered as a system of rigid links connected by joints. Human
body parts are not actually rigid structures, but they are customarily treated as
such during studies of human motion. Mechanical trackers utilize rigid or
flexible goniometers which are worn by the user. Goniometers within the
skeleton linkages have a general correspondence to the joints of the user. These
angle measuring devices provide joint angle data to kinematic algorithms which
are used to determine body posture. Attachment of the body-based linkages as
well as the positioning of the goniometers present several problems. The soft
tissue of the body allows the position of the linkages relative to the body to
change as motion occurs. Even without these changes, alignment of the
goniometer with body joints is difficult. This is specifically true for multiple
degree of freedom (DOF) joints, like the shoulder. Due to variations in
anthropometric measurements, body-based systems must be recalibrated for
each user.
Figure 5.3
Optical sensing encompasses a large and varying collection of technologies.
Image-based systems determine position by using multiple cameras to track
predetermined points (markers) on the subject’s body segments, aligned with
specific bony landmarks. Position is estimated through the use of multiple 2D
images of the working volume. Stereometric techniques correlate common
tracking points on the tracked objects in each image and use this information
along with knowledge concerning the relationship between each of the images
and camera parameters to calculate position. The markers can either be passive (reflective) or active (light emitting). Reflective systems use infrared (IR) LEDs mounted around the camera lens, along with IR pass filters placed over the camera lens, and measure the light reflected from the markers. Optical systems based on pulsed LEDs measure the infrared light emitted by the LEDs placed
on the body segments. Also camera tracking of natural objects without the aid
of markers is possible, but in general less accurate. It is largely based on
computer vision techniques of pattern recognition and often requires high
computational resources. Structured light systems use lasers or beamed light to
create a plane of light that is swept across the image. They are more appropriate
for mapping applications than dynamic tracking of human body motion. Optical
systems suffer from occlusion (line of sight) problems whenever a required
light path is blocked. Interference from other light sources or reflections may
also be a problem, which can result in so-called ghost markers.
Figure 5.4
Magnetic motion capture systems utilize sensors placed on the body to
measure the low- frequency magnetic fields generated by a transmitter source.
The transmitter source is constructed of three perpendicular coils that emit a
magnetic field when a current is applied. The current is sent to these coils in a
sequence that creates three mutually perpendicular fields during each
measurement cycle. The 3D sensors measure the strength of those fields which
is proportional to the distance of each coil from the field emitter assembly. The
sensors and source are connected to a processor that calculates position and
orientation of each sensor based on its nine measured field values. Magnetic
systems do not suffer from line of sight problems because the human body is
transparent for the used magnetic fields. However, the shortcomings of
magnetic tracking systems are directly related to the physical characteristics of
magnetic fields. Magnetic fields decrease in power rapidly as the distance from
the generating source increases, and they can easily be disturbed by (ferro)magnetic materials within the measurement volume. Acoustic tracking systems
use ultrasonic pulses and can determine position through either time-of-flight
of the pulses and triangulation or phase coherence. Both outside-in and inside-
out implementations are possible, which means the transmitter can either be
placed on a body segment or fixed in the measurement volume. The physics of
sound limit the accuracy, update rate and range of acoustic tracking systems. A
clear line of sight must be maintained and tracking can be disturbed by
reflections of the sound.
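A minimal Python sketch of the time-of-flight idea (the receiver positions and the 343 m/s speed of sound are hypothetical/approximate values, not from the text): pulse travel times are converted to ranges, and the emitter position is found by linearizing the sphere equations and solving them in a least-squares sense.

import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air, approximate

def locate(receivers, tof):
    # Least-squares position from times of flight to four or more fixed receivers.
    r = np.asarray(receivers, float)
    d = SPEED_OF_SOUND * np.asarray(tof, float)   # ranges from time of flight
    # Subtracting the first sphere equation from the others gives a linear system.
    A = 2.0 * (r[1:] - r[0])
    b = (d[0]**2 - d[1:]**2) + np.sum(r[1:]**2, axis=1) - np.sum(r[0]**2)
    return np.linalg.lstsq(A, b, rcond=None)[0]

receivers = [(0, 0, 0), (3, 0, 0), (0, 3, 0), (0, 0, 3)]   # hypothetical positions (m)
true_pos = np.array([1.0, 1.2, 0.8])
tof = [np.linalg.norm(true_pos - np.array(p)) / SPEED_OF_SOUND for p in receivers]
print(locate(receivers, tof))   # -> approximately [1.0, 1.2, 0.8]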
Inertial sensors use the property of bodies to maintain constant translational
and rotational velocity, unless disturbed by forces or torques, respectively. The
vestibular system, located in the inner ear, is a biological 3D inertial sensor. It
can sense angular motion as well as linear acceleration of the head. The
vestibular system is important for maintaining balance and stabilization of the
eyes relative to the environment. Practical inertial tracking is made possible by
advances in miniaturized and micro machined sensor technologies, particularly
in silicon accelerometers and rate sensors. A rate gyroscope measures angular
velocity, and if integrated over time provides the change in angle with respect
to an initially known angle.
Figure 5.5
An accelerometer measures accelerations, including gravitational
acceleration g. If the angle of the sensor with respect to the vertical is known,
the gravity component can be removed and by numerical integration, velocity
and position can be determined. Noise and bias errors associated with small and
inexpensive sensors make it impractical to track orientation and position for
long time periods if no compensation is applied. By combining the signals from
the inertial sensors with aiding/complementary sensors and using knowledge
about their signal characteristics, drift and other errors can be minimized.
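To make the effect of such a bias concrete, here is a minimal sketch (hypothetical numbers): gravity is removed from a vertical accelerometer reading and the result is integrated twice; an uncorrected bias b then produces a position error that grows roughly as 0.5*b*t^2, which is why aiding sensors and sensor fusion are needed.

import numpy as np

dt = 0.01
t = np.arange(0.0, 30.0, dt)                  # 30 s of samples at 100 Hz
g = 9.81
measured = g + 0.05 * np.ones_like(t)         # stationary sensor with a 0.05 m/s^2 bias

linear_acc = measured - g                     # gravity removed using the known attitude
velocity = np.cumsum(linear_acc) * dt         # first integration
position = np.cumsum(velocity) * dt           # second integration
print(position[-1])                           # about 0.5 * 0.05 * 30**2 = 22.5 m of drift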
5.2 AMBULATORY TRACKING
Commercial optical systems such as Vicon (reflective markers) or Optotrak (active markers) are often considered a 'standard' in human motion analysis.
Although these systems provide accurate position information, there are some
important limitations. The most important factors are the high costs, occlusion
problems and limited measurement volume. The use of a specialized laboratory
with fixed equipment impedes many applications, like monitoring of daily life
activities, control of prosthetics or assessment of workload in ergonomic
studies. In the past few years, the trend in the health care system toward early discharge and toward monitoring and training patients in their own environment has promoted a large development of non-invasive, portable and wearable systems.
Inertial sensors have been successfully applied for such clinical measurements
outside the lab. Moreover, it has opened many possibilities to capture motion
data for athletes or for animation purposes without the need for a studio.
The orientation obtained by present-day micro machined gyroscopes
typically shows an error that accumulates at a rate of some degrees per minute. For accurate and drift-free orientation estimation, Xsens has developed an algorithm to combine the signals from 3D gyroscopes, accelerometers and magnetometers.
Accelerometers are used to determine the direction of the local vertical by
sensing acceleration due to gravity. Magnetic sensors provide stability in the
horizontal plane by sensing the direction of the earth magnetic field like a
compass. Data from these complementary sensors are used to eliminate drift by
continuous correction of the orientation obtained by angular rate sensor data.
This combination is also known as an attitude and heading reference system (AHRS).
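A minimal single-angle sketch of this kind of fusion (a generic complementary filter, not the Xsens algorithm; the blend factor alpha is a hypothetical tuning value): the gyro is integrated for the short term, while the inclination derived from the accelerometer slowly pulls the estimate back and removes the drift.

import numpy as np

def complementary_filter(gyro_rate, acc, dt, alpha=0.98):
    # Drift-corrected pitch estimate (rad) from a rate gyro and a 2-axis accelerometer.
    # gyro_rate: angular rates (rad/s); acc: (ax, az) samples that include gravity.
    angle = np.arctan2(acc[0][0], acc[0][1])      # initialize from the accelerometer
    estimates = []
    for w, (ax, az) in zip(gyro_rate, acc):
        gyro_angle = angle + w * dt               # short term: integrate the gyro
        acc_angle = np.arctan2(ax, az)            # long term: direction of gravity
        angle = alpha * gyro_angle + (1.0 - alpha) * acc_angle
        estimates.append(angle)
    return np.array(estimates)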
For human motion tracking, the inertial motion trackers are placed on each
body segment to be tracked. The inertial motion trackers give absolute
orientation estimates which are also used to calculate the 3D linear
accelerations in world coordinates which in turn give translation estimates of
the body segments.
Figure 5.6
Since the rotation from sensor to body segment and its position with respect
to the axes of rotation are initially unknown, a calibration procedure is
necessary. An advanced articulated body model constrains the movements of
segments with respect to each other and eliminates any integration drift.
5.3 INERTIAL SENSORS
A single-axis accelerometer consists of a mass suspended by a spring in a housing. Springs (within their linear region) are governed by a physical principle known as Hooke's law. Hooke's law states that a spring will exhibit a restoring force which is proportional to the amount it has been expanded or compressed. Specifically, F = kx, where k is the constant of proportionality between displacement x and force F. The other important physical principle is that of
Newton’s second law of motion which states that a force operating on a mass
which is accelerated will exhibit a force with a magnitude F = ma. This force
causes the mass to either compress or expand the spring under the constraint
that F = ma = kx. Hence an acceleration a will cause the mass to be displaced
by x = ma/k or, if we observe a displacement of x, we know the mass has
undergone an acceleration of a = kx/m. In this way, the problem of measuring
acceleration has been turned into one of measuring the displacement of a mass
connected to a spring. In order to measure multiple axes of acceleration, this
system needs to be duplicated along each of the required axes. Gyroscopes are
instruments that are used to measure angular motion. There are two broad
categories: (1) mechanical gyroscopes and (2) optical gyroscopes. Within both
of these categories, there are many different types available.
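A short worked example of the relation above (the values of k, m and x are hypothetical), turning an observed displacement of the proof mass into an acceleration via a = kx/m:

k = 250.0    # spring constant, N/m (hypothetical)
m = 0.005    # proof mass, kg (hypothetical)
x = 0.0002   # observed displacement of the mass, m

a = k * x / m   # from F = ma = kx
print(a)        # -> 10.0 m/s^2, roughly 1 g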
Figure 5.7
The first mechanical gyroscope was built by Foucault in 1852, as a
gimbaled wheel that stayed fixed in space due to angular momentum while the
platform rotated around it. Mechanical gyroscopes operate on the basis of
conservation of angular momentum by sensing the change in direction of their angular momentum. According to Newton's second law, the angular momentum of a body will remain unchanged unless it is acted upon by a torque. The fundamental equation describing the behavior of the gyroscope is τ = dL/dt = d(Iω)/dt = Iα, where the vectors τ and L are the torque on the gyroscope and its angular momentum, respectively. The scalar I is its moment of inertia, the vector ω is its angular velocity, and the vector α is its angular acceleration.
Figure 5.8
Gimbaled and laser gyroscopes are not suitable for human motion analysis due to their large size and high cost. Over the last few years, micro-electromechanical systems (MEMS) inertial sensors have become more available. Vibrating mass gyroscopes are small, inexpensive and have low power requirements, making them ideal for human movement analysis. A vibrating element (vibrating resonator), when rotated, is subjected to the Coriolis effect, which causes a secondary vibration orthogonal to the original vibrating direction. By sensing the secondary vibration, the rate of turn can be detected. The Coriolis force is given by Fc = -2m (ω × v), where m is the mass, v the momentary speed of the mass relative to the moving object to which it is attached, and ω the angular velocity of that object. Various micromachined geometries are available, many of which use the piezoelectric effect for vibration excitation and detection.
5.4 SENSOR FUSION
The traditional application area of inertial sensors is navigation as well as
guidance and stabilization of military systems. Position, velocity and attitude
are obtained using accurate, but large gyroscopes and accelerometers, in
combination with other measurement devices such as GPS, radar or a barometric altimeter. Generally, signals from these devices are fused using a Kalman filter
to obtain quantities of interest (see figure below). The Kalman filter is useful
for combining data from several different indirect and noisy measurements. It
weights the sources of information appropriately with knowledge about the
signal characteristics based on their models to make the best use of all the data
from each of the sensors. There is no perfect sensor; each type has its strong
and weak points. The idea behind sensor fusion is that characteristics of one
type of sensor are used to overcome the limitations of another sensor. For
example, magnetic sensors are used as a reference to prevent the gyroscope
integration drift about the vertical axis in the orientation estimates. However,
iron and other magnetic materials will disturb the local magnetic field and as a
consequence, the orientation estimate. The spatial and temporal features of
magnetic disturbances will be different from those related to gyroscope drift
errors.
Figure 5.9
This figure shows the complementary Kalman filter structure for position and orientation estimates, combining inertial and aiding measurements. The signals from the IMU (a − g and ω) provide the input for the INS. By double
integration of the accelerations, the position is estimated at a high frequency. At
a feasible lower frequency, the aiding system provides position estimates. The difference between the inertial and aiding estimates is delivered to the Kalman filter. Based on the system model, the Kalman filter estimates the propagation
of the errors. The outputs of the filter are fed back to correct the position,
velocity, acceleration and orientation estimates. Using this a priori knowledge,
the effects of both drift and disturbances can be minimized. The inertial sensors
of the inertial navigation system (INS) can be mounted on vehicles in such a
way that they stay leveled and pointed in a fixed direction. This system relies on
a set of gimbals and sensors attached on three axes to monitor the angles at all
times. Another type of INS is the strapdown system, which eliminates the use of gimbals and is suitable for human motion analysis. In this case, the gyros
and accelerometers are mounted directly to the structure of the vehicle or
strapped on the body segment. The measurements are made in reference to the
local axes of roll, pitch, and heading (or yaw). The clinical reference system
provides anatomically meaningful definitions of main segmental movements
(e.g. flexion-extension, abduction-adduction or supination-pronation).
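A minimal one-dimensional sketch of this complementary structure (a simplified direct Kalman filter rather than the error-state form in the figure; all noise values and the bias are hypothetical): the INS part double-integrates acceleration at a high rate, and whenever an aiding position measurement arrives, the innovation is used to correct position and velocity.

import numpy as np

dt = 0.01                                  # INS runs at 100 Hz
F = np.array([[1.0, dt], [0.0, 1.0]])      # transition for the state [position, velocity]
B = np.array([0.5 * dt**2, dt])            # how acceleration enters the state
H = np.array([[1.0, 0.0]])                 # the aiding sensor measures position only
Q = np.diag([1e-6, 1e-4])                  # process noise (hypothetical)
R = np.array([[0.05**2]])                  # 5 cm aiding position noise (hypothetical)

x, P = np.zeros(2), np.eye(2)              # state estimate and its covariance
true_pos = 0.0                             # the sensor is actually not moving
for k in range(3000):                      # 30 seconds
    acc = 0.02                             # accelerometer reading with a small bias
    x = F @ x + B * acc                    # INS prediction: double integration
    P = F @ P @ F.T + Q
    if k % 100 == 0:                       # a 1 Hz aiding position measurement
        y = np.array([true_pos]) - H @ x   # innovation: aiding minus inertial estimate
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
print(x[0])                                # stays bounded near zero instead of drifting ~9 m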
Figure 5.10 Figure 5.11
6. ADVANTAGES
Motion capture offers several advantages over traditional computer
animation of a 3D model:
 More rapid, even real time results can be obtained. In entertainment
applications this can reduce the costs of key frame-based animation. For
example: Hand Over
 The amount of work does not vary with the complexity or length of the
performance to the same degree as when using traditional techniques.
This allows many tests to be done with different styles or deliveries.
 Complex movement and realistic physical interactions such as
secondary motions, weight and exchange of forces can be easily
recreated in a physically accurate manner.
 The amount of animation data that can be produced within a given time
is extremely large when compared to traditional animation techniques.
This contributes to both cost effectiveness and meeting production
deadlines.
 Potential for free software and third-party solutions reducing its costs.
7. DISADVANTAGES
 Specific hardware and special programs are required to obtain and
process the data.
 The cost of the software, equipment and personnel required can
potentially be prohibitive for small productions.
 The capture system may have specific requirements for the space it is
operated in, depending on camera field of view or magnetic distortion.
 When problems occur it is easier to reshoot the scene rather than trying
to manipulate the data. Only a few systems allow real time viewing of
the data to decide if the take needs to be redone.
 The initial results are limited to what can be performed within the capture
volume without extra editing of the data.
 Movement that does not follow the laws of physics generally cannot be
captured.
 Traditional animation techniques, such as added emphasis on
anticipation and follow through, secondary motion or manipulating the
shape of the character, as with squash and stretch animation techniques,
must be added later.
 If the computer model has different proportions from the capture subject, artifacts may occur. For example, if a cartoon character has large, oversized hands, these may intersect the character's body if the human performer is not careful with their physical motion.
8. APPLICATIONS
 Video games often use motion capture to animate athletes, martial
artists, and other in- game characters. This has been done since the Atari
Jaguar CD-based game Highlander: The Last of the MacLeods, released
in 1995.
 Movies use motion capture for CG effects, in some cases replacing traditional cel animation, and for completely computer-generated creatures, such as Jar Jar Binks, Gollum, The Mummy, King Kong, and the Na'vi from the film Avatar.
 Sinbad: Beyond the Veil of Mists was the first movie made primarily
with motion capture, although many character animators also worked on
the film.
 In producing entire feature films with computer animation, the industry
is currently split between studios that use motion capture, and studios
that do not. Out of the three nominees for the 2006 Academy Award for
Best Animated Feature, two of the nominees (Monster House and the
winner Happy Feet) used motion capture, and only Disney/Pixar's Cars was animated without motion capture. In the ending credits of Pixar's film Ratatouille, a stamp appears labeling the film as "100% Pure Animation -- No Motion Capture!".
 Motion capture has begun to be used extensively to produce films which
attempt to simulate or approximate the look of live-action cinema, with
nearly photorealistic digital character models. The Polar Express used
motion capture to allow Tom Hanks to perform as several distinct digital
characters (for which he also provided the voices). The 2007 adaptation of the saga Beowulf animated digital characters whose appearances were based in part on the actors who provided their motions and voices. James Cameron's Avatar used this technique to create the Na'vi that inhabit Pandora. The Walt Disney Company has announced that it will distribute Robert Zemeckis's A Christmas Carol and Tim Burton's Alice in Wonderland using this technique. Disney has also acquired Zemeckis's ImageMovers Digital, which produces motion capture films.
 Television series produced entirely with motion capture animation
include Laflaque in Canada, Sprookjesboom and Cafe de Wereld in The
Netherlands, and Headcases in the UK.
 Virtual Reality and Augmented Reality allow users to interact with
digital content in real- time. This can be useful for training simulations,
visual perception tests, or performing virtual walk-throughs in a 3D
environment. Motion capture technology is frequently used in digital
puppetry systems to drive computer generated characters in real-time.
 Gait analysis is the major application of motion capture in clinical
medicine. Techniques allow clinicians to evaluate human motion across
several biometric factors, often while streaming this information live into
analytical software.
 During the filming of James Cameron's Avatar, all of the scenes involving this process were directed in real time, using a screen that converted the actors in their motion capture suits into what they would look like in the movie, making it easier for Cameron to direct the movie as it would be seen by the viewer. This method allowed Cameron to view the scenes from many more views and angles than would be possible with a pre-rendered animation. He was so proud of his pioneering methods that he even invited Steven Spielberg and George Lucas on set to watch him in action.
9. CONCLUSION
Although motion capture requires some technical means, it is quite possible to do it yourself at home at a reasonable cost and make your own short film.
Motion capture is a major step forward in the field of cinema, because a captured image can be reworked more simply: it is easier to modify a captured performance than a classically filmed scene, although this remains expensive. It is also a major asset in medicine; for example, it can be used to measure the benefit of an operation by recording the movement of the patient before and after the operation (such as when fitting a prosthesis), or perhaps, in the future, simply as part of a routine medical examination.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...Electric vehicle and photovoltaic advanced roles in enhancing the financial p...
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...
IJECEIAES
 
Embedded machine learning-based road conditions and driving behavior monitoring
Embedded machine learning-based road conditions and driving behavior monitoringEmbedded machine learning-based road conditions and driving behavior monitoring
Embedded machine learning-based road conditions and driving behavior monitoring
IJECEIAES
 
VARIABLE FREQUENCY DRIVE. VFDs are widely used in industrial applications for...
VARIABLE FREQUENCY DRIVE. VFDs are widely used in industrial applications for...VARIABLE FREQUENCY DRIVE. VFDs are widely used in industrial applications for...
VARIABLE FREQUENCY DRIVE. VFDs are widely used in industrial applications for...
PIMR BHOPAL
 
AI + Data Community Tour - Build the Next Generation of Apps with the Einstei...
AI + Data Community Tour - Build the Next Generation of Apps with the Einstei...AI + Data Community Tour - Build the Next Generation of Apps with the Einstei...
AI + Data Community Tour - Build the Next Generation of Apps with the Einstei...
Paris Salesforce Developer Group
 
Computational Engineering IITH Presentation
Computational Engineering IITH PresentationComputational Engineering IITH Presentation
Computational Engineering IITH Presentation
co23btech11018
 

Recently uploaded (20)

Comparative analysis between traditional aquaponics and reconstructed aquapon...
Comparative analysis between traditional aquaponics and reconstructed aquapon...Comparative analysis between traditional aquaponics and reconstructed aquapon...
Comparative analysis between traditional aquaponics and reconstructed aquapon...
 
Software Engineering and Project Management - Introduction, Modeling Concepts...
Software Engineering and Project Management - Introduction, Modeling Concepts...Software Engineering and Project Management - Introduction, Modeling Concepts...
Software Engineering and Project Management - Introduction, Modeling Concepts...
 
22CYT12-Unit-V-E Waste and its Management.ppt
22CYT12-Unit-V-E Waste and its Management.ppt22CYT12-Unit-V-E Waste and its Management.ppt
22CYT12-Unit-V-E Waste and its Management.ppt
 
1FIDIC-CONSTRUCTION-CONTRACT-2ND-ED-2017-RED-BOOK.pdf
1FIDIC-CONSTRUCTION-CONTRACT-2ND-ED-2017-RED-BOOK.pdf1FIDIC-CONSTRUCTION-CONTRACT-2ND-ED-2017-RED-BOOK.pdf
1FIDIC-CONSTRUCTION-CONTRACT-2ND-ED-2017-RED-BOOK.pdf
 
morris_worm_intro_and_source_code_analysis_.pdf
morris_worm_intro_and_source_code_analysis_.pdfmorris_worm_intro_and_source_code_analysis_.pdf
morris_worm_intro_and_source_code_analysis_.pdf
 
132/33KV substation case study Presentation
132/33KV substation case study Presentation132/33KV substation case study Presentation
132/33KV substation case study Presentation
 
Object Oriented Analysis and Design - OOAD
Object Oriented Analysis and Design - OOADObject Oriented Analysis and Design - OOAD
Object Oriented Analysis and Design - OOAD
 
Software Engineering and Project Management - Software Testing + Agile Method...
Software Engineering and Project Management - Software Testing + Agile Method...Software Engineering and Project Management - Software Testing + Agile Method...
Software Engineering and Project Management - Software Testing + Agile Method...
 
Data Driven Maintenance | UReason Webinar
Data Driven Maintenance | UReason WebinarData Driven Maintenance | UReason Webinar
Data Driven Maintenance | UReason Webinar
 
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELDEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL
 
原版制作(Humboldt毕业证书)柏林大学毕业证学位证一模一样
原版制作(Humboldt毕业证书)柏林大学毕业证学位证一模一样原版制作(Humboldt毕业证书)柏林大学毕业证学位证一模一样
原版制作(Humboldt毕业证书)柏林大学毕业证学位证一模一样
 
Design and optimization of ion propulsion drone
Design and optimization of ion propulsion droneDesign and optimization of ion propulsion drone
Design and optimization of ion propulsion drone
 
一比一原版(uofo毕业证书)美国俄勒冈大学毕业证如何办理
一比一原版(uofo毕业证书)美国俄勒冈大学毕业证如何办理一比一原版(uofo毕业证书)美国俄勒冈大学毕业证如何办理
一比一原版(uofo毕业证书)美国俄勒冈大学毕业证如何办理
 
An Introduction to the Compiler Designss
An Introduction to the Compiler DesignssAn Introduction to the Compiler Designss
An Introduction to the Compiler Designss
 
CEC 352 - SATELLITE COMMUNICATION UNIT 1
CEC 352 - SATELLITE COMMUNICATION UNIT 1CEC 352 - SATELLITE COMMUNICATION UNIT 1
CEC 352 - SATELLITE COMMUNICATION UNIT 1
 
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...Electric vehicle and photovoltaic advanced roles in enhancing the financial p...
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...
 
Embedded machine learning-based road conditions and driving behavior monitoring
Embedded machine learning-based road conditions and driving behavior monitoringEmbedded machine learning-based road conditions and driving behavior monitoring
Embedded machine learning-based road conditions and driving behavior monitoring
 
VARIABLE FREQUENCY DRIVE. VFDs are widely used in industrial applications for...
VARIABLE FREQUENCY DRIVE. VFDs are widely used in industrial applications for...VARIABLE FREQUENCY DRIVE. VFDs are widely used in industrial applications for...
VARIABLE FREQUENCY DRIVE. VFDs are widely used in industrial applications for...
 
AI + Data Community Tour - Build the Next Generation of Apps with the Einstei...
AI + Data Community Tour - Build the Next Generation of Apps with the Einstei...AI + Data Community Tour - Build the Next Generation of Apps with the Einstei...
AI + Data Community Tour - Build the Next Generation of Apps with the Einstei...
 
Computational Engineering IITH Presentation
Computational Engineering IITH PresentationComputational Engineering IITH Presentation
Computational Engineering IITH Presentation
 

Motion capture document

movement from each encoder (rotation and so on) is recorded by a computer which, knowing the relative positions of the encoders (and therefore of the joints), can rebuild the movements on screen using software. An offset is applied to each encoder, because it is very difficult to match its position exactly with that of the real joint (especially in the case of human movements).

3.1.1 ADVANTAGES AND DISADVANTAGES

This technique offers high precision, and it has the advantage of not being influenced by external factors (such as the quality or the number of cameras in optical mocap). The capture is, however, limited by the mechanical constraints of the encoders and the exoskeleton itself. It should also be noted that exoskeletons generally use wired connections between the encoders and the computer. It is, for example, much harder to move in a fairly heavy, wired exoskeleton than in a simple suit fitted with small reflective spheres: freedom of movement is rather limited.
The accuracy with which the movement is reproduced depends on the position of the encoders and on the modeling of the skeleton, and the exoskeleton must be fitted to each performer's morphology. The main drawback comes from the encoders themselves: however precise they are relative to one another, the system cannot locate the captured subject absolutely in space, so optical positioning methods must then be used to place the animation in a set. Finally, each object to be animated needs its own exoskeleton, and measuring the interaction of several exoskeletons is quite complicated, so a scene involving several people is very difficult to implement.

3.2 MAGNETIC MOTION CAPTURE

Magnetic motion capture relies on an electromagnetic field into which sensors, consisting of small electrical coils, are introduced. Each sensor is located in a three-axis (x, y, z) reference frame: the disturbance it creates in the field is picked up through an antenna, from which its position and orientation can be determined.

3.2.1 ADVANTAGES AND DISADVANTAGES
The advantage of this method is that the captured data are accurate and, apart from the position calculation itself, little further processing is needed. On the other hand, any metal object disturbs the magnetic field and distorts the data.

3.3 OPTICAL MOTION CAPTURE

Optical capture is based on filming with several synchronized cameras: combining the (x, y) coordinates of the same object seen from different angles makes it possible to deduce its (x, y, z) coordinates. The method has to deal with complex problems such as optical parallax and lens distortion, so the signal undergoes many interpolations; however, correct calibration of these parameters allows high accuracy in the collected data. The operating principle is similar to radar: the cameras emit radiation (usually red and/or infrared), which is reflected by the markers (whose surface is made of a highly reflective material) and returned to the same cameras. The cameras are sensitive to one wavelength and see the markers as white spots in the video (or in grayscale for the latest cameras). Cross-checking the information from each camera (therefore a minimum of two cameras) determines the position of the marker in virtual space.
4. METHODS AND SYSTEMS

Motion tracking, or motion capture, started as a photogrammetric analysis tool in biomechanics research in the 1970s and 1980s, and expanded into education, training, sports and, more recently, computer animation for television, cinema and video games as the technology matured. A performer wears markers near each joint so that the motion can be identified from the positions of, or the angles between, the markers. Acoustic, inertial, LED, magnetic or reflective markers, or combinations of these, are tracked, ideally at least twice the frequency of the desired motion, to sub-millimeter positions.

4.1 OPTICAL SYSTEMS

Optical systems use data captured from image sensors to triangulate the 3D position of a subject between one or more cameras calibrated to provide overlapping projections. Data acquisition is traditionally implemented using special markers attached to an actor; however, more recent systems can generate accurate data by tracking surface features identified dynamically for each particular subject. Tracking a large number of performers, or expanding the capture area, is accomplished by adding more cameras. These systems produce data with three degrees of freedom for each marker, and rotational information must be inferred from the relative orientation of three or more markers; for instance, shoulder, elbow and wrist markers provide the angle of the elbow.
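To make the last point concrete, the short Python sketch below (not taken from any particular mocap package; the marker coordinates are invented for illustration) computes the included angle at the elbow from shoulder, elbow and wrist marker positions. The same dot-product construction generalizes to any joint bracketed by three markers.

# Minimal sketch: inferring a joint angle from three optical markers.
# Marker positions are illustrative values, not real capture data.
import numpy as np

def elbow_angle(shoulder, elbow, wrist):
    """Return the included angle at the elbow, in degrees, from three 3D markers."""
    upper_arm = shoulder - elbow                  # vector elbow -> shoulder
    forearm = wrist - elbow                       # vector elbow -> wrist
    cos_angle = np.dot(upper_arm, forearm) / (np.linalg.norm(upper_arm) * np.linalg.norm(forearm))
    cos_angle = np.clip(cos_angle, -1.0, 1.0)     # guard against rounding error
    return np.degrees(np.arccos(cos_angle))

shoulder = np.array([0.0, 1.4, 0.0])
elbow    = np.array([0.0, 1.1, 0.1])
wrist    = np.array([0.2, 0.9, 0.3])
print(f"elbow angle: {elbow_angle(shoulder, elbow, wrist):.1f} deg")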
4.1.1 PASSIVE MARKERS

A dancer wearing a suit used in an optical motion capture system.

Figure 4.2 Several markers are placed at specific points on an actor's face during facial optical motion capture.

Passive optical systems use markers coated with a retro-reflective material to reflect back light that is generated near the camera lens. The camera's threshold can be adjusted so that only the bright reflective markers are sampled, ignoring skin and fabric. The centroid of the marker is estimated as a position within the captured two-dimensional image. The grayscale value of each pixel can be used to provide sub-pixel accuracy by finding the centroid of the Gaussian.
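As a rough illustration of the sub-pixel centroid idea just described, the following Python sketch thresholds a synthetic grayscale image and takes the intensity-weighted centroid of the bright blob; the image, the threshold and the blob parameters are all made up for the example.

# Minimal sketch of sub-pixel marker localisation: threshold a grayscale image
# and take the intensity-weighted centroid of the bright blob. Image is synthetic.
import numpy as np

def marker_centroid(image, threshold):
    """Intensity-weighted centroid (row, col) of the pixels above threshold."""
    mask = image > threshold
    rows, cols = np.nonzero(mask)
    weights = image[rows, cols].astype(float)
    r = np.sum(rows * weights) / np.sum(weights)
    c = np.sum(cols * weights) / np.sum(weights)
    return r, c

# Synthetic 2D Gaussian spot centred at (12.3, 20.7) on a 32 x 40 image.
rr, cc = np.mgrid[0:32, 0:40]
image = 255.0 * np.exp(-((rr - 12.3) ** 2 + (cc - 20.7) ** 2) / (2 * 2.0 ** 2))

print(marker_centroid(image, threshold=40))   # close to (12.3, 20.7)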
An object with markers attached at known positions is used to calibrate the cameras and obtain their positions, and the lens distortion of each camera is measured. Provided two calibrated cameras see a marker, a three-dimensional fix can be obtained. Typically a system will consist of around 6 to 24 cameras; systems of over three hundred cameras exist to try to reduce marker swap. Extra cameras are required for full coverage around the capture subject and for multiple subjects. Because all markers appear identical, vendors supply constraint software to reduce problems from marker swapping. Unlike active marker systems and magnetic systems, passive systems do not require the user to wear wires or electronic equipment. Instead, hundreds of rubber balls covered with reflective tape, which needs to be replaced periodically, are attached to the performer. The markers are usually attached directly to the skin (as in biomechanics) or velcroed to a performer wearing a full-body spandex/lycra suit designed specifically for motion capture. This type of system can capture large numbers of markers at frame rates as high as 2000 fps. The frame rate for a given system is often a balance between resolution and speed: a 4-megapixel system normally runs at 370 hertz, but can reduce the resolution to 0.3 megapixels and then run at 2000 hertz. Typical prices are around $100,000 for 4-megapixel, 360-hertz systems and $50,000 for 0.3-megapixel, 120-hertz systems.
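The "two calibrated cameras give a 3D fix" step mentioned above can be sketched with a standard linear (DLT) triangulation. The projection matrices and the test point below are illustrative values, not a real calibration; with more than two cameras, extra rows are simply appended to the same least-squares system.

# Minimal sketch of linear (DLT) triangulation from two calibrated cameras.
# P1 and P2 are illustrative 3x4 projection matrices, not a real calibration.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Triangulate one 3D point from its pixel coordinates in two views."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                         # back from homogeneous coordinates

# Two toy cameras: one at the origin and one translated along x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 3.0, 1.0])        # homogeneous 3D point
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))              # ~ [0.2, -0.1, 3.0]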
4.1.2 ACTIVE MARKER

Active optical systems triangulate positions by illuminating one LED at a time very quickly, or multiple LEDs with software to identify them by their relative positions, somewhat akin to celestial navigation. Rather than reflecting back light that is generated externally, the markers themselves are powered to emit their own light. Since the inverse-square law provides one quarter of the power at twice the distance, this can increase the distances and volume available for capture. An episode of the TV series Stargate SG-1 was produced using an active optical system for the VFX; the actor had to walk around props that would make motion capture difficult for other, non-active optical systems. ILM used active markers in Van Helsing to allow capture of the Harpies on very large sets. Power can be provided to each marker sequentially, in phase with the capture system, giving a unique identification of each marker for a given capture frame at a cost to the resultant frame rate. The ability to identify each marker in this manner is useful in real-time applications. The alternative method of identifying markers is to do it algorithmically, which requires extra processing of the data.

4.1.3 TIME MODULATED ACTIVE MARKER

Figure 4.3 A high-resolution active marker system with 3,600 × 3,600 resolution at 480 hertz, providing real-time sub-millimeter positions.
Active marker systems can be further refined by strobing one marker on at a time, or by tracking multiple markers over time and modulating the amplitude or pulse width to provide a marker ID. 12-megapixel spatial resolution modulated systems show more subtle movements than 4-megapixel optical systems by having both high resolution and high speed. These motion capture systems are typically under $50,000 for an eight-camera, 12-megapixel spatial resolution, 480-hertz system with one actor.

Figure 4.4 IR sensors can compute their location when lit by mobile multi-LED emitters, e.g. in a moving car. With an ID per marker, these sensor tags can be worn under clothing and tracked at 500 Hz in broad daylight.

4.1.4 SEMI-PASSIVE IMPERCEPTIBLE MARKER

One can reverse the traditional approach, which is based on high-speed cameras: such systems use inexpensive multi-LED high-speed projectors. The specially built multi-LED IR projectors optically encode the space. Instead of retro-reflective or active light-emitting-diode (LED) markers, the system uses photosensitive marker tags to decode the optical signals. By attaching tags with photo sensors to scene points, the tags can compute not only their own location at each point, but also their own orientation, incident illumination, and reflectance.
These tracking tags work in natural lighting conditions and can be imperceptibly embedded in attire or other objects. The system supports an unlimited number of tags in a scene, with each tag uniquely identified to eliminate marker reacquisition issues. Since the system eliminates the high-speed camera and the corresponding high-speed image stream, it requires significantly lower data bandwidth. The tags also provide incident illumination data, which can be used to match scene lighting when inserting synthetic elements. The technique appears ideal for on-set motion capture or real-time broadcasting of virtual sets, but has yet to be proven.

4.1.5 MARKERLESS

Emerging techniques and research in computer vision are leading to the rapid development of the markerless approach to motion capture. Markerless systems, such as those developed at Stanford, the University of Maryland, MIT, and the Max Planck Institute, do not require subjects to wear special equipment for tracking. Special computer algorithms are designed to allow the system to analyze multiple streams of optical input and identify human forms, breaking them down into constituent parts for tracking. Applications of this technology extend deeply into the popular imagination about the future of computing technology. Several commercial solutions for markerless motion capture have also been introduced; products include Microsoft's Kinect system for PC and console platforms.

4.2 NON-OPTICAL SYSTEMS

4.2.1 INERTIAL SYSTEMS
Inertial motion capture technology is based on miniature inertial sensors, biomechanical models and sensor fusion algorithms. The motion data from the inertial sensors (an inertial guidance system) is often transmitted wirelessly to a computer, where the motion is recorded or viewed. Most inertial systems use gyroscopes to measure rotational rates; these rotations are translated to a skeleton in the software. Much as with optical markers, the more gyros, the more natural the data. No external cameras, emitters or markers are needed for relative motions. Inertial mocap systems capture the full six degrees of freedom of a human's body motion in real time. Benefits of inertial systems include no solving, portability, and large capture areas; disadvantages include lower positional accuracy and positional drift, which can compound over time. These systems are similar to Wii controllers but are more sensitive and have greater resolution and update rates; they can measure the direction to the ground to within a degree. The popularity of inertial systems is rising among independent game developers, mainly because the quick and easy set-up results in a fast pipeline. A range of suits is now available from various manufacturers, with base prices ranging from $25,000 to $80,000 USD.

4.2.2 MECHANICAL MOTION

Mechanical motion capture systems directly track body joint angles and are often referred to as exoskeleton motion capture systems, because of the way the sensors are attached to the body. A performer attaches the skeletal-like structure to their body, and as they move, so do the articulated mechanical parts, measuring the performer's relative motion. Mechanical motion capture systems are real-time, relatively low-cost, occlusion-free and wireless (untethered) systems that have an unlimited capture volume. Typically, they are rigid structures of jointed, straight metal or plastic rods linked together with potentiometers that articulate at the joints of the body. These suits tend to be in the $25,000 to $75,000 range, plus an external absolute positioning system.
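A minimal sketch of how such joint-angle measurements can be turned back into a posture is a planar forward-kinematics chain, in which each encoder or potentiometer angle rotates the next segment. The segment lengths and angles below are illustrative; a real system works in 3D and uses a calibrated skeleton.

# Minimal 2D forward-kinematics sketch: joint angles measured by an
# exoskeleton's encoders are chained to reconstruct joint positions.
# Segment lengths and angles are illustrative values.
import math

def forward_kinematics(segment_lengths, joint_angles_deg):
    """Return the 2D positions of each joint of a planar chain.

    joint_angles_deg[i] is the rotation of segment i relative to segment i-1
    (the first angle is relative to the world x-axis).
    """
    x, y, heading = 0.0, 0.0, 0.0
    positions = [(x, y)]
    for length, angle in zip(segment_lengths, joint_angles_deg):
        heading += math.radians(angle)
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        positions.append((x, y))
    return positions

# Upper arm, forearm and hand of a planar arm (meters) with three joint angles.
print(forward_kinematics([0.30, 0.25, 0.08], [45.0, -30.0, 10.0]))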
4.2.3 MAGNETIC SYSTEMS

Magnetic systems calculate position and orientation from the relative magnetic flux of three orthogonal coils on both the transmitter and each receiver. The relative intensity of the voltage or current in the three coils allows these systems to calculate both range and orientation by meticulously mapping the tracking volume. The sensor output is 6DOF, which provides useful results with two-thirds the number of markers required in optical systems; for example, one marker on the upper arm and one on the lower arm give the elbow position and angle. The markers are not occluded by non-metallic objects, but they are susceptible to magnetic and electrical interference from metal objects in the environment, like rebar (steel reinforcing bars in concrete) or wiring, which affect the magnetic field, and from electrical sources such as monitors, lights, cables and computers. The sensor response is nonlinear, especially toward the edges of the capture area. The wiring from the sensors tends to preclude extreme performance movements, and the capture volumes for magnetic systems are dramatically smaller than for optical systems. Among magnetic systems there is a distinction between "AC" and "DC" systems: one uses square pulses, the other uses sine-wave pulses.

5. HUMAN MOCAP

The science of human motion analysis is fascinating because of its highly interdisciplinary nature and wide range of applications. Histories of science usually begin with the ancient Greeks, who first left a record of human inquiry concerning the nature of the world in relation to our powers of perception.
Aristotle (384-322 B.C.) might be considered the first biomechanician: he wrote the book 'De Motu Animalium' (On the Movement of Animals). He not only saw animals' bodies as mechanical systems, but pursued such questions as the physiological difference between imagining performing an action and actually doing it.

Figure 4.5

Nearly two thousand years later, in his famous anatomic drawings, Leonardo da Vinci (1452-1519) sought to describe the mechanics of standing, walking up and down hill, rising from a sitting position, and jumping. Galileo (1564-1643) followed a hundred years later with some of the earliest attempts to mathematically analyze physiologic function. Building on the work of Galilei, Borelli (1608-1679) worked out the forces required for equilibrium in the various joints of the human body well before Newton published the laws of motion. He also determined the position of the human center of gravity, calculated and measured inspired and expired air volumes, and showed that inspiration is muscle-driven and expiration is due to tissue elasticity. The early work of these pioneers of biomechanics was followed up by Newton (1642-1727), Bernoulli (1700-1782), Euler (1707-1783), Poiseuille (1799-1869), Young (1773-1829), and others of equal fame. Muybridge (1830-1904) was the first photographer to dissect human and animal motion (see the figure under the heading Human motion analysis).
This technique was first used scientifically by Marey (1830-1904), who correlated ground reaction forces with movement and pioneered modern motion analysis. In the 20th century, many researchers and (biomedical) engineers contributed to an increasing knowledge of human kinematics and kinetics. This section gives a short overview of the technologies used in these fields.

5.1 HUMAN MOTION ANALYSIS

Many different disciplines use motion analysis systems to capture the movement and posture of the human body. Basic scientists seek a better understanding of the mechanisms used to translate muscular contractions about articulating joints into functional accomplishment, e.g. walking. Increasingly, researchers endeavor to better appreciate the relationship between the human motor control system and gait dynamics.

Figure 5.1

Figure 5.2

In the realm of clinical gait analysis, medical professionals apply an evolving knowledge base to the interpretation of the walking patterns of impaired ambulators for the planning of treatment protocols, e.g. orthotic prescription and surgical intervention, and to determine the extent to which an individual's gait pattern has been affected by an already diagnosed disorder.
With respect to sports, athletes and their coaches use motion analysis techniques in a ceaseless quest for improvements in performance while avoiding injury. The use of motion capture for computer character animation or virtual reality (VR) applications is relatively new. The information captured can be as general as the position of the body in space or as complex as the deformations of the face and muscle masses. The mapping can be direct, such as human arm motion controlling a character's arm motion, or indirect, such as human hand and finger patterns controlling a character's skin color or emotional state. The idea of copying human motion for animated characters is, of course, not new. To get convincing motion for the human characters in Snow White, Disney studios traced animation over film footage of live actors playing out the scenes. This method, called rotoscoping, has been used successfully for human characters ever since. In the late '70s, when it began to be feasible to animate characters by computer, animators adapted traditional techniques, including rotoscoping. Generally, motion analysis data collection protocols, measurement precision, and data reduction models have been developed to meet the requirements of their specific settings. For example, sport assessments generally require higher data acquisition rates because velocities are higher than in normal walking. In VR applications, real-time tracking is essential for a realistic user experience, so the time lag should be kept to a minimum. Years of technological development have resulted in many systems, which can be categorized as mechanical, optical, magnetic, acoustic and inertial trackers. The human body is often considered as a system of rigid links connected by joints. Human body parts are not actually rigid structures, but they are customarily treated as such during studies of human motion.
Mechanical trackers utilize rigid or flexible goniometers which are worn by the user. Goniometers within the skeleton linkages have a general correspondence to the joints of the user. These angle-measuring devices provide joint angle data to kinematic algorithms, which are used to determine body posture. Attachment of the body-based linkages, as well as the positioning of the goniometers, presents several problems. The soft tissue of the body allows the position of the linkages relative to the body to change as motion occurs. Even without these changes, alignment of the goniometers with the body joints is difficult. This is especially true for multiple degree-of-freedom (DOF) joints, like the shoulder. Due to variations in anthropometric measurements, body-based systems must be recalibrated for each user.

Figure 5.3

Optical sensing encompasses a large and varied collection of technologies. Image-based systems determine position by using multiple cameras to track predetermined points (markers) on the subject's body segments, aligned with specific bony landmarks. Position is estimated through the use of multiple 2D images of the working volume. Stereometric techniques correlate common tracking points on the tracked objects in each image and use this information, along with knowledge about the relationship between each of the images and the camera parameters, to calculate position. The markers can be either passive (reflective) or active (light emitting).
Reflective systems use infrared (IR) LEDs mounted around the camera lens, along with IR pass filters placed over the lens, and measure the light reflected from the markers. Optical systems based on pulsed LEDs measure the infrared light emitted by the LEDs placed on the body segments. Camera tracking of natural objects without the aid of markers is also possible, but it is in general less accurate; it is largely based on computer vision techniques of pattern recognition and often requires high computational resources. Structured light systems use lasers or beamed light to create a plane of light that is swept across the image; they are more appropriate for mapping applications than for dynamic tracking of human body motion. Optical systems suffer from occlusion (line-of-sight) problems whenever a required light path is blocked. Interference from other light sources or reflections may also be a problem, which can result in so-called ghost markers.

Figure 5.4

Magnetic motion capture systems utilize sensors placed on the body to measure the low-frequency magnetic fields generated by a transmitter source. The transmitter source is constructed of three perpendicular coils that emit a magnetic field when a current is applied. The current is sent to these coils in a sequence that creates three mutually perpendicular fields during each measurement cycle. The 3D sensors measure the strength of those fields, which is proportional to the distance of each coil from the field emitter assembly.
The sensors and the source are connected to a processor that calculates the position and orientation of each sensor from its nine measured field values. Magnetic systems do not suffer from line-of-sight problems because the human body is transparent to the magnetic fields used. However, the shortcomings of magnetic tracking systems are directly related to the physical characteristics of magnetic fields: magnetic fields decrease in power rapidly as the distance from the generating source increases, and they can easily be disturbed by (ferro)magnetic materials within the measurement volume. Acoustic tracking systems use ultrasonic pulses and can determine position through either time-of-flight of the pulses and triangulation, or phase coherence. Both outside-in and inside-out implementations are possible, which means the transmitter can either be placed on a body segment or fixed in the measurement volume. The physics of sound limits the accuracy, update rate and range of acoustic tracking systems; a clear line of sight must be maintained, and tracking can be disturbed by reflections of the sound. Inertial sensors use the property of bodies to maintain constant translational and rotational velocity unless disturbed by forces or torques, respectively. The vestibular system, located in the inner ear, is a biological 3D inertial sensor: it can sense angular motion as well as linear acceleration of the head, and it is important for maintaining balance and stabilizing the eyes relative to the environment. Practical inertial tracking is made possible by advances in miniaturized and micromachined sensor technologies, particularly silicon accelerometers and rate sensors. A rate gyroscope measures angular velocity and, if integrated over time, provides the change in angle with respect to an initially known angle.
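A minimal sketch of that integration step, for a single axis: angular-rate samples are summed over time starting from a known angle. The sample stream below is synthetic, and the bias and noise that cause real-world drift are ignored.

# Minimal sketch of rate-gyro integration: angular-velocity samples are summed
# over time to track orientation about one axis. Samples are synthetic and the
# bias/noise that cause real-world drift are ignored here.
def integrate_gyro(rates_deg_per_s, dt, initial_angle_deg=0.0):
    """Rectangular integration of angular rate into an angle (degrees)."""
    angle = initial_angle_deg
    angles = [angle]
    for rate in rates_deg_per_s:
        angle += rate * dt
        angles.append(angle)
    return angles

# 1 s of samples at 100 Hz during a steady 90 deg/s turn -> ends near 90 deg.
samples = [90.0] * 100
print(integrate_gyro(samples, dt=0.01)[-1])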
Figure 5.5

An accelerometer measures accelerations, including the gravitational acceleration g. If the angle of the sensor with respect to the vertical is known, the gravity component can be removed and, by numerical integration, velocity and position can be determined. Noise and bias errors associated with small, inexpensive sensors make it impractical to track orientation and position over long periods if no compensation is applied. By combining the signals from the inertial sensors with aiding/complementary sensors and using knowledge about their signal characteristics, drift and other errors can be minimized.
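The gravity-removal and double-integration step just described can be sketched as follows for the simplest case of a vertical, non-rotating sensor. The data are synthetic, and the noise and bias problems mentioned above are deliberately ignored.

# Minimal sketch: remove gravity from vertical accelerometer samples (orientation
# assumed known, with the sensor z-axis vertical) and integrate twice to obtain
# velocity and position. Data are synthetic; sensor bias and drift are ignored.
import numpy as np

G = 9.81  # m/s^2

def integrate_acceleration(accel_with_gravity, dt):
    """Return (velocity, position) arrays from vertical accelerometer samples."""
    linear_acc = accel_with_gravity - G           # strip the gravity component
    velocity = np.cumsum(linear_acc) * dt         # first integration
    position = np.cumsum(velocity) * dt           # second integration
    return velocity, position

# 0.5 s of constant 1 m/s^2 upward motion measured on top of gravity, at 100 Hz.
samples = np.full(50, G + 1.0)
v, p = integrate_acceleration(samples, dt=0.01)
print(v[-1], p[-1])   # ~0.5 m/s and ~0.13 m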
5.2 AMBULATORY TRACKING

Commercial optical systems such as Vicon (reflective markers) or Optotrak (active markers) are often considered a 'standard' in human motion analysis. Although these systems provide accurate position information, there are some important limitations, the most important being high cost, occlusion problems and a limited measurement volume. The need for a specialized laboratory with fixed equipment impedes many applications, such as monitoring of daily-life activities, control of prosthetics or assessment of workload in ergonomic studies. In the past few years, the health-care system has tended toward early discharge, with patients monitored and trained in their own environment. This has promoted a large development of non-invasive, portable and wearable systems, and inertial sensors have been successfully applied to such clinical measurements outside the lab. It has also opened many possibilities for capturing motion data for athletes or for animation purposes without the need for a studio. The orientation obtained from present-day micromachined gyroscopes typically shows an error that grows by degrees per minute. For accurate and drift-free orientation estimation, Xsens has developed an algorithm that combines the signals from 3D gyroscopes, accelerometers and magnetometers. Accelerometers are used to determine the direction of the local vertical by sensing the acceleration due to gravity. Magnetic sensors provide stability in the horizontal plane by sensing the direction of the earth's magnetic field, like a compass. Data from these complementary sensors are used to eliminate drift by continuously correcting the orientation obtained from the angular rate sensor data. This combination is also known as an attitude and heading reference system (AHRS). For human motion tracking, inertial motion trackers are placed on each body segment to be tracked. The trackers give absolute orientation estimates, which are also used to calculate 3D linear accelerations in world coordinates, which in turn give translation estimates of the body segments.
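A simplified, static sketch of the two aiding measurements described above: the accelerometer gives the direction of the vertical, and the horizontal component of the magnetometer reading gives a heading. The sensor values are invented; a real AHRS additionally fuses the gyroscopes and handles motion and magnetic disturbances.

# Minimal static sketch of the two aiding measurements: accelerometer -> vertical,
# magnetometer -> heading. Values are illustrative; no motion or disturbance handling.
import numpy as np

def vertical_and_heading(accel, mag):
    """Return (up direction in the sensor frame, heading in degrees) for a static sensor."""
    up = accel / np.linalg.norm(accel)            # at rest the accelerometer measures the reaction to gravity
    mag_h = mag - np.dot(mag, up) * up            # horizontal part of the magnetic field
    north = mag_h / np.linalg.norm(mag_h)
    east = np.cross(north, up)
    forward = np.array([1.0, 0.0, 0.0])           # sensor x-axis
    fwd_h = forward - np.dot(forward, up) * up    # its horizontal projection
    heading = np.degrees(np.arctan2(np.dot(fwd_h, east), np.dot(fwd_h, north)))
    return up, heading % 360.0

accel = np.array([0.0, 0.0, 9.81])                # level sensor, z-axis up
mag = np.array([0.2, 0.3, -0.4])                  # arbitrary field with a northward component
print(vertical_and_heading(accel, mag))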
Figure 5.6

Since the rotation from sensor to body segment and its position with respect to the axes of rotation are initially unknown, a calibration procedure is necessary. An advanced articulated body model constrains the movements of the segments with respect to each other and eliminates any integration drift.

5.3 INERTIAL SENSORS

A single-axis accelerometer consists of a mass suspended by a spring in a housing. Springs (within their linear region) are governed by a physical principle known as Hooke's law, which states that a spring exhibits a restoring force proportional to the amount it has been expanded or compressed: F = kx, where k is the constant of proportionality between displacement x and force F. The other important physical principle is Newton's second law of motion, which states that a force acting on an accelerated mass has magnitude F = ma. This force causes the mass to either compress or extend the spring, under the constraint that F = ma = kx. Hence an acceleration a will cause the mass to be displaced by x = ma/k or, if we observe a displacement x, we know the mass has undergone an acceleration a = kx/m. In this way, the problem of measuring acceleration has been turned into one of measuring the displacement of a mass connected to a spring. In order to measure multiple axes of acceleration, this system needs to be duplicated along each of the required axes.
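A tiny worked example of the relation a = kx/m, with illustrative numbers rather than real device parameters:

# Tiny worked example of the spring-mass relation above: an observed proof-mass
# displacement x is converted to acceleration via a = k*x/m. Numbers are illustrative.
def acceleration_from_displacement(displacement_m, spring_constant, proof_mass_kg):
    """a = k*x/m for a single-axis spring-mass accelerometer."""
    return spring_constant * displacement_m / proof_mass_kg

# A 1 mg proof mass on a 0.5 N/m spring, displaced by 20 micrometers -> about 1 g.
print(acceleration_from_displacement(20e-6, 0.5, 1e-6), "m/s^2")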
Gyroscopes are instruments used to measure angular motion. There are two broad categories: (1) mechanical gyroscopes and (2) optical gyroscopes, and within both categories many different types are available.

Figure 5.7

The first mechanical gyroscope was built by Foucault in 1852, as a gimbaled wheel that stayed fixed in space due to angular momentum while the platform rotated around it. Mechanical gyroscopes operate on the basis of conservation of angular momentum, by sensing the change in direction of an angular momentum. According to Newton's second law, the angular momentum of a body will remain unchanged unless it is acted upon by a torque. The fundamental equation describing the behavior of the gyroscope is τ = dL/dt = d(Iω)/dt = Iα, where the vectors τ and L are the torque on the gyroscope and its angular momentum, respectively, the scalar I is its moment of inertia, the vector ω is its angular velocity, and the vector α is its angular acceleration.
Figure 5.8

Gimbaled and laser gyroscopes are not suitable for human motion analysis due to their large size and high cost. Over the last few years, micro-electromechanical systems (MEMS) inertial sensors have become more widely available. Vibrating-mass gyroscopes are small, inexpensive and have low power requirements, making them well suited to human movement analysis. A vibrating element (vibrating resonator), when rotated, is subjected to the Coriolis effect, which causes a secondary vibration orthogonal to the original vibrating direction. By sensing the secondary vibration, the rate of turn can be detected. The Coriolis force is given by F_c = -2m(ω × v), where m is the mass, v the momentary velocity of the mass relative to the moving object to which it is attached and ω the angular velocity of that object. Various micromachined geometries are available, many of which use the piezoelectric effect to excite and detect the vibration.
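The Coriolis relation can be evaluated directly; the mass, drive velocity and rotation rate below are illustrative only.

# Tiny sketch of the Coriolis relation used by vibrating-mass gyroscopes:
# F_c = -2 m (omega x v). Drive velocity and rotation rate are illustrative.
import numpy as np

def coriolis_force(mass_kg, omega_rad_s, velocity_m_s):
    """Coriolis force on a mass moving with velocity v in a frame rotating at omega."""
    return -2.0 * mass_kg * np.cross(omega_rad_s, velocity_m_s)

omega = np.array([0.0, 0.0, 1.0])      # 1 rad/s turn about the z-axis
v_drive = np.array([0.1, 0.0, 0.0])    # proof mass driven along x at 0.1 m/s
print(coriolis_force(1e-9, omega, v_drive))   # force appears along the y-axis, orthogonal to the drive motion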
5.4 SENSOR FUSION

The traditional application area of inertial sensors is navigation, as well as the guidance and stabilization of military systems. Position, velocity and attitude are obtained using accurate but large gyroscopes and accelerometers, in combination with other measurement devices such as GPS, radar or a barometric altimeter. Generally, the signals from these devices are fused using a Kalman filter to obtain the quantities of interest (see the figure below). The Kalman filter is useful for combining data from several different indirect and noisy measurements: it weights the sources of information appropriately, using knowledge of the signal characteristics based on their models, to make the best use of all the data from each of the sensors. There is no perfect sensor; each type has its strong and weak points. The idea behind sensor fusion is that the characteristics of one type of sensor are used to overcome the limitations of another. For example, magnetic sensors are used as a reference to prevent gyroscope integration drift about the vertical axis in the orientation estimates. However, iron and other magnetic materials will disturb the local magnetic field and, as a consequence, the orientation estimate. The spatial and temporal features of magnetic disturbances differ from those related to gyroscope drift errors, which makes it possible to separate the two.
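The fusion idea can be sketched with a much simpler relative of the Kalman filter, a complementary filter for a single tilt angle: the gyro provides a fast but drifting estimate and the accelerometer-derived angle slowly pulls it back. All signals below are synthetic.

# Minimal sketch of the fusion idea (not the full Kalman filter described above):
# a complementary filter for one tilt angle. The gyro gives a fast but drifting
# estimate; the accelerometer-derived angle slowly corrects it. Data are synthetic.
import random

def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """Fuse gyro rate (deg/s) and accelerometer angle (deg) samples into one angle."""
    angle = accel_angles[0]
    for rate, accel_angle in zip(gyro_rates, accel_angles):
        predicted = angle + rate * dt                           # fast inertial prediction
        angle = alpha * predicted + (1 - alpha) * accel_angle   # slow aiding correction
    return angle

# A sensor held still at 10 deg: the gyro has a 0.5 deg/s bias, the accelerometer is noisy.
dt, n = 0.01, 2000
gyro = [0.5] * n                                            # true rate is 0 deg/s plus a constant bias
accel = [10.0 + random.gauss(0.0, 0.5) for _ in range(n)]   # noisy angle samples
print(complementary_filter(gyro, accel, dt))                # stays near 10 deg; pure gyro integration would drift to ~20 deg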
Figure 5.9 Complementary Kalman filter structure for position and orientation estimation, combining inertial and aiding measurements.

The signals from the IMU (a − g and ω) provide the input for the INS. By double integration of the accelerations, the position is estimated at a high frequency. At a lower frequency, the aiding system provides position estimates. The difference between the inertial and aiding estimates is delivered to the Kalman filter which, based on the system model, estimates the propagation of the errors. The outputs of the filter are fed back to correct the position, velocity, acceleration and orientation estimates. Using this a priori knowledge, the effects of both drift and disturbances can be minimized.

The inertial sensors of an inertial navigation system (INS) can be mounted on a vehicle in such a way that they stay level and point in a fixed direction. Such a system relies on a set of gimbals and sensors attached on three axes to monitor the angles at all times. Another type of INS is the strapdown system, which eliminates the use of gimbals and is therefore suitable for human motion analysis. In this case, the gyros and accelerometers are mounted directly on the structure of the vehicle or strapped to the body segment. The measurements are made with reference to the local axes of roll, pitch, and heading (or yaw). The clinical reference system provides anatomically meaningful definitions of the main segmental movements (e.g. flexion-extension, abduction-adduction or supination-pronation).

Figure 5.10

Figure 5.11

6. ADVANTAGES

Motion capture offers several advantages over traditional computer animation of a 3D model:
 More rapid, even real-time, results can be obtained. In entertainment applications this can reduce the cost of keyframe-based animation (for example, Hand Over).
 The amount of work does not vary with the complexity or length of the performance to the same degree as with traditional techniques. This allows many tests to be done with different styles or deliveries.
 Complex movement and realistic physical interactions, such as secondary motions, weight and the exchange of forces, can easily be recreated in a physically accurate manner.
 The amount of animation data that can be produced within a given time is extremely large compared to traditional animation techniques. This contributes to both cost effectiveness and meeting production deadlines.
 The potential for free software and third-party solutions can reduce costs.

7. DISADVANTAGES

 Specific hardware and special programs are required to obtain and process the data.
 The cost of the software, equipment and personnel required can be prohibitive for small productions.
 The capture system may have specific requirements for the space it is operated in, depending on camera field of view or magnetic distortion.
 When problems occur, it is often easier to reshoot the scene than to try to manipulate the data. Only a few systems allow real-time viewing of the data to decide whether a take needs to be redone.
 The initial results are limited to what can be performed within the capture volume without extra editing of the data.
 Movement that does not follow the laws of physics generally cannot be captured.
 Traditional animation techniques, such as added emphasis on anticipation and follow-through, secondary motion, or manipulating the shape of the character, as with squash-and-stretch animation, must be added later.
 If the computer model has different proportions from the capture subject, artifacts may occur. For example, if a cartoon character has large, over-sized hands, these may intersect the character's body if the human performer is not careful with their physical motion.

8. APPLICATIONS

 Video games often use motion capture to animate athletes, martial artists, and other in-game characters. This has been done since the Atari Jaguar CD-based game Highlander: The Last of the MacLeods, released in 1995.
 Movies use motion capture for CG effects, in some cases replacing traditional cel animation, and for completely computer-generated creatures such as Jar Jar Binks, Gollum, The Mummy, King Kong, and the Na'vi from the film Avatar.
 Sinbad: Beyond the Veil of Mists was the first movie made primarily with motion capture, although many character animators also worked on the film.
 In producing entire feature films with computer animation, the industry is currently split between studios that use motion capture and studios that do not. Of the three nominees for the 2006 Academy Award for Best Animated Feature, two (Monster House and the winner, Happy Feet) used motion capture, and only Disney·Pixar's Cars was animated without it. In the ending credits of Pixar's film Ratatouille, a stamp appears labeling the film as "100% Pure Animation -- No Motion Capture!".
 Motion capture has begun to be used extensively to produce films which attempt to simulate or approximate the look of live-action cinema, with nearly photorealistic digital character models. The Polar Express used motion capture to allow Tom Hanks to perform as several distinct digital characters (for which he also provided the voices). The 2007 adaptation of the saga Beowulf animated digital characters whose appearances were based in part on the actors who provided their motions and voices. James Cameron's Avatar used this technique to create the Na'vi that inhabit Pandora. The Walt Disney Company announced that it would distribute Robert Zemeckis's A Christmas Carol and Tim Burton's Alice in Wonderland using this technique. Disney also acquired Zemeckis's ImageMovers Digital, which produces motion capture films.
 Television series produced entirely with motion capture animation include Laflaque in Canada, Sprookjesboom and Cafe de Wereld in the Netherlands, and Headcases in the UK.
 Virtual reality and augmented reality allow users to interact with digital content in real time. This can be useful for training simulations, visual perception tests, or performing virtual walk-throughs in a 3D environment. Motion capture technology is frequently used in digital puppetry systems to drive computer-generated characters in real time.
 Gait analysis is the major application of motion capture in clinical medicine. Techniques allow clinicians to evaluate human motion across several biomechanical factors, often while streaming this information live into analytical software.
 During the filming of James Cameron's Avatar, all of the scenes involving this process were directed in real time using a screen which converted the actor in the motion capture suit into what the character would look like in the movie, making it easier for Cameron to direct the movie as it would be seen by the viewer. This method allowed Cameron to view the scenes from many more views and angles than is possible with pre-rendered animation. He was so proud of his pioneering methods that he even invited Steven Spielberg and George Lucas on set to watch him in action.

9. CONCLUSION

Although motion capture requires some technical means, it is quite possible to do it yourself at home at reasonable cost and make your own short film. Motion capture is a major step forward in the field of cinema, as it makes reworking the image much simpler: it is easier to modify a captured performance than a classically filmed scene, although this remains expensive. It is also a major asset in medicine; for example, it can be used to measure the benefit of an operation by recording the patient's movement before and after it (as in the fitting of a prosthesis), or perhaps, in the future, simply as part of a routine medical examination.
10. REFERENCES

• http://en.wikipedia.org/wiki/Motion_capture
• http://www.siggraph.org/education/materials/HyperGraph/animation/character_animation/motion capture/history1.htm
• http://www.metamotion.com/motion-capture/motion-capture.htm
• http://accad.osu.edu/research/mocap/mocap_home.htm
• http://instruct1.cit.cornell.edu/courses/ee476/FinalProjects/s2005/Motion_Capture_KHY6_DCL34/Motion_Capture.htm
• http://web.mit.edu/comm-forum/papers/furniss.html
• http://www.mastudios.com/index.html
• http://www.youtube.com/watch?v=V0yT8mwg9n
• http://www.postmagazine.com/ME2/dirmod.asp?sid=&nm=&type=Publishing&mod=Publications::Article&mid=8F3A7027421841978F18BE895F87F791&tier=4&id=C715B81DD6674D62BD666D304D2E8D0B