Motion Capture Technology: Document Transcript

MOTION CAPTURE TECHNOLOGY

A technical seminar report submitted in partial fulfillment of the requirement for the award of the graduate degree
BACHELOR OF TECHNOLOGY IN ELECTRONICS AND COMMUNICATIONS

Submitted by
S. SRIKANTH (09311A0431)

Department of Electronics and Communication Engineering
Sreenidhi Institute of Science and Technology
Yamnampet, Ghatkesar, Hyderabad-501301
2012-2013
SREENIDHI INSTITUTE OF SCIENCE & TECHNOLOGY
Yamnampet, Ghatkesar, Hyderabad-501301
DEPARTMENT OF ELECTRONICS & COMMUNICATION ENGINEERING

CERTIFICATE

This is to certify that the technical seminar entitled "MOTION CAPTURE TECHNOLOGY" given by S. SRIKANTH (09311A0431) is in partial fulfillment of the requirement for the award of the graduate degree of BACHELOR OF TECHNOLOGY in ELECTRONICS AND COMMUNICATION at the Sreenidhi Institute of Science and Technology.

(Signature)                              (Signature)
Seminar Coordinator                      Dr. S. P. Venu Madhava Rao
Mrs. V. Sudha Rani,                      Head of the Department,
Associate Professor,                     Department of E.C.E.,
Department of E.C.E., SNIST.             SNIST.
ABSTRACT

Motion capture, motion tracking, or mocap are terms used to describe the process of recording movement and translating that movement onto a digital model. It is used in military, entertainment, sports and medical applications. In film making it refers to recording the actions of human actors and using that information to animate digital character models in 3D animation. When it includes the face and fingers and captures subtle expressions, it is often referred to as performance capture.

Motion capture is the recording of human body movements (or other movements) for immediate or delayed analysis and playback. The information captured can be as general as the simple position of the body in space or as complex as the deformations of the face and muscle masses. Motion capture for computer character animation involves the mapping of human motion onto the motion of a computer character. The mapping can be direct, such as human arm motion controlling a character's arm motion, or indirect, such as human hand and finger patterns controlling a character's skin color or emotional state. The end product gives the effect of an animated character acting directly with human actors.

Motion capture techniques are very effective, but the computer processing needs much human intervention, and if there is an error in the data it is often easier to re-shoot the whole scene than to correct the data. However, motion capture technology is so much more effective and realistic than traditional techniques, and ultimately less time consuming, that its future looks assured in movies and in video games.
TABLE OF CONTENTS

1. Introduction
2. History of Motion Capture
3. Different Types of Motion Capture
   3.1 Mechanical Motion Capture
   3.2 Magnetic Motion Capture
   3.3 Optical Motion Capture
4. Methods and Systems
   4.1 Optical
   4.2 Non-Optical
5. Human Mocap
6. Advantages
7. Disadvantages
8. Applications
9. Conclusion
10. References
1. Introduction

Motion capture, motion tracking, or mocap are terms used to describe the process of recording movement and translating that movement onto a digital model. It is used in military, entertainment, sports and medical applications, and for the validation of computer vision and robotics. In filmmaking it refers to recording the actions of human actors and using that information to animate digital character models in 2D or 3D computer animation. When it includes the face and fingers and captures subtle expressions, it is often referred to as performance capture. In motion capture sessions, the movements of one or more actors are sampled many times per second; with most techniques (recent developments from Weta use images for 2D motion capture and project into 3D), motion capture records only the movements of the actor, not his or her visual appearance. This animation data is mapped to a 3D model so that the model performs the same actions as the actor. This is comparable to the older technique of rotoscoping, as in the 1978 animated film The Lord of the Rings, where the motion of an actor was filmed and the footage was then used as a frame-by-frame guide for the motion of a hand-drawn animated character.

2. History of motion capture

The use of motion capture to animate characters on computers is relatively recent: it started in the 1970s and is only now beginning to spread. Motion capture is the recording of the movements of the human body (or any other movement) for immediate or delayed analysis and playback. The captured information can be as simple as the position of the body in space or as complex as a capture of the face and the deformation of the muscles. The captured motion can be exported to various formats such as BVH, BIP and FBX, which can be used to animate 3D characters in packages such as 3ds Max, Maya, Poser, iClone and Blender. Motion capture for animation is the superposition of human movement onto a virtual character. The mapping can be direct, such as the motion of a human arm driving the arm of a virtual character, or indirect, such as a hand gesture driving an effect of light or color. The idea of copying human motion is of course not new. To obtain the most convincing human movement in "Snow White", Disney studios traced the animation over film of real actors. This method, called rotoscoping, has been used successfully ever since. In the 1970s, when it became possible to animate characters by computer, animators adapted traditional techniques, including rotoscoping.
Today, capture technology is mature and diverse, and it can be classified into three broad categories:
• Mechanical motion capture
• Optical motion capture
• Magnetic motion capture
Although these techniques are effective, they still have some drawbacks (weight, cost, and so on). There is little doubt, however, that motion capture will become one of the basic tools of animation.

3. The different types of motion capture

3.1 Mechanical motion capture
This technique of motion capture is achieved through the use of an exoskeleton. Each joint of the exoskeleton is connected to an angular encoder. The value reported by each encoder (rotation, etc.) is recorded by a computer which, knowing the relative positions of the encoders (and therefore of the joints), can rebuild the movements on screen using software. An offset is applied to each encoder, because it is very difficult to match its position exactly with that of the real joint (especially in the case of human movements).

Figure 3.1
3.1.1 Advantages and Disadvantages
This technique offers high precision, and it has the advantage of not being influenced by external factors (such as the quality or the number of cameras in optical mocap). But the capture is limited by the mechanical constraints of the encoders and the exoskeleton. It should also be noted that exoskeletons generally use wired connections between the encoders and the computer. It is much more difficult to move wearing a fairly heavy exoskeleton trailing a large number of wires than wearing small reflective spheres: the freedom of movement is rather limited.

The accuracy of the reproduced movement depends on the positions of the encoders and on the modeling of the skeleton. The exoskeleton must be fitted to each performer's morphology. The big disadvantage comes from the encoders themselves: however precise they are relative to one another, they cannot locate the captured object absolutely in space, so optical positioning methods must still be used to place the animation in a set. Finally, each object to be animated needs its own exoskeleton, and it is quite complicated to measure the interaction of several exoskeletons, so a scene involving several people is very difficult to implement.

3.2 The magnetic motion capture
Magnetic motion capture is performed inside an electromagnetic field into which sensors, which are electrical coils, are introduced. The sensors are located in a reference frame with three axes x, y and z. Their position is determined by sensing, through an antenna, the disturbance each sensor creates in the capture field, from which its orientation can then also be derived.

Figure 3.2
3.2.1 Advantages and disadvantages
The advantage of this method is that the captured data are accurate and, apart from the position calculation itself, no further processing is needed. However, any metal object disturbs the magnetic field and distorts the data.

3.3 Optical Motion Capture
The capture is based on optical shooting by several synchronized cameras; combining the (x, y) coordinates of the same object seen from different angles allows its (x, y, z) coordinates to be deduced. This method involves dealing with complex problems such as optical parallax, distortion of the lenses used, and so on. The signal therefore undergoes many interpolations. However, a correct calibration of these parameters allows a high accuracy of the collected data.

Figure 3.3

The operating principle is similar to radar: the cameras emit radiation (usually red and/or infrared) which is reflected by the markers (whose surface is made of an ultra-reflective material) and returned to the same cameras. The cameras are sensitive to one wavelength band and see the markers as white spots in the video (or grayscale spots for the latest cameras). Cross-checking the information from each camera (therefore a minimum of two cameras) determines the position of the marker in virtual space.
4. Methods and Systems
Motion tracking or motion capture started as a photogrammetric analysis tool in biomechanics research in the 1970s and 1980s, and expanded into education, training, sports and, more recently, computer animation for television, cinema and video games as the technology matured. A performer wears markers near each joint to identify the motion by the positions or angles between the markers. Acoustic, inertial, LED, magnetic or reflective markers, or combinations of any of these, are tracked, optimally at least at twice the frequency of the desired motion, to submillimeter positions.

4.1 OPTICAL SYSTEMS
Optical systems utilize data captured from image sensors to triangulate the 3D position of a subject between two or more cameras calibrated to provide overlapping projections. Data acquisition is traditionally implemented using special markers attached to an actor; however, more recent systems are able to generate accurate data by tracking surface features identified dynamically for each particular subject. Tracking a large number of performers or expanding the capture area is accomplished by adding more cameras. These systems produce data with three degrees of freedom for each marker, and rotational information must be inferred from the relative orientation of three or more markers; for instance, shoulder, elbow and wrist markers provide the angle of the elbow.
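As a simple illustration of how rotational information is inferred from marker positions, the following sketch computes the elbow angle from shoulder, elbow and wrist marker coordinates. The marker positions are made-up values for illustration, not output from any particular system.

```python
import numpy as np

def joint_angle(shoulder, elbow, wrist):
    """Angle at the elbow, in degrees, from three 3D marker positions."""
    upper = np.asarray(shoulder, float) - np.asarray(elbow, float)  # elbow -> shoulder
    fore = np.asarray(wrist, float) - np.asarray(elbow, float)      # elbow -> wrist
    cos_a = np.dot(upper, fore) / (np.linalg.norm(upper) * np.linalg.norm(fore))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Hypothetical marker coordinates in metres (shoulder, elbow, wrist)
print(joint_angle([0.0, 1.4, 0.0], [0.3, 1.1, 0.0], [0.6, 1.3, 0.1]))
```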
4.1.1 PASSIVE MARKERS

Figure 4.1 A dancer wearing a suit used in an optical motion capture system
Figure 4.2 Several markers are placed at specific points on an actor's face during facial optical motion capture

Passive optical systems use markers coated with a retroreflective material to reflect back light that is generated near the camera's lens. The camera's threshold can be adjusted so that only the bright reflective markers are sampled, ignoring skin and fabric.

The centroid of the marker is estimated as a position within the two-dimensional image that is captured. The grayscale value of each pixel can be used to provide sub-pixel accuracy by finding the centroid of the Gaussian.

An object with markers attached at known positions is used to calibrate the cameras and obtain their positions, and the lens distortion of each camera is measured. Provided two calibrated cameras see a marker, a three-dimensional fix can be obtained.
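A minimal sketch of the sub-pixel centroid idea described above, assuming a small grayscale patch in which one marker appears as a bright blob (the pixel values are illustrative):

```python
import numpy as np

def subpixel_centroid(patch):
    """Intensity-weighted centroid of a grayscale patch, giving a sub-pixel (row, col)."""
    patch = np.asarray(patch, float)
    rows, cols = np.indices(patch.shape)
    total = patch.sum()
    return (rows * patch).sum() / total, (cols * patch).sum() / total

# Illustrative 5x5 patch with a bright marker blob slightly off the patch centre
blob = np.array([[0,  1,  2,  1, 0],
                 [1, 10, 40, 12, 1],
                 [2, 42, 90, 45, 2],
                 [1, 12, 44, 14, 1],
                 [0,  1,  2,  1, 0]])
print(subpixel_centroid(blob))  # about (2.02, 2.02): just below and right of pixel (2, 2)
```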
Typically a system will consist of around 6 to 24 cameras. Systems of over three hundred cameras exist to try to reduce marker swap. Extra cameras are required for full coverage around the capture subject and for multiple subjects.

Vendors provide constraint software to reduce problems from marker swapping, since all markers appear identical. Unlike active marker systems and magnetic systems, passive systems do not require the user to wear wires or electronic equipment. Instead, hundreds of rubber balls covered with reflective tape, which needs to be replaced periodically, are attached to the performer. The markers are usually attached directly to the skin (as in biomechanics), or they are velcroed to a performer wearing a full-body spandex/lycra suit designed specifically for motion capture. This type of system can capture large numbers of markers at frame rates as high as 2000 fps. The frame rate for a given system is often a trade-off between resolution and speed: a 4-megapixel system normally runs at 370 Hz, but can reduce the resolution to 0.3 megapixels and then run at 2000 Hz. Typical prices are $100,000 for 4-megapixel, 360 Hz systems and $50,000 for 0.3-megapixel, 120 Hz systems.

4.1.2 ACTIVE MARKER
Active optical systems triangulate positions by illuminating one LED at a time very quickly, or multiple LEDs with software to identify them by their relative positions, somewhat akin to celestial navigation. Rather than reflecting back light that is generated externally, the markers themselves are powered to emit their own light. Since the inverse square law provides one quarter of the power at twice the distance, this can increase the distances and volume available for capture.

An episode of the TV series Stargate SG-1 was produced using an active optical system for the VFX. The actor had to walk around props that would make motion capture difficult for other, non-active optical systems.

ILM used active markers in Van Helsing to allow capture of the Harpies on very large sets. The power to each marker can be provided sequentially in phase with the capture system, providing a unique identification of each marker for a given capture frame, at a cost to the resultant frame rate. The ability to identify each marker in this manner is useful in real-time applications. The alternative method of identifying markers is to do it algorithmically, requiring extra processing of the data.
4.1.3 TIME MODULATED ACTIVE MARKER

Figure 4.3 A high-resolution active marker system with 3,600 × 3,600 resolution at 480 Hz providing real-time submillimeter positions

Active marker systems can be further refined by strobing one marker on at a time, or by tracking multiple markers over time and modulating the amplitude or pulse width to provide a marker ID. 12-megapixel spatial resolution modulated systems show more subtle movements than 4-megapixel optical systems because they have both higher spatial and higher temporal resolution. Directors can see the actor's performance in real time and watch the results on the mocap-driven CG character. The unique marker IDs reduce turnaround by eliminating marker swapping and provide much cleaner data than other technologies. LEDs with onboard processing and radio synchronization allow motion capture outdoors in direct sunlight, while capturing at 480 frames per second thanks to a high-speed electronic shutter. Computer processing of the modulated IDs allows less hand cleanup or filtered results, for lower operational costs. This higher accuracy and resolution requires more processing than passive technologies, but the additional processing is done at the camera to improve resolution via sub-pixel or centroid processing, providing both high resolution and high speed. These motion capture systems are typically under $50,000 for an eight-camera, 12-megapixel spatial resolution, 480 Hz system with one actor.
Figure 4.4 IR sensors can compute their location when lit by mobile multi-LED emitters, e.g. in a moving car. With an ID per marker, these sensor tags can be worn under clothing and tracked at 500 Hz in broad daylight.

4.1.4 SEMI-PASSIVE IMPERCEPTIBLE MARKER
One can reverse the traditional approach based on high-speed cameras. Such systems use inexpensive multi-LED high-speed projectors. The specially built multi-LED IR projectors optically encode the space. Instead of retroreflective or active light-emitting diode (LED) markers, the system uses photosensitive marker tags to decode the optical signals. By attaching tags with photo sensors to scene points, the tags can compute not only their own locations but also their own orientation, incident illumination and reflectance.

These tracking tags work in natural lighting conditions and can be imperceptibly embedded in attire or other objects. The system supports an unlimited number of tags in a scene, with each tag uniquely identified to eliminate marker reacquisition issues. Since the system eliminates a high-speed camera and the corresponding high-speed image stream, it requires significantly lower data bandwidth. The tags also provide incident illumination data, which can be used to match scene lighting when inserting synthetic elements. The technique appears ideal for on-set motion capture or real-time broadcasting of virtual sets, but has yet to be proven.

4.1.5 MARKERLESS
Emerging techniques and research in computer vision are leading to the rapid development of the markerless approach to motion capture. Markerless systems, such as those developed at Stanford, the University of Maryland, MIT and the Max Planck Institute, do not require subjects to wear special equipment for tracking. Special computer algorithms are designed to allow the system to analyze multiple streams of optical input and identify human forms, breaking them down into constituent parts for tracking.
Applications of this technology extend deeply into the popular imagination about the future of computing technology. Several commercial solutions for markerless motion capture have also been introduced; products currently under development include Microsoft's Kinect system for PC and console systems.

4.2 NON-OPTICAL SYSTEMS

4.2.1 INERTIAL SYSTEMS
Inertial motion capture technology is based on miniature inertial sensors, biomechanical models and sensor fusion algorithms. The motion data of the inertial sensors (inertial guidance system) is often transmitted wirelessly to a computer, where the motion is recorded or viewed. Most inertial systems use gyroscopes to measure rotational rates. These rotations are translated to a skeleton in the software. Much as with optical markers, the more gyros, the more natural the data. No external cameras, emitters or markers are needed for relative motions. Inertial mocap systems capture the full six-degrees-of-freedom body motion of a human in real time. Benefits of using inertial systems include no solving, portability and large capture areas. Disadvantages include lower positional accuracy and positional drift, which can compound over time.

These systems are similar to Wii controllers but are more sensitive and have greater resolution and update rates. They can accurately measure the direction to the ground to within a degree. The popularity of inertial systems is rising among independent game developers, mainly because of the quick and easy setup, which results in a fast pipeline. A range of suits is now available from various manufacturers, with base prices ranging from $25,000 to $80,000 USD.

4.2.2 MECHANICAL MOTION
Mechanical motion capture systems directly track body joint angles and are often referred to as exoskeleton motion capture systems, due to the way the sensors are attached to the body. A performer attaches the skeletal-like structure to their body, and as they move so do the articulated mechanical parts, measuring the performer's relative motion. Mechanical motion capture systems are real-time, relatively low-cost, free of occlusion and wireless (untethered) systems
that have an unlimited capture volume. Typically, they are rigid structures of jointed, straight metal or plastic rods linked together with potentiometers that articulate at the joints of the body. These suits tend to be in the $25,000 to $75,000 range, plus an external absolute positioning system.

4.2.3 MAGNETIC SYSTEMS
Magnetic systems calculate position and orientation from the relative magnetic flux of three orthogonal coils on both the transmitter and each receiver. The relative intensity of the voltage or current of the three coils allows these systems to calculate both range and orientation by meticulously mapping the tracking volume. The sensor output is 6DOF, which provides useful results with two-thirds the number of markers required in optical systems; one marker on the upper arm and one on the lower arm suffice for elbow position and angle. The markers are not occluded by nonmetallic objects, but they are susceptible to magnetic and electrical interference from metal objects in the environment, such as rebar (steel reinforcing bars in concrete) or wiring, which affect the magnetic field, and from electrical sources such as monitors, lights, cables and computers. The sensor response is nonlinear, especially toward the edges of the capture area. The wiring from the sensors tends to preclude extreme performance movements. The capture volumes for magnetic systems are dramatically smaller than they are for optical systems. Among magnetic systems there is a distinction between "AC" and "DC" systems: one uses square pulses, the other uses sine-wave pulses.

5. HUMAN MOCAP
The science of human motion analysis is fascinating because of its highly interdisciplinary nature and wide range of applications. Histories of science usually begin with the ancient Greeks, who first left a record of human inquiry concerning the nature of the world in relationship to our powers of perception. Aristotle (384-322 B.C.) might be considered the first biomechanician. He wrote the book called 'De Motu Animalium' (On the Movement of Animals). He not only saw
animals' bodies as mechanical systems, but also pursued such questions as the physiological difference between imagining performing an action and actually doing it.

Figure 4.5

Nearly two thousand years later, in his famous anatomic drawings, Leonardo da Vinci (1452-1519) sought to describe the mechanics of standing, walking up and down hill, rising from a sitting position, and jumping. Galileo (1564-1642) followed a hundred years later with some of the earliest attempts to mathematically analyze physiologic function. Building on the work of Galilei, Borelli (1608-1679) worked out the forces required for equilibrium in various joints of the human body well before Newton published the laws of motion. He also determined the position of the human center of gravity, calculated and measured inspired and expired air volumes, and showed that inspiration is muscle-driven while expiration is due to tissue elasticity. The early work of these pioneers of biomechanics was followed up by Newton (1642-1727), Bernoulli (1700-1782), Euler (1707-1783), Poiseuille (1799-1869), Young (1773-1829) and others of equal fame. Muybridge (1830-1904) was the first photographer to dissect human and animal motion (see the figure under the heading Human motion analysis). This technique was first used scientifically by Marey (1830-1904), who correlated ground reaction forces with movement and pioneered modern motion analysis. In the 20th century, many researchers and (biomedical) engineers contributed to an increasing knowledge of human kinematics and kinetics. This report gives a short overview of the technologies used in these fields.

5.1 Human motion analysis
Many different disciplines use motion analysis systems to capture movement and posture of the human body. Basic scientists seek a better understanding of the mechanisms that are used to translate muscular contractions about articulating joints into functional accomplishment, e.g.
walking. Increasingly, researchers endeavor to better appreciate the relationship between the human motor control system and gait dynamics.

Figure 5.1   Figure 5.2

In the realm of clinical gait analysis, medical professionals apply an evolving knowledge base to the interpretation of the walking patterns of impaired ambulators, for the planning of treatment protocols such as orthotic prescription and surgical intervention, and to allow the clinician to determine the extent to which an individual's gait pattern has been affected by an already diagnosed disorder. With respect to sports, athletes and their coaches use motion analysis techniques in a ceaseless quest for improvements in performance while avoiding injury. The use of motion capture for computer character animation or virtual reality (VR) applications is relatively new. The information captured can be as general as the position of the body in space or as complex as the deformations of the face and muscle masses. The mapping can be direct, such as human arm motion controlling a character's arm motion, or indirect, such as human hand and finger patterns controlling a character's skin color or emotional state. The idea of copying human motion for animated characters is, of course, not new. To get convincing motion for the human characters in Snow White, Disney studios traced animation over film footage of live actors playing out the scenes. This method, called rotoscoping, has been used successfully for human characters ever since. In the late 1970s, when it became feasible to animate characters by computer, animators adapted traditional techniques, including rotoscoping.

Generally, motion analysis data collection protocols, measurement precision and data reduction models have been developed to meet the requirements of their specific settings. For example, sports assessments generally require higher data acquisition rates because of the increased velocities compared to normal walking. In VR applications, real-time tracking is essential for a realistic experience for the user, so the time lag should be kept to a minimum. Years of technological
development has resulted in many systems, which can be categorized into mechanical, optical, magnetic, acoustic and inertial trackers. The human body is often considered as a system of rigid links connected by joints. Human body parts are not actually rigid structures, but they are customarily treated as such during studies of human motion.

Mechanical trackers utilize rigid or flexible goniometers which are worn by the user. Goniometers within the skeleton linkages have a general correspondence to the joints of the user. These angle-measuring devices provide joint angle data to kinematic algorithms which are used to determine body posture. Attachment of the body-based linkages, as well as the positioning of the goniometers, presents several problems. The soft tissue of the body allows the position of the linkages relative to the body to change as motion occurs. Even without these changes, alignment of the goniometer with the body joints is difficult. This is especially true for multiple degree of freedom (DOF) joints, like the shoulder. Due to variations in anthropometric measurements, body-based systems must be recalibrated for each user.

Figure 5.3

Optical sensing encompasses a large and varying collection of technologies. Image-based systems determine position by using multiple cameras to track predetermined points (markers) on the subject's body segments, aligned with specific bony landmarks. Position is estimated through the use of multiple 2D images of the working volume. Stereometric techniques correlate common tracking points on the tracked objects in each image and use this information, along with knowledge concerning the relationship between each of the images and the camera parameters, to calculate position. The markers can be either passive (reflective) or active (light-emitting). Reflective systems use infrared (IR) LEDs mounted around the camera lens, along with IR pass filters placed over the camera lens, and measure the light reflected from the markers. Optical systems based on pulsed LEDs measure the infrared light emitted by the LEDs placed on the
body segments. Camera tracking of natural objects without the aid of markers is also possible, but it is in general less accurate. It is largely based on computer vision techniques of pattern recognition and often requires high computational resources. Structured light systems use lasers or beamed light to create a plane of light that is swept across the image. They are more appropriate for mapping applications than for dynamic tracking of human body motion. Optical systems suffer from occlusion (line of sight) problems whenever a required light path is blocked. Interference from other light sources or reflections may also be a problem, which can result in so-called ghost markers.

Figure 5.4

Magnetic motion capture systems utilize sensors placed on the body to measure the low-frequency magnetic fields generated by a transmitter source. The transmitter source is constructed of three perpendicular coils that emit a magnetic field when a current is applied. The current is sent to these coils in a sequence that creates three mutually perpendicular fields during each measurement cycle. The 3D sensors measure the strength of those fields, which is proportional to the distance of each coil from the field emitter assembly. The sensors and source are connected to a processor that calculates the position and orientation of each sensor based on its nine measured field values. Magnetic systems do not suffer from line of sight problems, because the human body is transparent to the magnetic fields used. However, the shortcomings of magnetic tracking systems are directly related to the physical characteristics of magnetic fields. Magnetic fields decrease in power rapidly as the distance from the generating source increases, and they can easily be disturbed by (ferro)magnetic materials within the measurement volume.

Acoustic tracking systems use ultrasonic pulses and can determine position through
either time-of-flight of the pulses and triangulation, or phase coherence. Both outside-in and inside-out implementations are possible, which means the transmitter can either be placed on a body segment or fixed in the measurement volume. The physics of sound limits the accuracy, update rate and range of acoustic tracking systems. A clear line of sight must be maintained, and tracking can be disturbed by reflections of the sound.

Inertial sensors use the property of bodies to maintain constant translational and rotational velocity unless disturbed by forces or torques, respectively. The vestibular system, located in the inner ear, is a biological 3D inertial sensor. It can sense angular motion as well as linear acceleration of the head. The vestibular system is important for maintaining balance and stabilization of the eyes relative to the environment. Practical inertial tracking is made possible by advances in miniaturized and micromachined sensor technologies, particularly in silicon accelerometers and rate sensors. A rate gyroscope measures angular velocity which, if integrated over time, provides the change in angle with respect to an initially known angle.

Figure 5.5

An accelerometer measures accelerations, including the gravitational acceleration g. If the angle of the sensor with respect to the vertical is known, the gravity component can be removed and, by numerical integration, velocity and position can be determined. Noise and bias errors associated with small and inexpensive sensors make it impractical to track orientation and position for long periods if no compensation is applied. By combining the signals from the inertial sensors with aiding/complementary sensors, and using knowledge about their signal characteristics, drift and other errors can be minimized.
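A minimal one-dimensional sketch of these two integration steps, assuming ideal noise-free samples at 100 Hz (the sample rate and sensor values are illustrative assumptions, not taken from any particular product):

```python
import math

DT = 0.01   # sample interval in seconds (assumed 100 Hz)
G = 9.81    # gravitational acceleration, m/s^2

def integrate_imu(gyro_rates, accels, theta0=0.0):
    """1D example: integrate angular rate to angle, use that angle to remove gravity,
    then double-integrate the remaining acceleration to velocity and position."""
    theta, vel, pos = theta0, 0.0, 0.0
    for w, a in zip(gyro_rates, accels):
        theta += w * DT                      # rate gyro -> angle
        lin_a = a - G * math.cos(theta)      # remove the gravity component along the sensor axis
        vel += lin_a * DT                    # acceleration -> velocity
        pos += vel * DT                      # velocity -> position
    return theta, vel, pos

# Sensor held still and level: the gyro reads zero, the accelerometer reads only gravity,
# so the integrated angle, velocity and position all stay at zero.
print(integrate_imu([0.0] * 100, [G] * 100))
```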
5.2 Ambulatory tracking
Commercial optical systems such as Vicon (reflective markers) or Optotrak (active markers) are often considered a 'standard' in human motion analysis. Although these systems provide accurate position information, there are some important limitations. The most important factors are the high cost, occlusion problems and the limited measurement volume. The need for a specialized laboratory with fixed equipment impedes many applications, like monitoring of daily-life activities, control of prosthetics or assessment of workload in ergonomic studies. In the past few years, the health care system has trended toward early discharge, with monitoring and training of patients in their own environment. This has prompted a large development of non-invasive portable and wearable systems. Inertial sensors have been successfully applied for such clinical measurements outside the lab. Moreover, this has opened many possibilities to capture motion data for athletes or for animation purposes without the need for a studio.

The orientation obtained from present-day micromachined gyroscopes typically shows an error that grows by degrees per minute. For accurate and drift-free orientation estimation, Xsens has developed an algorithm to combine the signals from 3D gyroscopes, accelerometers and magnetometers. Accelerometers are used to determine the direction of the local vertical by sensing the acceleration due to gravity. Magnetic sensors provide stability in the horizontal plane by sensing the direction of the earth's magnetic field, like a compass. Data from these complementary sensors are used to eliminate drift by continuously correcting the orientation obtained from the angular rate sensor data. This combination is also known as an attitude and heading reference system (AHRS).

For human motion tracking, the inertial motion trackers are placed on each body segment to be tracked. The inertial motion trackers give absolute orientation estimates, which are also used to calculate the 3D linear accelerations in world coordinates, which in turn give translation estimates of the body segments.
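A minimal sketch of the drift-correction idea behind such sensor fusion, reduced to a single tilt angle and a simple complementary filter; the 100 Hz rate and the 0.98 blend factor are illustrative assumptions, not values from Xsens or any other vendor:

```python
import math

DT = 0.01      # update interval in seconds (assumed 100 Hz)
ALPHA = 0.98   # weight given to the integrated gyro; 1 - ALPHA corrects drift from the accelerometer

def fuse_tilt(angle, gyro_rate, ax, az):
    """One complementary-filter step for a single tilt angle, in radians.
    gyro_rate is the angular velocity (rad/s); ax and az are accelerometer axes (m/s^2)."""
    gyro_angle = angle + gyro_rate * DT   # integrating the rate gyro alone would drift
    accel_angle = math.atan2(ax, az)      # the gravity direction gives an absolute but noisy tilt
    return ALPHA * gyro_angle + (1.0 - ALPHA) * accel_angle

# A gyro with a constant 0.01 rad/s bias while the sensor is actually still:
angle = 0.0
for _ in range(1000):                     # 10 seconds of samples
    angle = fuse_tilt(angle, gyro_rate=0.01, ax=0.0, az=9.81)
print(angle)  # settles near 0.005 rad instead of drifting past 0.1 rad and growing further
```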
Figure 5.6

Since the rotation from sensor to body segment and its position with respect to the axes of rotation are initially unknown, a calibration procedure is necessary. An advanced articulated body model constrains the movements of the segments with respect to each other and eliminates any integration drift.

5.3 INERTIAL SENSORS
A single-axis accelerometer consists of a mass suspended by a spring in a housing. Springs (within their linear region) are governed by a physical principle known as Hooke's law. Hooke's law states that a spring will exhibit a restoring force which is proportional to the amount it has been expanded or compressed: specifically F = kx, where k is the constant of proportionality between displacement x and force F. The other important physical principle is Newton's second law of motion, which states that a mass undergoing an acceleration exhibits a force of magnitude F = ma. This force causes the mass to either compress or expand the spring, under the constraint that F = ma = kx. Hence an acceleration a will cause the mass to be displaced by x = ma/k or, if we observe a displacement of x, we know the mass has undergone an acceleration of a = kx/m. In this way, the problem of measuring acceleration has been turned into one of measuring the displacement of a mass connected to a spring. In order to measure multiple axes of acceleration, this system needs to be duplicated along each of the required axes. Gyroscopes are instruments that are used to measure angular motion. There are two broad categories: (1) mechanical gyroscopes and (2) optical gyroscopes. Within both of these categories, there are many different types available.
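A one-line numerical check of the a = kx/m relation, using illustrative values for the spring constant, proof mass and displacement:

```python
# Spring-mass accelerometer readout: a = k * x / m (all values are illustrative)
k = 50.0      # spring constant, N/m
m = 0.001     # proof mass, kg
x = 0.0002    # observed displacement, m (0.2 mm)
a = k * x / m
print(a)      # 10.0 m/s^2, roughly 1 g
```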
Figure 5.7

The first mechanical gyroscope was built by Foucault in 1852, as a gimbaled wheel that stayed fixed in space due to angular momentum while the platform rotated around it. Mechanical gyroscopes operate on the basis of conservation of angular momentum, by sensing the change in direction of an angular momentum. According to Newton's second law, the angular momentum of a body will remain unchanged unless it is acted upon by a torque. The fundamental equation describing the behavior of the gyroscope is

    τ = dL/dt = Iα

where the vectors τ and L are the torque on the gyroscope and its angular momentum, respectively, the scalar I is its moment of inertia, the vector ω is its angular velocity (L = Iω), and the vector α is its angular acceleration.

Figure 5.8

Gimbaled and laser gyroscopes are not suitable for human motion analysis due to their large size and high cost. Over the last few years, micro-electromechanical system (MEMS) inertial sensors have become more widely available. Vibrating mass gyroscopes are small, inexpensive and have low power requirements, making them ideal for human movement analysis. A vibrating element (vibrating resonator), when rotated, is subjected to the Coriolis effect, which causes a secondary
vibration orthogonal to the original vibrating direction. By sensing the secondary vibration, the rate of turn can be detected. The Coriolis force is given by

    F_c = -2m(ω × v)

where m is the mass, v is the momentary velocity of the mass relative to the moving object to which it is attached, and ω is the angular velocity of that object. Various micromachined geometries are available, many of which use the piezoelectric effect to excite and detect the vibration.

5.4 Sensor fusion
The traditional application area of inertial sensors is navigation, as well as guidance and stabilization of military systems. Position, velocity and attitude are obtained using accurate but large gyroscopes and accelerometers, in combination with other measurement devices such as GPS, radar or a barometric altimeter. Generally, signals from these devices are fused using a Kalman filter to obtain the quantities of interest (see the figure below). The Kalman filter is useful for combining data from several different indirect and noisy measurements. It weights the sources of information appropriately, using knowledge about the signal characteristics based on their models, to make the best use of all the data from each of the sensors. There is no perfect sensor; each type has its strong and weak points. The idea behind sensor fusion is that the characteristics of one type of sensor are used to overcome the limitations of another. For example, magnetic sensors are used as a reference to prevent gyroscope integration drift about the vertical axis in the orientation estimates. However, iron and other magnetic materials will disturb the local magnetic field and, as a consequence, the orientation estimate. The spatial and temporal features of magnetic disturbances will be different from those related to gyroscope drift errors.
Figure 5.8 Complementary Kalman filter structure for position and orientation estimates combining inertial and aiding measurements. The signals from the IMU (a − g and ω) provide the input for the INS. By double integration of the accelerations, the position is estimated at a high frequency. At a feasible lower frequency, the aiding system provides position estimates. The difference between the inertial and aiding estimates is delivered to the Kalman filter. Based on the system model, the Kalman filter estimates the propagation of the errors. The outputs of the filter are fed back to correct the position, velocity, acceleration and orientation estimates.

Using this a priori knowledge, the effects of both drift and disturbances can be minimized. The inertial sensors of an inertial navigation system (INS) can be mounted on vehicles in such a way that they stay leveled and pointed in a fixed direction. Such a system relies on a set of gimbals and sensors attached on three axes to monitor the angles at all times. Another type of INS is the strapdown system, which eliminates the use of gimbals and is therefore suitable for human motion analysis. In this case, the gyros and accelerometers are mounted directly on the structure of the vehicle or strapped on the body segment. The measurements are made with reference to the local axes of roll, pitch and heading (or yaw). The clinical reference system provides anatomically meaningful definitions of the main segmental movements (e.g. flexion-extension, abduction-adduction or supination-pronation).

Figure 5.9   Figure 5.10
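To make the structure described in the Figure 5.8 caption concrete, here is a much-simplified one-dimensional sketch: the accelerometer is double-integrated at a high rate, and a low-rate aiding position measurement is fed back into the position and velocity estimates. A fixed gain stands in for the Kalman gain, and all rates, gains and signal values are illustrative assumptions:

```python
DT_INS = 0.01        # inertial integration at 100 Hz (assumed)
AIDING_EVERY = 100   # an aiding position arrives once per second
GAIN = 0.5           # fixed correction gain standing in for the Kalman gain

def fuse_position(accels, aiding_positions):
    """Double-integrate acceleration at high rate; feed low-rate aiding positions back in."""
    vel, pos = 0.0, 0.0
    for i, a in enumerate(accels):
        vel += a * DT_INS
        pos += vel * DT_INS                              # high-frequency INS estimate
        if i % AIDING_EVERY == 0:                        # low-frequency aiding update
            err = aiding_positions[i // AIDING_EVERY] - pos
            pos += GAIN * err                            # correct the position estimate
            vel += GAIN * err / (AIDING_EVERY * DT_INS)  # and bleed the error into velocity
    return pos

# A 0.05 m/s^2 accelerometer bias while truly stationary; the aiding system keeps reporting 0.
# Pure double integration would drift by about 2.5 m over these 10 s; the corrected estimate
# stays bounded at the decimetre level.
print(fuse_position([0.05] * 1000, [0.0] * 10))
```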
6. Advantages
Motion capture offers several advantages over traditional computer animation of a 3D model:
• More rapid, even real-time, results can be obtained. In entertainment applications this can reduce the costs of keyframe-based animation. For example: Hand Over.
• The amount of work does not vary with the complexity or length of the performance to the same degree as with traditional techniques. This allows many tests to be done with different styles or deliveries.
• Complex movement and realistic physical interactions, such as secondary motions, weight and exchange of forces, can be easily recreated in a physically accurate manner.
• The amount of animation data that can be produced within a given time is extremely large compared to traditional animation techniques. This contributes both to cost effectiveness and to meeting production deadlines.
• There is potential for free software and third-party solutions to reduce costs.

7. Disadvantages
• Specific hardware and special programs are required to obtain and process the data.
• The cost of the software, equipment and personnel required can be prohibitive for small productions.
• The capture system may have specific requirements for the space it is operated in, depending on camera field of view or magnetic distortion.
• When problems occur, it is often easier to reshoot the scene than to try to manipulate the data. Only a few systems allow real-time viewing of the data to decide whether the take needs to be redone.
• The initial results are limited to what can be performed within the capture volume without extra editing of the data.
• Movement that does not follow the laws of physics generally cannot be captured.
• Traditional animation techniques, such as added emphasis on anticipation and follow-through, secondary motion or manipulating the shape of the character, as with squash-and-stretch animation, must be added later.
• If the computer model has different proportions from the capture subject, artifacts may occur. For example, if a cartoon character has large, oversized hands, these may intersect the character's body if the human performer is not careful with their physical motion.

8. Applications
• Video games often use motion capture to animate athletes, martial artists and other in-game characters. This has been done since the Atari Jaguar CD-based game Highlander: The Last of the MacLeods, released in 1995.
• Movies use motion capture for CG effects, in some cases replacing traditional cel animation, and for completely computer-generated creatures such as Jar Jar Binks, Gollum, The Mummy, King Kong and the Na'vi from the film Avatar.
• Sinbad: Beyond the Veil of Mists was the first movie made primarily with motion capture, although many character animators also worked on the film.
• In producing entire feature films with computer animation, the industry is currently split between studios that use motion capture and studios that do not. Out of the three nominees for the 2006 Academy Award for Best Animated Feature, two (Monster House and the winner Happy Feet) used motion capture, and only Disney·Pixar's Cars was animated without it. In the ending credits of Pixar's film Ratatouille, a stamp appears labelling the film as "100% Pure Animation -- No Motion Capture!"
• Motion capture has begun to be used extensively to produce films which attempt to simulate or approximate the look of live-action cinema, with nearly photorealistic digital
character models. The Polar Express used motion capture to allow Tom Hanks to perform as several distinct digital characters (for which he also provided the voices). The 2007 adaptation of the saga Beowulf animated digital characters whose appearances were based in part on the actors who provided their motions and voices. James Cameron's Avatar used this technique to create the Na'vi that inhabit Pandora. The Walt Disney Company has announced that it will distribute Robert Zemeckis's A Christmas Carol and Tim Burton's Alice in Wonderland using this technique. Disney has also acquired Zemeckis's ImageMovers Digital, which produces motion capture films.
• Television series produced entirely with motion capture animation include Laflaque in Canada, Sprookjesboom and Cafe de Wereld in the Netherlands, and Headcases in the UK.
• Virtual reality and augmented reality allow users to interact with digital content in real time. This can be useful for training simulations, visual perception tests or virtual walk-throughs in a 3D environment. Motion capture technology is frequently used in digital puppetry systems to drive computer-generated characters in real time.
• Gait analysis is the major application of motion capture in clinical medicine. Techniques allow clinicians to evaluate human motion across several biometric factors, often while streaming this information live into analytical software.
• During the filming of James Cameron's Avatar, all of the scenes involving this process were directed in real time using a screen which converted the actors in their motion capture suits into what they would look like in the movie, making it easier for Cameron to direct the film as it would be seen by the viewer. This method allowed Cameron to view the scenes from many more views and angles than would be possible with pre-rendered animation. He was so proud of his pioneering methods that he even invited Steven Spielberg and George Lucas on set to see him in action.

9. Conclusion
Although motion capture requires some technical means, it is quite possible to do it yourself at home at a reasonable cost and make your own short film. Motion capture is a major step forward in the field of cinema, because it allows the image to be reworked more simply: it is easier to modify a captured image than a classic filmed scene, although this is still too expensive.
But motion capture is also a major asset in medicine: for example, it can be used to measure the benefit of an operation by recording the movement of the patient before and after the operation (such as when fitting a prosthesis), or simply during an ordinary medical examination (perhaps in the future).

10. REFERENCES
• http://en.wikipedia.org/wiki/Motion_capture
• http://www.siggraph.org/education/materials/HyperGraph/animation/character_animation/motion_capture/history1.htm
• http://www.cgw.com/ME2/dirmod.asp?sid=&nm=&type=Publishing&mod=Publications%3A%3AArticle&mid=8F3A7027421841978F18BE895F87F791&tier=4&id=A8B4004315A84A5089255A2E366E2E78
• http://www.metamotion.com/motion-capture/motion-capture.htm
• http://accad.osu.edu/research/mocap/mocap_home.htm
• http://www.postmagazine.com/ME2/dirmod.asp?sid=&nm=&type=Publishing&mod=Publications::Article&mid=8F3A7027421841978F18BE895F87F791&tier=4&id=C715B81DD6674D62BD666D304D2E8D0B
• http://instruct1.cit.cornell.edu/courses/ee476/FinalProjects/s2005/Motion_Capture_KHY6_DCL34/Motion_Capture.htm
• http://www.youtube.com/watch?v=V0yT8mwg9nc
• http://web.mit.edu/comm-forum/papers/furniss.html
• http://www.mastudios.com/index.html