Video stabilization

  1. VIDEO STABILIZATION
     Presented by: Soheila Sheikhbahaei, 1392, Kharazmi University of Tehran
  2. OUTLINE
     • What is video stabilization?
     • Where is it useful?
     • How does it work?
     • What are the problems?
  3. DEFINITION
     • Most amateur videos are captured using hand-held cameras. They are often very shaky and difficult to watch.
     • Video stabilization techniques have been developed to smooth shaky camera motion in videos before viewing.
     • Stabilization is the process of estimating and compensating for the background image motion occurring due to the ego-motion of the camera.
  4. ELIMINATING JITTER
     • Although this jitter can be eliminated by anchoring the binoculars on a tripod, this is not always feasible.
  5. EXAMPLE
     Wang, Y.S., et al., Spatially and Temporally Optimized Video Stabilization. IEEE Transactions on Visualization and Computer Graphics, 2013.
     http://people.cs.nctu.edu.tw/~yushuen/VideoStabilization/
  6. PARALLAX
     • Parallax is a displacement or difference in the apparent position of an object viewed along two different lines of sight.
  7. BIOLOGICAL MOTIVATION: INSECT NAVIGATION
     • Insects have relatively small nervous systems with very few neurons compared to the human brain, yet they are still capable of complex tasks such as safe landing and obstacle avoidance.
     • Behavioral research with insects suggests that insects primarily use visual information.
     • Insects have immobile eyes with fixed focal length. Moreover, they do not possess stereoscopic vision. Insect eyes have inferior spatial acuity, but they sample the world at a significantly higher rate than human eyes do.
     • This study can serve as a pure motivational tool, indicating that complex tasks such as stabilization can be performed in real time with the desired accuracy. Second, it can lead us into the paradigm of "active vision" or "purposive vision".
     • In fact, several researchers have used such biologically inspired mechanisms for flight control and obstacle avoidance.
     Al Bovik. The Essential Guide to Video Processing.
  8. BIOLOGICAL MOTIVATION: INSECT NAVIGATION
     • Bees that fly through holes tend to fly through the center of these holes. Bees, like most other insects, cannot measure distances from surfaces by using stereoscopic vision.
     • Recent experiments have indicated that bees balance the image motion on the lateral portions of their two eyes as they fly through openings.
     • Bees were trained to fly in narrow tunnels with certain patterns on the side walls. It was shown that bees tended to fly at the center of the tunnel when the patterns on the side walls were stationary.
     • If one of the patterned side walls was moved in the direction of the bee's flight, thereby reducing the image motion experienced by the bee on that side, the bee moved closer to that side wall. Similarly, when one of the patterned side walls was moved in the direction opposite to the bee's flight, the bee moved away from the moving wall.
  9. BIOLOGICAL MOTIVATION: INSECT NAVIGATION
     • Collision avoidance is another task that is visually driven in most insects. When an insect approaches an obstacle, the obstacle's image expands on the insect's eyes. Insects are sensitive to this image expansion and turn away from the direction in which it occurs, thereby avoiding collisions with obstacles.
  10. DIFFERENT ALGORITHMS
     • Depending on the type of scenario and the type of motion involved, different algorithms are used to achieve stabilization:
     • Presence of a dominant plane in the scene
     • Derotation of the image sequence
     • Mosaic construction
     • Presence of moving objects
  11. CLASSIFICATION OF TECHNIQUES
     • Techniques are classified into two categories:
     • Feature-based methods: discrete features are extracted and matched between frames, and the trajectories of these features are fit to a global motion model.
     • Flow-based methods: the optical flow of the image sequence is an intermediate quantity that is used in determining the global motion.
  12. PHASES OF STABILIZATION
     • In video stabilization, we need to analyze the image motion and obtain models for the global motion in image sequences.
     • Generally, the stabilization process goes through two phases (see the sketch after this slide):
     • motion estimation
     • motion smoothing
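As a rough illustration of these two phases (a minimal sketch assuming OpenCV and NumPy, not the specific method from these slides): per-frame motion is estimated from tracked features, the cumulative camera trajectory is smoothed with a moving average, and the corrected motion is applied to each frame.

```python
import cv2
import numpy as np

def stabilize(frames, radius=15):
    """Two-phase stabilization: (1) estimate inter-frame motion, (2) smooth it."""
    # Phase 1: motion estimation (translation + rotation between consecutive frames)
    transforms = []  # (dx, dy, dangle) per frame pair
    prev_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=30)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        good_prev = pts[status.flatten() == 1]
        good_next = nxt[status.flatten() == 1]
        m, _ = cv2.estimateAffinePartial2D(good_prev, good_next)  # similarity model
        transforms.append((m[0, 2], m[1, 2], np.arctan2(m[1, 0], m[0, 0])))
        prev_gray = gray

    # Phase 2: motion smoothing (moving-average filter on the cumulative trajectory)
    transforms = np.array(transforms)
    trajectory = np.cumsum(transforms, axis=0)
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    padded = np.pad(trajectory, ((radius, radius), (0, 0)), mode='edge')
    smoothed = np.vstack([np.convolve(padded[:, i], kernel, mode='valid')
                          for i in range(3)]).T
    corrected = transforms + (smoothed - trajectory)

    # Apply the corrected motion to each frame
    h, w = frames[0].shape[:2]
    out = [frames[0]]
    for frame, (dx, dy, da) in zip(frames[1:], corrected):
        m = np.array([[np.cos(da), -np.sin(da), dx],
                      [np.sin(da),  np.cos(da), dy]])
        out.append(cv2.warpAffine(frame, m, (w, h)))
    return out
```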
  13. CAMERA MODEL
     • The imaging geometry of a perspective camera (a standard form is given below):
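A standard way to write the perspective (pinhole) projection, with intrinsic matrix K and camera pose (R, t), in homogeneous coordinates:

$$ \lambda \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = K\,[\,R \mid t\,]\begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}, \qquad K = \begin{pmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix} $$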
  14. EFFECT OF CAMERA MOTION
     • The effect of camera motion can be computed using projective geometry:
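Two standard cases of this computation, written here for reference: for pure camera rotation the induced image motion is $x' \simeq K R K^{-1} x$, and for a planar scene with normal $n$ at distance $d$ it becomes $x' \simeq K\,(R + t\,n^{\top}/d)\,K^{-1}\,x$.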
  15. EFFECT OF CAMERA MOTION
     • Other popular global deformations mapping the projection of a point between two frames are the similarity and affine transformations:
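In their usual 2D forms these are the similarity transform (4 degrees of freedom)

$$ \begin{pmatrix} x' \\ y' \end{pmatrix} = s \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} t_x \\ t_y \end{pmatrix} $$

and the affine transform (6 degrees of freedom)

$$ \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} t_x \\ t_y \end{pmatrix} $$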
  16. IMAGE FEATURES
     • The basic goal in feature-based motion estimation is to use features to find maps that relate the images taken from different viewpoints.
     • These maps are then used to estimate the image motion by computing the parameters of a motion model.
     • Consider the case of pure rotation: though various lengths, ratios, and angles formed on the images are all different, the cross ratio remains the same for four collinear points A, B, C, and D on an image.
     R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, Cambridge, UK, 2000.
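The standard cross ratio of four collinear points A, B, C, D, which projective maps preserve, is

$$ (A,B;C,D) = \frac{\overline{AC}\cdot\overline{BD}}{\overline{BC}\cdot\overline{AD}} $$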
  17. IMAGE FEATURES
     • This intuition leads to a map relating the two images.
     • Given four corresponding points in general position in the two images, we can map any point from one image to the other.
     • Now, any point F on ABE will map to a point F′ such that the cross ratio is preserved.
     • In this way, each point on one image can be mapped to the other image. Such a map is called a homography.
  18. IMAGE FEATURES
     • In the case of a planar scene: x1 is a point on the first image plane, xp is the corresponding point on the real plane, and x2 is the corresponding point on the second image plane.
     • Thus, the homography H = H1H2 maps points from one image plane to the other.
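In homogeneous coordinates this is the standard relation $x_2 \simeq H x_1$, where H is a 3×3 matrix defined only up to scale (8 degrees of freedom), so it can be estimated from four point correspondences in general position.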
  19. IMAGE FEATURES
     • On the other hand, when there are depth variations in the scene, such a homography does not exist between images formed by camera translation.
     • In the case of depth variations, we can use structure from motion (SFM) approaches to estimate the motion of the camera.
  20. FEATURE-BASED ALGORITHMS
     • A number of features are extracted in each image, and feature-matching algorithms are used to establish correspondences between the images.
     • The motion parameters are found by first identifying the set of feature matches (a sketch follows).
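A minimal sketch of this idea, assuming OpenCV: ORB features, brute-force matching, and a RANSAC-fitted homography as the global motion model (the slides do not specify which detector or model is used).

```python
import cv2
import numpy as np

def estimate_global_motion(img1, img2):
    """Feature-based motion estimation between two frames (ORB + RANSAC homography)."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Establish correspondences between the two images.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Fit the global motion model robustly with RANSAC.
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H
```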
  21. FEATURE TRAJECTORY SMOOTHING
     • Let the ith trajectory be Pi, where m and n are the start and the end frames of Pi, respectively.
     • The goal is to solve an optimization problem that minimizes the acceleration of Pi in each frame while constraining the offsets of neighboring trajectories to be consistent within the input video.
     Wang, Y.S., et al., Spatially and Temporally Optimized Video Stabilization. IEEE Transactions on Visualization and Computer Graphics, 2013.
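One schematic way to write the acceleration being minimized, with $p_i(t)$ the smoothed position of feature i in frame t (this is a sketch, not the paper's exact formulation):

$$ E_{\text{smooth}} = \sum_i \sum_{t=m+1}^{n-1} \bigl\| p_i(t-1) - 2\,p_i(t) + p_i(t+1) \bigr\|^2 $$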
  22. BEZIER CURVES
     • Bezier curves are used in computer graphics to produce curves that appear reasonably smooth at all scales (as opposed to polygonal lines, which do not scale nicely), in which the interpolating polynomials depend on certain control points.
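For reference, a degree-n Bezier curve with control points $P_0, \dots, P_n$ is

$$ B(t) = \sum_{i=0}^{n} \binom{n}{i} (1-t)^{\,n-i}\, t^{\,i}\, P_i, \qquad t \in [0,1]. $$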
  23. FEATURE TRAJECTORY SMOOTHING
     • Each smoothed trajectory is represented using a Bezier curve, which reduces the unknown variables from all feature positions to the curve control points. This reduced model also achieves strong stabilization because the smoothed feature positions are interpolated from the control points. The details of the technique follow in the next slides.
  24. DELAUNAY TRIANGULATION
     • In mathematics and computational geometry, a Delaunay triangulation for a set P of points in a plane is a triangulation DT(P) such that no point in P is inside the circumcircle of any triangle in DT(P).
     • In geometry, the circumscribed circle or circumcircle of a polygon is a circle that passes through all the vertices of the polygon.
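A small illustration of computing such a triangulation over feature positions, assuming SciPy is available (any Delaunay routine would do; the point coordinates below are made up):

```python
import numpy as np
from scipy.spatial import Delaunay

# Feature positions (x, y) in one frame; in practice these come from the tracker.
points = np.array([[0, 0], [100, 5], [55, 80], [30, 160], [140, 120], [90, 200]], float)

tri = Delaunay(points)
# tri.simplices lists the vertex indices of each triangle; by the Delaunay
# property, no input point lies inside the circumcircle of any listed triangle.
for a, b, c in tri.simplices:
    print("triangle:", points[a], points[b], points[c])
```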
  25. SPATIAL RIGIDITY PRESERVATION
     • Spatial rigidity is retained when stabilizing a video so that neighboring feature trajectories receive similar treatment.
     • Specifically, the neighbor relations between features in each frame are computed using the Delaunay triangulation, and each triangle is enforced to undergo a rigid transformation. That is, triangles are allowed to move and rotate, but their sizes and shapes should be retained.
     • This constraint works well in most videos.
  26. OBJECTIVE FUNCTION
     • We search for the control points of the Bezier curves that minimize the objective function (a schematic form is given below).
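Schematically, and only as a sketch (the exact terms and weights are defined in Wang et al., 2013), the search is over control points C of an energy of the form

$$ E(C) = E_{\text{smooth}}(C) + \lambda\, E_{\text{rigid}}(C), $$

where $E_{\text{smooth}}$ penalizes trajectory acceleration as above and $E_{\text{rigid}}$ penalizes non-rigid deformation of the Delaunay triangles.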
  27. RESULTS
  28. LIMITATIONS
     • Although the algorithm is robust to all challenging examples, the stabilization is not effective if there are no background features in some frames.
  29. MOSAICING
     • Mosaicing is the process of compositing or piecing together successive frames of the stabilized image sequence so as to virtually increase the field of view of the camera.
     • Mosaics are commonly defined only for scenes viewed by a pan/tilt camera, for which the images can be related by a projective transformation.
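A minimal compositing sketch, assuming each frame already has a homography mapping it into a common reference frame (for example from the feature-based estimation above); the canvas size and blending rule are arbitrary choices.

```python
import cv2
import numpy as np

def mosaic(frames, homographies, canvas_size=(2000, 1200)):
    """Composite frames into one mosaic; homographies[i] maps frame i into the
    reference (first-frame) coordinate system."""
    canvas = np.zeros((canvas_size[1], canvas_size[0], 3), np.uint8)
    for frame, H in zip(frames, homographies):
        warped = cv2.warpPerspective(frame, H, canvas_size)
        mask = warped.sum(axis=2) > 0   # pixels actually covered by this frame
        canvas[mask] = warped[mask]     # later frames overwrite earlier ones
    return canvas
```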
  30. REFERENCES
     • Al Bovik. The Essential Guide to Video Processing.
     • Wang, Y.S., et al., Spatially and Temporally Optimized Video Stabilization. IEEE Transactions on Visualization and Computer Graphics, 2013.
     • http://www.ics.uci.edu/~eppstein/gina/delaunay.html
     • http://en.wikipedia.org/wiki/Delaunay_triangulation
     • http://www.math.ubc.ca/~cass/gfx/bezier.html
     • http://en.wikipedia.org/wiki/B%C3%A9zier_curve
     • http://people.cs.nctu.edu.tw/~yushuen/VideoStabilization/