2D/Multi-view Segmentation and Tracking


Published on

My seminar at VISNET-II Summer School, June 15-19, 2009


Transcript

  • 1. 2D/Multi-view Segmentation and Tracking Prof. Dr. Touradj Ebrahimi Multimedia Signal Processing Group Ecole Polytechnique Fédérale de Lausanne (EPFL)
  • 2. Outline
    • Introduction
    • 2D segmentation
    • 2D/3D segmentation and tracking
    • VISNET-II multiview tracking
    • Unusual events detection based on 2D segmentation and tracking
    • Final words
  • 3. Introduction
    • 1D segmentation:
      • Shot detection by temporal segmentation
    • 2D segmentation:
      • Spatial segmentation
    • 3D segmentation/tracking:
      • Spatial-temporal segmentation
  • 4. Applications
    • Object-based video coding (MPEG-4)
    • Interactive Multimedia
      • Video editing
      • Hyper video
    • Video Surveillance
      • detect people in restricted areas
      • detect suspicious behavior
    • Video/Image Analysis
      • extract information from the video
      • autonomous driving cars
      • medical area: determine size of tumors
    • Man-Machine Interface
    • Content Indexing, Annotation, Search and Retrieval, …
  • 5. Segmentation
    • Definition:
      • Image segmentation refers to the partition of an image into multiple regions according to some criterion
    • Objective:
      • what is where?
    • The segmentation problem can be very difficult and might require the use of domain knowledge
  • 6. Regions and objects
    • Two basic concepts:
    Regions: homogeneous according to given criteria (color, motion, texture, ...); automatically extracted and tracked
    Objects: semantically meaningful; their selection depends on the application (application dependent)
  • 7. Mathematical formulation
    • Segmentation subdivides an image R into N disjoint regions: R = R_1 ∪ R_2 ∪ … ∪ R_N, with R_i ∩ R_j = ∅ for i ≠ j
    • To each region, we assign a label represented by a gray level or a color
  • 8. Segmentation techniques
    • Techniques according to the dominant features they use:
      • Global knowledge (e.g. thresholding)
      • Edge-based segmentation
      • Region-based segmentation
  • 9. Histogram shape analysis
    • Objects have approximately the same gray/color value, which differs from the background gray value:
      • The resulting histogram is bi-modal
      • Threshold: gray value that has minimum histogram value between the two maxima
    [Figure: observed bimodal gray-level histogram with Background and Object modes; the threshold T lies at the histogram minimum between the two maxima]
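As a concrete illustration of this thresholding rule, here is a minimal sketch (not from the slides) that picks the gray value with the minimum histogram count between the two dominant peaks; the smoothing width is an arbitrary assumption and a bimodal histogram is assumed:

```python
import numpy as np

def bimodal_threshold(gray):
    """Threshold T: the gray value with the minimum histogram count
    between the two dominant peaks (background and object modes)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    # Light smoothing so spurious local maxima do not dominate (width is arbitrary).
    smooth = np.convolve(hist, np.ones(5) / 5.0, mode="same")
    # Local maxima of the smoothed histogram (assumes at least two exist).
    peaks = [i for i in range(1, 255)
             if smooth[i] >= smooth[i - 1] and smooth[i] >= smooth[i + 1]]
    # Take the two highest peaks, in increasing gray-value order.
    p1, p2 = sorted(sorted(peaks, key=lambda i: smooth[i])[-2:])
    # T: minimum of the histogram between the two maxima.
    return p1 + int(np.argmin(smooth[p1:p2 + 1]))
```

Usage would be `T = bimodal_threshold(img)` followed by `mask = img > T` to separate object pixels from background.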
  • 10. Edge-based segmentation techniques
    • Edge-based segmentation techniques differ in strategies to construct borders and in the amount of prior information:
      • Edge relaxation
      • Border detection as graph searching
      • Border detection as dynamic programming
      • Hough transform
      • (Geodesic) Snakes
  • 11. Region-based segmentation
    • Homogeneity is used as the main criterion in region-based segmentation
    • Criteria for homogeneity can be based on: gray-level, color, texture, shape, etc.
    • Constructed regions must further satisfy the following conditions:
      • H(R_i) = TRUE: each region should be homogeneous
      • H(R_i ∪ R_j) = FALSE for adjacent regions R_i and R_j: the homogeneity criterion should no longer hold after merging a region with any adjacent region
  • 12. Region-based segmentation
    • Several approaches
      • Region merging
      • Region splitting
      • Splitting and merging
      • Region growing
      • Watershed
      • ...
  • 13. Splitting and merging
    • At any step, apply the following procedure:
      • Split into four disjoint quadrants any region R_i for which H(R_i) = FALSE
      • Merge any adjacent regions R_j and R_k for which P(R_j ∪ R_k) = TRUE
      • Stop when no further merging or splitting is possible
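A minimal sketch of the quadtree splitting step of this procedure, assuming a variance-based homogeneity predicate H (the predicate, its threshold, and min_size are illustrative assumptions; the merging pass over adjacent leaves is omitted):

```python
import numpy as np

def H(region, max_std=10.0):
    """Hypothetical homogeneity predicate: TRUE when the gray-level
    standard deviation of the region is small."""
    return region.std() <= max_std

def split(img, x, y, w, h, min_size=8):
    """Recursively split any region for which H(R_i) = FALSE into
    four disjoint quadrants; return the leaf regions as (x, y, w, h)."""
    region = img[y:y + h, x:x + w]
    if H(region) or w <= min_size or h <= min_size:
        return [(x, y, w, h)]
    hw, hh = w // 2, h // 2
    quads = [(x, y, hw, hh), (x + hw, y, w - hw, hh),
             (x, y + hh, hw, h - hh), (x + hw, y + hh, w - hw, h - hh)]
    leaves = []
    for qx, qy, qw, qh in quads:
        leaves += split(img, qx, qy, qw, qh, min_size)
    return leaves
```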
  • 14. The multiple feature approach
    • An example of a more advanced region-based segmentation algorithm
    • The approach takes into account several features: spatial (color, texture, position) and temporal (motion)
    • Algorithm that performs the clustering of the pixels into homogeneous regions:
      • Fuzzy C-Means
  • 15. The multiple feature approach
    • Use of multiple features:
      • a vector of features for each pixel (“ feature vector ”)
      • exploit coherence and redundancies among features at the pixel level
    [Diagram: each pixel of the image is described by a feature vector combining position (x, y), color (Y, U, V, R, G, B, ...), motion (v_x, v_y), and texture]
  • 16. The multiple feature approach
    • What do regions look like in the feature space?
    [Figure: in the feature space spanned by motion, color, and texture, regions R_1, R_2, R_3 appear as clusters around centroids μ_1, μ_2, μ_3]
  • 17. Fuzzy C-Means
    • The Fuzzy C-Means algorithm:
      • Minimize the objective function: J(U, μ) = Σ_{i=1..C} Σ_{k=1..N} (u_ik)^m d²(x_k, μ_i)
        • U = [u_ik]: membership matrix
        • u_ik: membership of pixel x_k to cluster i
        • μ_i: centroid of cluster i
        • d(x_k, μ_i): distance between pixel and centroid
        • m > 1: fuzzy exponent
  • 18. Fuzzy C-Means algorithm:
    • Initialize the membership matrix U
    • Update centroids: minimize the objective function J(U, μ) with U held constant
    • Update memberships: minimize J(U, μ) with the centroids μ held constant
    • Stability check: if the memberships still change, repeat from the centroid update; otherwise stop
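The loop above can be written compactly with NumPy; this sketch (the fuzzy exponent m = 2, iteration cap, and tolerance are assumptions) alternates the two minimization steps until the memberships stabilize:

```python
import numpy as np

def fuzzy_c_means(X, C, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Fuzzy C-Means clustering of N feature vectors X (N x D) into C
    clusters. Returns the membership matrix U (C x N) and centroids (C x D)."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    # Random initial memberships, normalized so each pixel's memberships sum to 1.
    U = rng.random((C, N))
    U /= U.sum(axis=0)
    for _ in range(max_iter):
        Um = U ** m
        # Update centroids with the memberships held constant.
        centroids = (Um @ X) / Um.sum(axis=1, keepdims=True)
        # Squared distances between every pixel and every centroid.
        d2 = ((X[None, :, :] - centroids[:, None, :]) ** 2).sum(axis=2) + 1e-12
        # Update memberships with the centroids held constant:
        # u_ik = d_ik^(-2/(m-1)) / sum_j d_jk^(-2/(m-1))
        U_new = d2 ** (-1.0 / (m - 1.0))
        U_new /= U_new.sum(axis=0)
        if np.abs(U_new - U).max() < tol:  # stability check
            U = U_new
            break
        U = U_new
    return U, centroids
```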
  • 19. Tracking
    • Semantic level
      • Identify objects from background (temporal discontinuities)
      • Provide a mask defining the areas containing moving objects
      • Use domain knowledge (face, persons, …)
    • Region level
      • Extract spatial-temporal homogeneous regions
  • 20. Multilevel Region-object Tracking Procedure
    • Object partition validation
      • Initializing the tracking process
      • Decomposing each object into non-overlapping regions
    • Data association
      • Validating the tracking through region descriptor correspondence
  • 21. 2D Tracking
      • Object and regions extraction and tracking
  • 22. Example of 2D segmentation and tracking
    • Based on A. Cavallaro, O. Steiger, and T. Ebrahimi, “Tracking Video Objects in Cluttered Background”, IEEE Trans. on Circuits and Systems for Video Technology, 2005
      • Foreground object extraction
      • Object Partitioning
      • Extraction of Region Descriptors
      • Region Tracking based on Descriptors
      • Object Tracking through a top-down and a bottom-up interaction between region and object levels
  • 23. 2D segmentation and tracking
  • 24. Typical results of Multilevel Region-object tracking
  • 25. Typical results of Multilevel Region-object tracking
  • 26. Multi-view Tracking
    • Geometry-Based methods
      • Object correspondence in different views based on Homography transformation
    • Color-Based methods
      • Object correspondence in different views based on matching the color of different regions
    • Hybrid methods
      • Mix information about the geometry and the visual appearance
  • 27. Overview of a multi-view tracking system developed in VISNET-II
  • 28. Consistent Object Labeling
      • Assign the same label to objects through time and across camera views
  • 29. Object Consistency Verification
      • Stability of objects through time
    [Diagram: object O_i and its regions R_i,j are tracked from frame n to frame n+1]
  • 30. Objects Correspondence
    • Assumptions
      • Cameras are calibrated
      • Moving objects are constrained to move along a dominant ground plane
    • Given at least four corresponding points between two views, the Homography transform can be estimated
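A minimal direct linear transform (DLT) sketch of this estimation step (point normalization, which improves numerical conditioning in practice, is omitted):

```python
import numpy as np

def estimate_homography(pts1, pts2):
    """Estimate the 3x3 homography H mapping pts1 to pts2
    (each an N x 2 array of image points, N >= 4)."""
    A = []
    for (x, y), (u, v) in zip(pts1, pts2):
        # Two linear equations per point correspondence.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # h is the right singular vector of A with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] = 1
```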
  • 31. Objects Correspondence: [Figure: the Homography transform maps objects from View 1 to View 2]
  • 32. Transfer Error
    • The transfer error (TE) measures the distance between corresponding objects and their expected projections under the Homography transform
      • If TE < T, the pair x and x' is considered a potential match
      • Create a list of potential matches
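The slides do not spell the formula out; a common choice, assumed here, is the symmetric transfer error, where d is the Euclidean distance between inhomogeneous image points:

```latex
\mathrm{TE}(\mathbf{x}, \mathbf{x}') =
  d(\mathbf{x}',\, H\mathbf{x})^{2} + d(\mathbf{x},\, H^{-1}\mathbf{x}')^{2}
```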
  • 33. Correspondence Verification
    • Find object correspondences; handle splitting and occlusion across views
      • Region descriptors: gravity center, histogram, texture
      • One-to-one correspondence (based on transfer error)
      • Each object receives the same label
      • One-to-many correspondence (based on transfer error)
      • Homography transform of each region from one view to another is computed
      • The distance between regions of two views is computed
      • Minimum mean square error is applied to find the best match of regions between two views
  • 34. Results
  • 35. Results
  • 36.
      • Unusual events
      • A small group of events that deviate from normal behavior
      • Rare: compared to usual events
      • Unpredictable: not considered in advance
      • Large interest in automatic, smart video-based surveillance systems that do not require human intervention
      • Focus: trajectory-based events for which the velocity ratio between normal and unusual events is similar to the ratio in the training sequence
    Motivation for Unusual Events Detection
  • 37.
      • Examples
      • Vehicle driving on the wrong side of a road
      • Person running in an area where one expects people to walk
      • Careless driving
      • Applications – video surveillance of:
      • Parking lots
      • Metro or bus stations
      • Banks and airport lobbies
      • Shopping malls
    Motivation for Unusual Events Detection
  • 38.
      • Three specific problems
      • Modeling trajectories with reduced dimensionality
        • PCA, ICA, HMM, …
      • Distance measure between trajectories (see the sketch after this slide)
        • Euclidean, Hausdorff distance, Longest Common Subsequence, ...
      • Trajectory clustering
        • graph cuts, k-medoids, spectral clustering, mean-shift clustering, ...
      • Different features for unusual event detection
      • Trajectory-based scene analysis (coordinates of the object)
      • Frame-based features (number of objects, size, color histograms, …)
    State-of-the-art
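Of the distance measures listed above, the Hausdorff distance is the easiest to sketch; this version (an illustration, not necessarily the system's actual choice) treats each trajectory as a set of 2D points:

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between two trajectories A and B,
    each an (N x 2) array of (x, y) points."""
    # Pairwise Euclidean distances between all points of A and B.
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    # Maximum of the two directed worst-case nearest-neighbour distances.
    return max(D.min(axis=1).max(), D.min(axis=0).max())
```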
  • 39.
      • The testing phase can be performed on sequences from the same scene as the training phase, or from different scenes
    System Overview
  • 40.
      • Trajectory representation
      • Pre-processing techniques
      • Hole filling: Bresenham’s line-drawing algorithm
      • Smoothing: Savitzky-Golay filter (4th-order polynomial fit, 21-point window; see the sketch after this slide)
      • Scaling to achieve velocity and acceleration invariance
    Technical Approach
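SciPy's savgol_filter implements exactly this kind of smoothing; the sketch below applies the parameters from the slide to a synthetic trajectory (the synthetic data and noise level are assumptions made for illustration):

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic noisy trajectory for illustration: (N x 2) array of (x, y)
# object positions, one row per frame.
t = np.linspace(0.0, 1.0, 200)
traj = np.stack([100 * t, 50 * t ** 2], axis=1) + np.random.normal(0, 1, (200, 2))

# Smoothing with the parameters from the slide: 4th-order polynomial
# fit over a 21-point window, applied to x and y independently.
smooth = savgol_filter(traj, window_length=21, polyorder=4, axis=0)

# The same filter also yields derivatives per frame (used on the next slide).
velocity = savgol_filter(traj, 21, 4, deriv=1, axis=0)
accel = savgol_filter(traj, 21, 4, deriv=2, axis=0)
```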
  • 41.
      • High-level robust features
      • Velocity
      • Acceleration
      • Differentiation performed by Savitzky-Golay filter
      • Re-sampling the smoothed trajectory at 128 spatially equidistant points (see the sketch after this slide)
      • Feature vector extraction
    Technical Approach (cont.)
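The re-sampling step can be done by linear interpolation along the cumulative arc length; a minimal sketch, assuming the trajectory has nonzero total length and strictly increasing arc length (no repeated points):

```python
import numpy as np

def resample_equidistant(traj, n=128):
    """Re-sample a smoothed (N x 2) trajectory at n spatially
    equidistant points along its arc length."""
    seg = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])   # cumulative arc length
    target = np.linspace(0.0, s[-1], n)           # equidistant arc positions
    x = np.interp(target, s, traj[:, 0])
    y = np.interp(target, s, traj[:, 1])
    return np.stack([x, y], axis=1)
```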
  • 42.
      • Training - Support Vector Machine (SVM) classifier
      • Input: feature vectors
      • Class labels: +1 = normal event, −1 = unusual event
      • Scaling the data
      • Linear kernel
      • Cross validation is used to identify good parameters of the hyperplane
      • Output: SVM model represented by support vectors
    Technical Approach (cont.)
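A sketch of this training step with scikit-learn; the StandardScaler, the C grid, the 5-fold split, and the synthetic stand-in data are assumptions, since the slides do not fix the scaling scheme or the cross-validated parameters:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV

# Synthetic stand-ins for the real feature vectors (one row per trajectory);
# the real X would hold the velocity/acceleration features from slide 41.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 10))
y_train = np.array([1] * 32 + [-1] * 8)   # +1 = normal, -1 = unusual
X_test = rng.normal(size=(8, 10))

# Scaling step (standardization assumed; the slide's exact scheme is not given).
scaler = StandardScaler().fit(X_train)

# Linear kernel; cross-validation to pick good hyperplane parameters (penalty C).
grid = GridSearchCV(SVC(kernel="linear", probability=True),
                    {"C": [0.01, 0.1, 1, 10, 100]}, cv=5)
grid.fit(scaler.transform(X_train), y_train)
model = grid.best_estimator_   # SVM model, represented by its support vectors

# Testing ("cross scaling"): scale test features with the training parameters,
# then obtain per-class probability estimates (unusual vs. normal).
proba = model.predict_proba(scaler.transform(X_test))
```

With probability=True, the same model also provides the per-class probability estimates used in the testing phase described on the next slide.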
  • 43.
      • Testing
      • Trajectory extraction and pre-processing
      • Feature extraction
      • Cross scaling: test features are scaled with the parameters estimated on the training set
      • Off-line classification
      • Support Vector Machine model
      • Probability estimates for the “unusual” and normal classes
    Technical Approach (cont.)
  • 44.
      • Standard video sequences from PETS dataset
      • S1: PETS2001, 26 trajectories, 768×576, 25Hz
      • S2: PETS2001, 24 trajectories, 768×576, 25Hz
      • S3: PETS2006, 41 trajectories, 720×576, 25Hz
      • S4: PETS2006, 49 trajectories, 720×576, 25Hz
    Experiments and Results
      [Sample frames from sequences S1-S4]
  • 45.
      • Main goal: show that it is possible to train the system with one or more sequences and use the resulting model for testing with other sequences (different scenes and scenarios)
    Experiments and Results (cont.)
  • 46.
      • Four test cases
    Experiments and Results (cont.)

    Testing sequence | Video duration (frames) | Unusual trajectories (total trajectories) | Avg. unusual trajectory length in frames (avg. trajectory length) | Training sequence | Unusual events detection rate | False alarms
    S1 | 4 min 25 sec (6642) | 5 (26) | 135 (477) | S1 (2-fold cross-validation) | 2/2 | none
    S2 | 3 min 49 sec (5752) | 2 (24) | 218 (450) | S1 | 2/2 | none
    S3 | 1 min 50 sec (2551) | 8 (41) | 112 (222) | S1 | 8/8 | 2
    S4 | 1 min 42 sec (2556) | 8 (49) | 80 (164) | S2 | 7/8 | 4
  • 47.
      • When objects are far away from the camera, it is not possible to accurately determine their velocity, since we do not use a projection onto the ground plane
      • In order to recognize, detect and analyze the behavior of other objects (e.g., a metro train), it is necessary to investigate the use of other features, such as:
      • size,
      • dominant color,
      • texture of the object, etc.
    Challenges
  • 48.
    • A complete 2D and multi-view segmentation and tracking system developed for generic applications within the VISNET-II NoE, resulting in very good performance
    • An unusual event detection module added to the above for video surveillance applications, with competitive results
    Conclusions
  • 49. Thanks for your attention! Questions, discussions, … Acknowledgements go to my past and present PhD students who have contributed and continue to contribute to this work: Andrea Cavallaro, Olivier Steiger, Emrullah Durucan, Yousri Abdeljaoued, Ivan Ivanov, as well as Gelareh Mohammadi (research assistant in 2008)
