Motion and Feature Based Person Tracking in Surveillance Videos

Abstract -- This work describes a method for accurately tracking persons in an indoor surveillance video stream obtained from a static camera with difficult scene properties, including illumination changes, and addresses the major problem of occlusion. First, moving objects are precisely extracted by detecting their motion for further processing. Scene illumination changes are averaged out during the background subtraction process to obtain accurate moving objects. In case of object occlusion, we use color feature information to accurately distinguish between objects. The method is able to identify moving persons, track them, and provide a unique tag for each tracked person. The effectiveness of the proposed method is demonstrated with experiments in an indoor environment.

Keywords -- video surveillance, motion detection, object tracking

1. INTRODUCTION

Moving object detection and tracking are widely used low-level tasks in many computer vision applications such as surveillance, monitoring, robotics, gesture recognition, and object recognition. Many approaches have been proposed for moving object detection and tracking in videos, mainly dedicated to traffic monitoring and visual surveillance.

Although the exact requirements vary between surveillance systems, some issues are common to all. Usually, an operator is interested only in certain objects in the scene. For instance, in surveillance of a public area, one may be interested only in monitoring the people within it rather than the entire scene.

In general, motion detection algorithms are classified broadly into two main categories: feature based and optical flow based. Our approach is feature based. Detection of moving objects in video streams is the first stage in any automated video surveillance system. Aside from the intrinsic usefulness of segmenting video streams into moving and background components, detecting moving blobs provides a focus of attention for recognition, classification, and activity analysis, making these later processes more efficient since only "foreground" pixels need be considered.

Tracking aims to describe the trajectories of moving objects over time. The main problem to solve in tracking is to find correspondences of the same physical objects in different frames. Some of the relevant work in the field of motion detection and tracking is mentioned in the following section. This paper is organized as follows. Section 2 describes the related methods available. Section 3 briefly describes the proposed methodology. Section 4 deals with the experimental results and problems. Section 5 includes the conclusion and future enhancements.

2. RELEVANT WORK

We survey the techniques and methods relevant to object tracking, specifically approaches that perform feature based tracking and handle occlusions. For accurate tracking, motion must be detected using suitable methods, but these are affected by a number of practical problems such as shadows and lighting changes over time.

Many researchers have contributed to motion based object detection and tracking in both indoor and outdoor scenes and have provided solutions to the above mentioned problems.

R. Cucchiara et al. [1] proposed the Sakbot system, a robust and efficient detection technique based on statistical and knowledge-based methods that uses HSV color information for shadow suppression. This method is capable of dealing with changing luminance conditions. The mixture of Gaussians [2] is a popular and promising technique to estimate illumination changes and small movements in the background.

The tracking process consists of establishing correspondence between consecutive frames using pixels, points, lines, or blobs. In an early generation of work, C. Wren et al. [3] proposed the Pfinder method, which tracks a single entire human body in the scene without occlusion. This method modeled pixel color disparity using a multivariate Gaussian.

In [4], color segmentation and a non-parametric approach are used for detecting contours of moving objects. Tang Sze Ling et al. [5] proposed a method that uses color information to differentiate between objects and handles occlusion. S. J. McKenna et al. [6] then proposed a method to track people through mutual occlusions as they form groups and separate from one another, using color information.
In [7], I. Haritaoglu et al. employ a histogram based approach to locate human body parts such as the head, hands, feet, and torso, and then use the head information to find the number of people.

A. J. Lipton et al. [8] use shape and color information to detect and track multiple people and vehicles in a crowded scene and to monitor activities over a large area and extended periods of time. To survive occlusion conditions, one should take advantage of multiple cues, such as color, motion, and edges, since none of these features alone can provide universal results across different environments. The color histogram is robust against partial occlusion but sensitive to illumination changes in the scene.

In [9], color cues are combined with motion cues to provide a better result. Color and shape cues are also used in [10], where shape is described using a parameterized rectangle or ellipse. In [11], color, shape, and edge cues are combined under a particle filter framework to provide robust tracking results; this work also involves an adaptation scheme to select the most effective cues under different conditions.

3. PROPOSED METHODOLOGY

Our algorithm aims to assign a consistent identifier to each object appearing in the scene as individuals merge into or split from a group, and it combines several methods to obtain the lowest possibility of false tracking and tagging. When tracking the objects of interest (humans), shadows degrade tracking performance and lead to false tagging. To avoid this problem, we apply a mean filter to remove noise, which blurs the image sequence. Since we are using color information for tracking, this blurring causes no loss of data. The structural design of our proposed method is shown in Fig. 1.

Fig. 1 System architecture

1. Motion detection

The most basic form of motion detection is to subtract a known background image containing no objects from the image under test. There are several approaches to background subtraction, including averaging background frames over time and statistical modeling of each pixel. Preprocessing based on mean filtering is performed on the input video (i.e., the image sequence) to equalize illumination changes and also to suppress the presence of shadows.

A. Background subtraction

Preprocessing is done on the video frames to reduce the presence of noise. We apply a mean filter, which blurs the image frames and in turn helps in shadow removal. After preprocessing, motion detection is performed.

Background subtraction is the most common method of motion detection. It uses the difference between the current image and the background image to detect the motion region. Its calculation is simple and easy to implement. Background subtraction delineates the foreground from the background in the images:

    Dk(x,y) = | Fk(x,y) - Bk-1(x,y) | > T        (1)

where Dk(x,y) is the resultant difference, Fk(x,y) is the current frame, Bk-1(x,y) is the initialized background frame, and T is the threshold, which suppresses shadows depending on the value assigned.

Fig. 2 Background subtraction: (a) background image initialization, (b) current frame with moving objects, (c) resultant background-subtracted image
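The paper reports a MATLAB implementation; as a rough illustration of the preprocessing and of Eq. (1), the following Python/OpenCV sketch mean-filters both images and thresholds their absolute difference. The function name, the 5x5 kernel size, and the threshold value of 30 are illustrative assumptions, not values taken from the paper.

    import cv2

    def detect_motion(frame, background, threshold=30, ksize=5):
        # Convert to grayscale and mean-filter both images to suppress
        # noise and soften shadows (the preprocessing step above).
        f = cv2.blur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (ksize, ksize))
        b = cv2.blur(cv2.cvtColor(background, cv2.COLOR_BGR2GRAY), (ksize, ksize))

        # Eq. (1): pixels whose absolute difference from the initialized
        # background exceeds T are marked white (moving), the rest black.
        diff = cv2.absdiff(f, b)
        _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
        return mask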
There are many ways to initialize the background image: for example, taking the first frame as the background directly, taking the average pixel brightness of the first few frames as the background, or using an image sequence without any moving objects to estimate the background model parameters, depending on the application. Among these, we prefer an image sequence containing no objects as the background image, since we use indoor videos, which exhibit little illumination change. Figure 2 illustrates the result of background subtraction.

A drastic change in a pixel's intensity indicates that the pixel is in motion. The background subtraction step generates a binary image containing black (background) and white (moving) pixels.

Then a post-processing step is applied to the binary image to label groups of motion pixels as motion blobs using connected component analysis. The key idea of connected component analysis is to attach adjacent foreground pixels (i.e., white pixels) in order to construct a region. Connected component labeling is used in computer vision to detect connected regions in binary digital images. Blobs may be counted, filtered, and tracked.
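As a sketch of this post-processing step, the snippet below groups the white pixels of the binary motion mask into labelled blobs using OpenCV's connected component analysis and discards small regions. The helper name and the minimum-area value are illustrative assumptions; the input is assumed to be a mask such as the one returned by detect_motion above.

    import cv2
    import numpy as np

    def extract_blobs(motion_mask, min_area=400):
        # Label 8-connected groups of white pixels as candidate motion blobs.
        n, labels, stats, centroids = cv2.connectedComponentsWithStats(
            motion_mask, connectivity=8)
        blobs = []
        for i in range(1, n):                        # label 0 is the background
            if stats[i, cv2.CC_STAT_AREA] >= min_area:   # filter tiny regions
                x, y = stats[i, cv2.CC_STAT_LEFT], stats[i, cv2.CC_STAT_TOP]
                w, h = stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT]
                blobs.append({"mask": (labels == i).astype(np.uint8),
                              "bbox": (x, y, w, h),
                              "centroid": tuple(centroids[i])})
        return blobs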
2. Object tracking

Once the object areas are determined in each frame, tracking is performed to trace the objects from frame to frame. The color information of each blob is derived, and tracking is performed by matching blob colors. To handle occlusion, the key feature of the proposed method is that the color information of each object is extracted cluster by cluster. Each cluster has its own weightage for comparison. The color information is extracted from the motion blocks in the current frame in order to match color information between motion blocks in the current frame and previous frames. Subsequently, a tag is assigned to the motion blocks in the current frame.

Fig. 3 The workflow of the color-based motion tracking component

The first sub-task in object tracking is that each motion block in the current frame is segmented into areas of almost similar color as clusters (head, torso, and feet). For each cluster of the motion block, color information is then derived as HSV values and stored, which helps in comparison.

The second sub-task is to identify matching color information between motion blocks in the current frame and motion blocks in the previous frames. This is done by comparing the color information of a cluster of the motion block in the current frame with the color information of the clusters of all motion blocks in the previous frames using weighted matching. For each comparison made, the processor computes a respective comparison score. The comparison score for each cluster of the motion block in the current frame is stored. The processor then identifies the highest comparison score of each cluster in the current frame. This is repeated for every cluster of the motion block in the current frame.

The third sub-task is that, once the average comparison score of the motion block in the current frame is computed, the processor assigns a tag to the motion block in the current frame. The processor tags the motion block in the current frame with either a tag similar to that of the previous frames or a new tag. The decision to retain a tag or assign a new tag depends on the average comparison score computed between the motion block in the current frame and all motion blocks in the previous frame:

    If (comparison score > threshold)
        Assign previous tag
    Else
        Assign new tag
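To make the three sub-tasks concrete, the sketch below segments a blob into three vertical clusters (head, torso, feet), summarizes each cluster as an HSV histogram, computes a weighted comparison score against the blobs tagged in the previous frame, and applies the tag rule above. The cluster weights, histogram sizes, similarity measure (histogram correlation), and score threshold are all illustrative assumptions; the paper does not specify them, and its reported implementation is in MATLAB rather than Python.

    import cv2
    import numpy as np

    # Illustrative per-cluster weights (head, torso, feet); the paper states
    # each cluster has its own weightage but does not give the values.
    WEIGHTS = (0.2, 0.5, 0.3)

    def cluster_histograms(frame_hsv, blob_mask):
        # Split the blob into three vertical bands and build one
        # hue-saturation histogram per band (head, torso, feet).
        ys, _ = np.nonzero(blob_mask)
        bounds = np.linspace(ys.min(), ys.max(), 4).astype(int)
        hists = []
        for i in range(3):
            band = np.zeros_like(blob_mask)
            band[bounds[i]:bounds[i + 1], :] = blob_mask[bounds[i]:bounds[i + 1], :]
            h = cv2.calcHist([frame_hsv], [0, 1], band, [16, 16], [0, 180, 0, 256])
            hists.append(cv2.normalize(h, h).flatten())
        return hists

    def comparison_score(hists_a, hists_b):
        # Weighted sum of per-cluster histogram similarities.
        return sum(w * cv2.compareHist(a, b, cv2.HISTCMP_CORREL)
                   for w, a, b in zip(WEIGHTS, hists_a, hists_b))

    def assign_tag(hists, previous_blobs, next_tag, threshold=0.7):
        # previous_blobs maps tag -> cluster histograms from the previous frame.
        # Reuse the best-matching previous tag if its score exceeds the
        # threshold; otherwise issue a new tag.
        best_tag, best_score = None, -1.0
        for tag, prev_hists in previous_blobs.items():
            score = comparison_score(hists, prev_hists)
            if score > best_score:
                best_tag, best_score = tag, score
        return best_tag if best_score > threshold else next_tag

In a complete tracker, each mask produced by extract_blobs would be passed to cluster_histograms together with the HSV-converted frame, and the resulting tag-to-histogram dictionary would be carried forward to process the next frame.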
4. TESTS ON PETS DATASET

The above algorithm is implemented in MATLAB on the Windows 7 platform and tested on a machine with 4 GB of RAM. The test video for this example is taken from the PETS-ECCV2004 CAVIAR database, which is an open database for research on visual surveillance.

A. Motion detection

Accuracy in motion detection is important for efficient tracking. The threshold should be set so as to avoid shadows to a greater extent; the blob size should also be maintained properly, and it depends on the application. Figure 4 illustrates the results obtained with various threshold values.

Fig. 4 Motion detection

B. Object tracking

Assigning a suitable tag accurately under occlusion conditions is illustrated below. Color feature extraction and matching provide a good solution for assigning tags, and clustering helps in reducing the cost of comparison. Figure 5 shows the handling of occlusions during tracking.

Fig. 5 Occlusion handling

5. CONCLUSION

The advantage of using color as a feature to measure object similarity is analyzed, and it is found to be robust against complex, deformed, and changeable shapes (i.e., different human profiles). In addition, it is scale and rotation invariant, as well as fast in terms of processing time. Color information is extracted, stored, and compared to establish the uniqueness of each object.
REFERENCES

[1] R. Cucchiara, C. Grana, G. Neri, M. Piccardi and A. Prati, "The Sakbot system for moving object detection and tracking," Video-Based Surveillance Systems: Computer Vision and Distributed Processing, pp. 145-157, 2001.
[2] C. Stauffer and W. E. L. Grimson, "Adaptive background mixture models for real-time tracking," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1999.
[3] C. Wren, A. Azarbayejani, T. Darrell and A. Pentland, "Pfinder: Real-time tracking of the human body," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 780-785, 1997.
[4] L. Qiu and L. Li, "Contour extraction of moving objects," in Proc. IEEE Int'l Conf. Pattern Recognition, vol. 2, pp. 1427-1432, 1998.
[5] Tang Sze Ling, Liang Kim Meng, Lim Mei Kuan, Zulaikha Kadim and Ahmed A. Baha'a Al-Deen, "Colour-based object tracking in surveillance application," in Proc. International MultiConference of Engineers and Computer Scientists (IMECS 2009), vol. I, Hong Kong, March 18-20, 2009.
[6] S. J. McKenna, S. Jabri, Z. Duric, A. Rosenfeld and H. Wechsler, "Tracking groups of people," Computer Vision and Image Understanding, vol. 80, no. 1, pp. 42-56, 2000.
[7] I. Haritaoglu, D. Harwood and L. S. Davis, "W4: Real-time surveillance of people and their activities," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 809-830, 2000.
[8] A. Lipton, H. Fujiyoshi and R. Patil, "Moving target classification and tracking from real-time video," in Proc. DARPA Image Understanding Workshop, pp. 129-136, November 1998.
[9] P. Pérez, J. Vermaak and A. Blake, "Data fusion for tracking with particles," Proceedings of the IEEE, vol. 92, no. 3, pp. 495-513, 2004.
[10] C. Shen, A. van den Hengel and A. Dick, "Probabilistic multiple cue integration for particle filter based tracking," in Proc. VIIth Digital Image Computing: Techniques and Applications, C. Sun, H. Talbot, S. Ourselin and T. Adriansen, Eds., 10-12, 2003.
[11] H. Wang et al., "Adaptive object tracking based on an effective appearance filter," IEEE Trans. Pattern Analysis and Machine Intelligence, 2007.
