Anisotropic Partial Differential Equation
based Video Saliency Detection
Vartika Sharma, Vembarasan Vaitheeswaran, Chee Seng Chan
Original LESD Model
Our Contributions
• First, we propose a novel method to generate a static saliency map based on an adaptive nonlinear PDE model. It builds on the Linear Elliptic System with Dirichlet boundary (LESD) model for image saliency detection.
• We refine this model for video saliency detection because the original LESD model does not consider the orientation and motion information contained in a video.
• Further, the original LESD algorithm was tested on the MSRA and Berkeley datasets, where images are mostly noiseless and the salient object lies near the image center; most video datasets, in contrast, contain heavy noise, and the salient object usually moves across frames. For this reason, we do not use the center prior of the original LESD model; instead, we build an extensive direction map from background prior, color prior, texture, and luminance features.
• We then combine the static map with a motion map, built from motion features extracted from the motion vectors of predicted frames, to obtain the final saliency map. Figure 1 shows the pipeline of our model.
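The final fusion step above can be sketched as a simple weighted combination of the two maps. This is a minimal illustration, not the paper's exact fusion rule: the `alpha` weight and the linear blend are assumptions.

```python
import numpy as np

def fuse_saliency(static_map, motion_map, alpha=0.5):
    """Fuse a static and a motion saliency map into a final map.

    Both maps are assumed to be 2-D arrays of the same shape with
    values in [0, 1]; alpha weights the static component.
    """
    static_map = np.asarray(static_map, dtype=float)
    motion_map = np.asarray(motion_map, dtype=float)
    fused = alpha * static_map + (1.0 - alpha) * motion_map
    # Re-normalise so the final map again spans [0, 1].
    rng = fused.max() - fused.min()
    return (fused - fused.min()) / rng if rng > 0 else fused

# Example: static and motion cues highlight different regions of a frame.
static = np.array([[0.9, 0.1], [0.2, 0.1]])
motion = np.array([[0.1, 0.8], [0.1, 0.1]])
final = fuse_saliency(static, motion)
```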
Addition of Non-Linear Metric Tensor
• The diffusion PDE seen previously does not give reliable information
in the presence of flow-like structures (e.g. fingerprints).
• We extend our model to flow-like structures by rotating the PDE flow
towards the orientation of the features of interest.
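The orientation towards which the PDE flow is rotated is commonly estimated from the smoothed structure tensor of the image. The sketch below shows that estimation step only (not the full tensor-driven PDE); the Sobel derivatives and Gaussian smoothing scale are illustrative choices.

```python
import numpy as np
from scipy import ndimage

def structure_tensor_orientation(img, sigma=1.0):
    """Estimate the local orientation of flow-like structures.

    Builds the smoothed structure tensor J = G_sigma * (grad I grad I^T)
    and returns, per pixel, the orientation of its dominant eigenvector,
    i.e. the direction towards which a tensor-driven PDE would be rotated.
    """
    img = np.asarray(img, dtype=float)
    Ix = ndimage.sobel(img, axis=1)   # derivative along x (columns)
    Iy = ndimage.sobel(img, axis=0)   # derivative along y (rows)
    Jxx = ndimage.gaussian_filter(Ix * Ix, sigma)
    Jxy = ndimage.gaussian_filter(Ix * Iy, sigma)
    Jyy = ndimage.gaussian_filter(Iy * Iy, sigma)
    # Orientation of the dominant eigenvector of the 2x2 tensor.
    theta = 0.5 * np.arctan2(2.0 * Jxy, Jxx - Jyy)
    return theta
```

For an image of vertical stripes (intensity varying only along x), the dominant gradient direction is horizontal, so the returned angle is near zero.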
Addition of Non-Linear Metric Tensor
Feature Extraction From DCT Coefficients
• Three features, luminance, color, and texture, are extracted
from the unpredicted frames (I-frames) using DCT coefficients.
• On a given video frame, the DCT operates on one 8×8 block at a time.
Each block holds 64 elements (64 coefficients), and the DCT traverses
the block from left to right and top to bottom (zig-zag sequencing).
Feature Extraction From DCT Coefficients
• A 64-element DCT transform yields 1 DC coefficient and 63
AC coefficients.
• The DC coefficient represents the average color of the 8×8 region
(color and luminance prior).
• The 63 AC coefficients represent color change across the
block (texture).
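The DC/AC split described above can be sketched with a 2-D DCT on one 8×8 block. The summed AC energy as a texture measure is an illustrative choice, not necessarily the paper's exact texture feature.

```python
import numpy as np
from scipy.fftpack import dct

def block_dct_features(block):
    """Split an 8x8 pixel block's 2-D DCT into DC and AC parts.

    The DC coefficient (top-left) carries the block's average
    intensity (luminance/color prior); the 63 AC coefficients carry
    the variation across the block (texture).
    """
    block = np.asarray(block, dtype=float)
    # Separable 2-D DCT: apply the orthonormal DCT along rows, then columns.
    coeffs = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
    dc = coeffs[0, 0]
    ac = coeffs.flatten()[1:]          # the 63 AC coefficients
    texture_energy = float(np.sum(ac ** 2))
    return dc, ac, texture_energy

# A flat block puts all its energy in DC and has zero texture energy.
flat = np.full((8, 8), 100.0)
dc, ac, tex = block_dct_features(flat)
```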
Motion Feature Extraction from Motion
Vectors
• Motion Vector: A two-dimensional vector used for inter prediction
that provides an offset from the coordinates in the decoded picture to
the coordinates in a reference picture.
• There are two types of predicted frames: P frames use motion
compensated prediction from a past reference frame, while B frames
are bidirectionally predictive-coded by using motion compensated
prediction from a past and/or a future reference frame.
Motion Feature Extraction from Motion
Vectors
• As there is just one prediction direction for P frames (prediction
from a past reference frame), the original motion vectors MV are used
to represent the motion feature for P frames.
• As B frames may include both types of motion-compensated prediction
(backward and forward), we combine the two directions to calculate the
motion vectors for B frames.
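One way to reduce a B frame's two prediction directions to a single motion feature is sketched below. The convention of negating the backward vector and averaging when both directions are present is an assumption for illustration, not necessarily the paper's exact rule.

```python
import numpy as np

def b_frame_motion(mv_forward=None, mv_backward=None):
    """Derive a single 2-D motion feature for a B-frame block.

    A B frame may carry forward, backward, or both prediction
    directions. Here the backward vector is negated so both point
    along the display-time direction, and the two are averaged when
    both are present (illustrative convention).
    """
    if mv_forward is not None and mv_backward is not None:
        return 0.5 * (np.asarray(mv_forward, dtype=float)
                      - np.asarray(mv_backward, dtype=float))
    if mv_forward is not None:
        return np.asarray(mv_forward, dtype=float)
    if mv_backward is not None:
        return -np.asarray(mv_backward, dtype=float)
    return np.zeros(2)  # intra-coded block: no motion information
```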
Result of our Video Saliency Detection model on KTH Action
Dataset
Results on KTH Action Datasetϯ
Number of action classes = 6
{boxing, hand clapping, hand waving, jogging, running, walking}
Boxing Hand Clapping Hand Waving Jogging Running Walking
Original Action Videos*
Final Saliency Maps
* For convenience, only 16 frames per video are shown.
Ϯ "Recognizing Human Actions: A Local SVM Approach", Christian Schuldt, Ivan Laptev and Barbara Caputo; in Proc. ICPR'04, Cambridge, UK.
DC   AC01 AC02 AC03 AC04 AC05 AC06 AC07
AC10 AC11 AC12 AC13 AC14 AC15 AC16 AC17
AC20 AC21 AC22 AC23 AC24 AC25 AC26 AC27
AC30 AC31 AC32 AC33 AC34 AC35 AC36 AC37
AC40 AC41 AC42 AC43 AC44 AC45 AC46 AC47
AC50 AC51 AC52 AC53 AC54 AC55 AC56 AC57
AC60 AC61 AC62 AC63 AC64 AC65 AC66 AC67
AC70 AC71 AC72 AC73 AC74 AC75 AC76 AC77
• We performed salient region segmentation using the MCMC segmentation method proposed by
Barbu et al. \cite{Barbu2012} for crowd counting. The main purpose of our experiment is to
estimate the crowd in a given video frame and to calculate the rate at which the crowd
count changes across consecutive frames. Although CCTV cameras are now very common in
video surveillance, very few algorithms are available for real-time automated crowd
counting. Note that our focus is on the rate of change of the crowd count rather than the
actual count in every frame: a sudden increase or decrease in the crowd count can act as a
warning sign of an unusual activity such as an explosion, a fight, or some other
emergency. For our experiment, we use the standard deviation of the crowd count over
consecutive video frames, computed every 10 seconds, as a risk indicator. We train our
algorithm on 2000 video frames from the Mall Dataset \cite{Loy2013} to set the threshold
on this standard deviation below which the rate of change of crowd count is considered
'safe'. We further test our algorithm on a few videos from the Pedestrian Traffic Database.
The figure below shows our result on the Mall Dataset for crowd counting.
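The windowed risk indicator described above can be sketched as follows. The frame rate and the threshold value are placeholders, not the values fitted on the Mall Dataset.

```python
import numpy as np

def risk_flags(counts, fps=25, window_sec=10, threshold=3.0):
    """Flag 10-second windows whose crowd-count variation is abnormal.

    counts    : per-frame crowd-count estimates
    threshold : std-dev limit learned from training frames (the value
                here is a placeholder, not the fitted one)
    Returns one boolean per full window: True = possible emergency.
    """
    counts = np.asarray(counts, dtype=float)
    win = fps * window_sec                 # frames per window
    n_windows = len(counts) // win
    flags = []
    for i in range(n_windows):
        segment = counts[i * win:(i + 1) * win]
        flags.append(float(np.std(segment)) > threshold)
    return flags

# A stable crowd followed by a sudden surge trips the flag in window 2.
stable = [50] * 250                 # 10 s at 25 fps, constant count
surge = [50] * 125 + [90] * 125     # count jumps mid-window
flags = risk_flags(stable + surge)
```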
\begin{figure}
\begin{center}
\includegraphics[height=0.95\linewidth, width=0.95\linewidth]{final2.png}
\end{center}
\caption{Crowd counting result on frames of the Mall Dataset. $(a)$ shows the original video frames, $(b)$ our saliency detection results, and $(c)$ the segmentation based on the MCMC method.}
\label{fig:final2}
\end{figure}
Qualitative comparison of saliency maps: CIOFM, MRS, SURPRISE, CA, and Our Model.
Thank You