Extracting the Object from the Shadows: Maximum
Likelihood Object/Shadow Discrimination
Kan Ouivirach and Matthew N. Dailey
Computer Science and Information Management
Asian Institute of Technology
May 16, 2013
Outline
1 Introduction
2 Proposed Method
3 Experimental Results
4 Conclusion and Future Work
Introduction
Overview
Detecting and tracking moving objects is a fundamental problem in video surveillance.
Background subtraction is a very common approach to detecting moving objects.
One major disadvantage is that shadows tend to be misclassified as part of
the foreground object.
Introduction
Problems
The size of the detected object could be overestimated.
This leads to merging otherwise separate blobs representing different
people walking close to each other.
This makes isolating and tracking people in a group much more
difficult than necessary.
Figure: Example problem with shadows. (a) Original image. (b) Foreground pixels according to background model.
Introduction
Why Important?
Shadow removal can significantly improve the performance of computer
vision tasks such as
tracking;
segmentation;
object detection.
Shadow detection has become an active research area in recent years.
Introduction
Related Work
Chromaticity and luminance are not orthogonal in the RGB color space,
but lighting differences can be controlled for in the normalized RGB color
space.
Mikić et al. (2000) observe that in the normalized RGB color space,
shadow pixels tend to be more blue and less red than illuminated pixels.
They apply a probabilistic model based on the normalized red and blue
features to classify shadow pixels in traffic scenes.
One well-known problem with the normalized RGB space is that
normalization of pixels with low intensity results in unstable chromatic
components (Kender; 1976).
Introduction
Related Work
Cucchiara et al. (2001) and Chen et al. (2008) use an HSV color-based method (a deterministic nonmodel-based method) that avoids these concerns, based on the assumption that only the intensity of a shadow region changes significantly.
SPt(x, y) =
  1  if α ≤ I^V_t(x, y) / B^V_t(x, y) ≤ β
       ∧ (I^S_t(x, y) − B^S_t(x, y)) ≤ T_S
       ∧ |I^H_t(x, y) − B^H_t(x, y)| ≤ T_H
  0  otherwise,

where SPt(x, y) is the resulting binary mask for shadows at each pixel (x, y) at time t. I^H_t, I^S_t, I^V_t, B^H_t, B^S_t, and B^V_t are the H, S, and V components of foreground pixel It(x, y) and background pixel Bt(x, y) at pixel (x, y) at time t, respectively. They prevent foreground pixels from being classified as shadow pixels by setting two thresholds, 0 < α < β < 1. The four thresholds α, β, T_S, and T_H are empirically determined.
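As a concrete illustration of this rule (not the authors' implementation), the mask can be computed with a short NumPy routine. The array layout, the value scaling of H, S, and V, and the hue wrap-around handling are assumptions made for this sketch.

```python
import numpy as np

def dnm_shadow_mask(frame_hsv, bg_hsv, alpha, beta, t_s, t_h):
    """Deterministic nonmodel-based (DNM) shadow mask, following the HSV rule above.

    frame_hsv, bg_hsv: float arrays of shape (H, W, 3) holding H, S, V channels.
    Assumes S and V are scaled to [0, 1] and H is in degrees; these scalings are
    illustrative assumptions, not values taken from the paper.
    """
    ih, is_, iv = frame_hsv[..., 0], frame_hsv[..., 1], frame_hsv[..., 2]
    bh, bs, bv = bg_hsv[..., 0], bg_hsv[..., 1], bg_hsv[..., 2]

    ratio = iv / np.maximum(bv, 1e-6)            # V ratio, guarded against division by zero
    h_diff = np.abs(ih - bh)
    h_diff = np.minimum(h_diff, 360.0 - h_diff)  # hue is circular

    return ((ratio >= alpha) & (ratio <= beta) &
            ((is_ - bs) <= t_s) &
            (h_diff <= t_h))
```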
Introduction
Related Work
Some researchers have investigated color spaces besides RGB and HSV.
Blauensteiner et al. (2006) use an “improved” hue, luminance, and saturation (IHLS) color space for shadow detection, dealing with the issue of unstable hue at low saturation by modeling the relationship between hue and saturation.
Another alternative color space is YUV. Some applications such as
television and videoconferencing use the YUV color space natively, and
since transformation from YUV to HSV is time-consuming,
Schreer et al. (2002) operate in the YUV color space directly, developing a
fast shadow detection algorithm based on approximated changes of hue
and saturation in the YUV color space.
Introduction
Related Work
Some work uses texture-based methods such as the normalized
cross-correlation (NCC) technique. This method detects shadows based on
the assumption that the intensity of shadows is proportional to the
incident light, so shadow pixels should simply be darker than the
corresponding background pixels.
However, texture-based methods tend to misclassify foreground pixels as shadow pixels when the foreground region has a texture similar to that of the corresponding background region.
Outline
1 Introduction
2 Proposed Method
3 Experimental Results
4 Conclusion and Future Work
Proposed Method
Overview
We propose a new method for detecting shadows using maximum
likelihood estimation based on color information.
We extend the deterministic nonmodel-based approach to a statistical model-based approach.
We estimate the joint distribution over the difference in the HSV
color space between pixels in the current frame and the corresponding
pixels in a background model, conditional on whether the pixel is an
object pixel or a shadow pixel.
We use the maximum likelihood principle, at run time, to classify each
foreground pixel as either shadow or object given the estimated model.
Proposed Method
Overview: Offline and Online Phases
We divide our method into two phases:
1 Offline phase:
Construct a background model from the first few frames;
Extract foreground pixels from the remaining frames;
Manually label the extracted pixels as either object or shadow pixels;
Construct a joint probability model over the difference in the HSV color
space between pixels in the current frame and the corresponding pixels
in the background model, conditional on whether the pixel is an object
pixel or a shadow pixel.
2 Online phase:
Perform the same background modeling and foreground extraction
procedure;
Classify foreground pixels as either shadow or object using the
maximum likelihood approach.
Proposed Method
Processing Flow
Our processing flow is as follows.
1 Global Motion Detection
2 Foreground Extraction
3 Maximum Likelihood Classification of Foreground Pixels (Offline and
Online Phases)
Proposed Method
Global Motion Detection
We use simple frame-by-frame subtraction.
When the difference image contains enough pixels whose intensity differences exceed a threshold, we start a new video segment.
Frames with no motion are removed to reduce processing time and storage.
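A minimal sketch of this motion check, assuming OpenCV BGR frames; the threshold values shown are placeholders, not settings from the paper.

```python
import cv2
import numpy as np

def has_motion(prev_frame, curr_frame, pixel_thresh=25, count_thresh=500):
    """Return True if enough pixels change between consecutive frames.

    pixel_thresh: minimum absolute intensity difference for a pixel to count.
    count_thresh: minimum number of changed pixels to declare motion.
    Both thresholds are illustrative, not values from the paper.
    """
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(prev_gray, curr_gray)
    return int(np.count_nonzero(diff > pixel_thresh)) > count_thresh
```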
Proposed Method
Foreground Extraction
After discarding the no-motion video segments, we use the background modeling method of Poppe et al. (2007) to segment foreground pixels from the background.
Figure: Example foreground extraction. (a) Original image. (b) Foreground pixels according to background model.
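For illustration only, a standard OpenCV Gaussian-mixture background subtractor can stand in for the extended GMM model of Poppe et al. (2007) used here; the parameters shown are OpenCV defaults, not the paper's settings.

```python
import cv2

# Standard GMM background subtractor as an illustrative substitute for the
# extended GMM model referenced in the slides (not the actual method used).
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=False)

def extract_foreground(frame):
    """Return a binary foreground mask for one BGR frame."""
    mask = subtractor.apply(frame)
    # MOG2 marks foreground as 255; keep only those pixels.
    return mask == 255
```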
Proposed Method
Maximum Likelihood Classification of Foreground Pixels
In the offline phase:
After foreground extraction, we manually label pixels as either shadow
or object.
We then observe the distribution over the difference in hue (Hdiff),
saturation (Sdiff), and value (Vdiff) components in the HSV color
space between pixels in the current frame and the corresponding
pixels in the background model.
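A rough sketch of how these per-pixel HSV differences might be collected for the labeled foreground pixels; the float conversion and the neglect of hue wrap-around are simplifying assumptions.

```python
import cv2
import numpy as np

def hsv_differences(frame_bgr, background_bgr, fg_mask):
    """Return (Hdiff, Sdiff, Vdiff) for the pixels selected by fg_mask.

    Differences are taken between the current frame and the background model
    in HSV space; float32 conversion avoids uint8 wrap-around, and hue
    circularity is ignored here for brevity.
    """
    frame_hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    bg_hsv = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    diff = frame_hsv - bg_hsv
    h_diff, s_diff, v_diff = diff[..., 0], diff[..., 1], diff[..., 2]
    return h_diff[fg_mask], s_diff[fg_mask], v_diff[fg_mask]
```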
Proposed Method
Distributions over the Differences in HSV Components for True Object Pixels
Figure: Example distributions over the difference in (a) hue, (b) saturation, and (c) value components for true object pixels, extracted from our hallway dataset.
Proposed Method
Distributions over the Differences in HSV Components for Shadow Pixels
Figure: Example distributions over the difference in (a) hue, (b) saturation, and (c) value components for shadow pixels, extracted from our hallway dataset.
Proposed Method
Measurement Likelihood for Shadow Pixels
We define the measurement likelihood for pixel (x, y) given its assignment
as follows.
P(Mxy | Axy = sh) = P(Hdiff | Axy = sh) · P(Sdiff | Axy = sh) · P(Vdiff | Axy = sh),
where Mxy is a tuple containing the HSV values for pixel (x, y) in the current image and in the background model, and Axy is the assignment of pixel (x, y) as object or shadow. “sh” stands for shadow.
Proposed Method
Measurement Likelihood for Shadow Pixels
To make the problem tractable, we assume that the distributions over the
components on the right hand side in the previous equation follow
Gaussian distributions, defined as follows.
P(Hdiff | Axy = sh) = N(Hdiff; µ_{Hdiff}^{sh}, σ²_{Hdiff}^{sh})
P(Sdiff | Axy = sh) = N(Sdiff; µ_{Sdiff}^{sh}, σ²_{Sdiff}^{sh})
P(Vdiff | Axy = sh) = N(Vdiff; µ_{Vdiff}^{sh}, σ²_{Vdiff}^{sh})
Proposed Method
Measurement Likelihood for Object Pixels
Similarly, the measurement likelihood for object pixels can be computed as
follows.
P(Mxy | Axy = obj) = P(Hdiff | Axy = obj) · P(Sdiff | Axy = obj) · P(Vdiff | Axy = obj).
Here “obj” stands for object.
Proposed Method
Measurement Likelihood for Object Pixels
As for the shadow pixel distributions, we assume Gaussian distributions
over the components on the right hand side in the previous equation, as
follows.
P(Hdiff | Axy = obj) = N(Hdiff; µ_{Hdiff}^{obj}, σ²_{Hdiff}^{obj})
P(Sdiff | Axy = obj) = N(Sdiff; µ_{Sdiff}^{obj}, σ²_{Sdiff}^{obj})
P(Vdiff | Axy = obj) = N(Vdiff; µ_{Vdiff}^{obj}, σ²_{Vdiff}^{obj})
Proposed Method
Model Parameters
We estimate the parameters
Θ = {µ_{Hdiff}^{sh}, σ²_{Hdiff}^{sh}, µ_{Sdiff}^{sh}, σ²_{Sdiff}^{sh}, µ_{Vdiff}^{sh}, σ²_{Vdiff}^{sh}, µ_{Hdiff}^{obj}, σ²_{Hdiff}^{obj}, µ_{Sdiff}^{obj}, σ²_{Sdiff}^{obj}, µ_{Vdiff}^{obj}, σ²_{Vdiff}^{obj}}
directly from training data during the offline phase.
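Estimating Θ then reduces to computing a sample mean and variance for each of the six class-conditional difference distributions. A minimal sketch, assuming the labeled differences have already been gathered into per-class arrays:

```python
import numpy as np

def fit_theta(sh_diffs, obj_diffs):
    """Estimate the 12 Gaussian parameters from labeled training pixels.

    sh_diffs, obj_diffs: dicts mapping 'H', 'S', 'V' to 1-D arrays of the
    corresponding differences for shadow and object pixels, respectively.
    Returns {(class, channel): (mean, variance)}.
    """
    theta = {}
    for label, diffs in (("sh", sh_diffs), ("obj", obj_diffs)):
        for channel in ("H", "S", "V"):
            samples = np.asarray(diffs[channel], dtype=np.float64)
            theta[(label, channel)] = (samples.mean(), samples.var())
    return theta
```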
Proposed Method
Pixel Classification
In the online phase:
Given the model estimate Θ, we use the maximum likelihood approach to
classify a pixel as a shadow pixel if
P(Mxy | Axy = sh; Θ) > P(Mxy | Axy = obj; Θ).
Otherwise, we classify the pixel as an object pixel.
We could incorporate prior probabilities for the shadow and object classes into the decision rule above to obtain a maximum a posteriori classifier.
In our experiments, we assume equal priors.
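A sketch of this run-time decision rule under the fitted Gaussians, reusing the theta dictionary from the estimation sketch above; the variance floor is an added numerical safeguard, not part of the paper.

```python
import numpy as np

def log_gaussian(x, mean, var):
    """Log density of a univariate Gaussian, with a small variance floor."""
    var = max(var, 1e-9)
    return -0.5 * (np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var)

def classify_pixel(h_diff, s_diff, v_diff, theta):
    """Return 'sh' if the shadow likelihood exceeds the object likelihood."""
    scores = {}
    for label in ("sh", "obj"):
        scores[label] = sum(
            log_gaussian(value, *theta[(label, channel)])
            for channel, value in (("H", h_diff), ("S", s_diff), ("V", v_diff)))
    # Adding log priors here would turn this into a MAP classifier.
    return "sh" if scores["sh"] > scores["obj"] else "obj"
```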
Outline
1 Introduction
2 Proposed Method
3 Experimental Results
4 Conclusion and Future Work
Experimental Results
Overview
We present the experimental results for
1 our proposed maximum likelihood (ML) classification method;
2 the deterministic nonmodel-based (DNM) method;
3 the normalized cross-correlation (NCC) method.
Experimental Results
Video Sequences
We performed the experiments on three video sequences. The figure below
shows sample frames from the three video sequences.
Figure: Sample frames from the (a) Hallway, (b) Laboratory, and (c) Highway video sequences.
The Hallway sequence is our own dataset. The Laboratory and Highway
sequences were first introduced in Prati et al. (2003).
Experimental Results
Performance Evaluation
We compute the two metrics proposed by Prati et al. (2003), defining the
shadow detection rate η and the shadow discrimination rate ξ as follows:
η = TPsh / (TPsh + FNsh);   ξ = TPobj / (TPobj + FNobj),
where the subscripts “sh” and “obj” stand for shadow and object, respectively. TP and FN are the numbers of true positive pixels (shadow or object pixels correctly identified) and false negative pixels (shadow or object pixels classified incorrectly).
Experimental Results
Metrics for Evaluation
More information about η and ξ
η: the proportion of shadow pixels correctly detected
ξ: the proportion of object pixels correctly detected.
η and ξ can also be thought of as the true positive rate (sensitivity) and the true negative rate (specificity) for detecting shadows, respectively.
In the experiments, we also compare the methods using two additional metrics: precision and F1 score.
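From per-pixel counts, the two rates, together with precision and F1, can be computed as below; treating misclassified object pixels as the false positives for the shadow class is an assumption of this sketch, not a definition taken from the paper.

```python
def shadow_metrics(tp_sh, fn_sh, tp_obj, fn_obj):
    """Shadow detection rate (eta), discrimination rate (xi), plus shadow
    precision and F1. In this two-class setting, object pixels misclassified
    as shadow (fn_obj) are treated as false positives for the shadow class.
    """
    eta = tp_sh / (tp_sh + fn_sh)          # recall for shadow pixels
    xi = tp_obj / (tp_obj + fn_obj)        # recall for object pixels
    precision = tp_sh / (tp_sh + fn_obj)   # shadow precision
    f1 = 2.0 * precision * eta / (precision + eta)
    return eta, xi, precision, f1
```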
Experimental Results
Preparation for the Experiments
Ground truth data:
Ground truth for the Laboratory and Highway video sequences is provided in Sanin et al. (2012).
A standard Gaussian mixture (GMM) background model is used to extract foreground pixels.
For our Hallway video sequence, we used the previously mentioned
extended version of the GMM background model for foreground
extraction, but the results were not substantially different from those
of the standard GMM.
Experimental Results
Find the Best Parameters for Each Model
We performed five-fold cross validation for each of the three models and
each of the three data sets.
We varied the parameter settings for each method on each video dataset
and selected the setting that maximized the F1 score (a measure
combining both precision and recall) over the cross validation test sets.
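A generic sketch of this selection procedure; the candidate settings and the evaluation callback are placeholders rather than the actual grids used in the experiments.

```python
import numpy as np

def select_best_setting(settings, frames, evaluate_f1, k=5):
    """Pick the setting with the highest mean F1 over k cross-validation folds.

    settings: iterable of candidate parameter dictionaries (placeholder grid).
    evaluate_f1(setting, train_idx, test_idx): caller-supplied function that
    trains on the training fold and returns the F1 score on the test fold.
    """
    indices = np.arange(len(frames))
    folds = np.array_split(indices, k)
    best_setting, best_score = None, -1.0
    for setting in settings:
        scores = []
        for i in range(k):
            test_idx = folds[i]
            train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
            scores.append(evaluate_f1(setting, train_idx, test_idx))
        mean_score = float(np.mean(scores))
        if mean_score > best_score:
            best_setting, best_score = setting, mean_score
    return best_setting, best_score
```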
Experimental Results
Shadow Detection Performance
Table: Comparison of shadow detection results between the proposed, DNM, and NCC methods.
Experimental Results
Shadow Detection Performance
From the table, our proposed method
achieves the top performance for shadow detection rate η and F1
score in every case;
obtains a good shadow discrimination rate ξ in every case.
The DNM method performs consistently across all three videos, with good results on all metrics.
Both the DNM method and our proposed method suffer from the problem that object colors can be confused with the background color. This situation is clearly visible in the Highway sequence (third row in the next figure).
The NCC method achieves the best shadow discrimination rate ξ and precision because it classifies nearly every pixel as object, as can be seen in the next figure.
Experimental Results
Shadow Detection Performance
Figure: Results for an arbitrary frame in each video sequence. The first column contains an example original frame for each video sequence. The second column shows the ground truth for that frame, where object pixels are labeled in white and shadow pixels are labeled in gray. The remaining columns show shadow detection results for each method, where pixels labeled as object are shown in green and pixels labeled as shadow are shown in red.
Outline
1 Introduction
2 Proposed Method
3 Experimental Results
4 Conclusion and Future Work
Conclusion and Future Work
Conclusion
We propose a new method for detecting shadows using a simple
maximum likelihood approach based on color information;
We extend the deterministic nonmodel-based approach, designing a
parametric statistical model-based approach;
Our experimental results show that our proposed method is extremely
effective and superior to the standard methods on three different
real-world video surveillance data sets.
Conclusion and Future Work
Future Work
In some cases, we misdetect shadow pixels when the object and the background have similar colors or when the background texture in shadow regions is unclear;
Incorporating geometric or shadow region shape priors would
potentially improve the detection and discrimination rates;
We plan to address these issues, further explore the feasibility of
combining our method with other useful shadow features, and
integrate our shadow detection module with a real-world open source
video surveillance system.

More Related Content

What's hot

From Sense to Print: Towards Automatic 3D Printing from 3D Sensing Devices
From Sense to Print: Towards Automatic 3D Printing from 3D Sensing DevicesFrom Sense to Print: Towards Automatic 3D Printing from 3D Sensing Devices
From Sense to Print: Towards Automatic 3D Printing from 3D Sensing Devicestoukaigi
 
MLIP - Chapter 5 - Detection, Segmentation, Captioning
MLIP - Chapter 5 - Detection, Segmentation, CaptioningMLIP - Chapter 5 - Detection, Segmentation, Captioning
MLIP - Chapter 5 - Detection, Segmentation, CaptioningCharles Deledalle
 
VEHICLE RECOGNITION USING VIBE AND SVM
VEHICLE RECOGNITION USING VIBE AND SVM VEHICLE RECOGNITION USING VIBE AND SVM
VEHICLE RECOGNITION USING VIBE AND SVM cseij
 
MLIP - Chapter 3 - Introduction to deep learning
MLIP - Chapter 3 - Introduction to deep learningMLIP - Chapter 3 - Introduction to deep learning
MLIP - Chapter 3 - Introduction to deep learningCharles Deledalle
 
IMPACT OF ERROR FILTERS ON SHARES IN HALFTONE VISUAL CRYPTOGRAPHY
IMPACT OF ERROR FILTERS ON SHARES IN HALFTONE VISUAL CRYPTOGRAPHYIMPACT OF ERROR FILTERS ON SHARES IN HALFTONE VISUAL CRYPTOGRAPHY
IMPACT OF ERROR FILTERS ON SHARES IN HALFTONE VISUAL CRYPTOGRAPHYcscpconf
 
A short and naive introduction to using network in prediction models
A short and naive introduction to using network in prediction modelsA short and naive introduction to using network in prediction models
A short and naive introduction to using network in prediction modelstuxette
 
A comparatively study on visual cryptography
A comparatively study on visual cryptographyA comparatively study on visual cryptography
A comparatively study on visual cryptographyeSAT Journals
 
Visual cryptography scheme for color images
Visual cryptography scheme for color imagesVisual cryptography scheme for color images
Visual cryptography scheme for color imagesiaemedu
 
Image Processing 1
Image Processing 1Image Processing 1
Image Processing 1jainatin
 
Visual Attention Convergence Index for Virtual Reality Experiences
Visual Attention Convergence Index for Virtual Reality ExperiencesVisual Attention Convergence Index for Virtual Reality Experiences
Visual Attention Convergence Index for Virtual Reality ExperiencesPawel Kobylinski
 
Visual Cryptography in Meaningful Shares
Visual Cryptography in Meaningful SharesVisual Cryptography in Meaningful Shares
Visual Cryptography in Meaningful SharesDebarko De
 
A NEW VISUAL CRYPTOGRAPHY TECHNIQUE FOR COLOR IMAGES
A NEW VISUAL CRYPTOGRAPHY TECHNIQUE FOR COLOR IMAGESA NEW VISUAL CRYPTOGRAPHY TECHNIQUE FOR COLOR IMAGES
A NEW VISUAL CRYPTOGRAPHY TECHNIQUE FOR COLOR IMAGESIJTET Journal
 
Stereo Correspondence Estimation by Two Dimensional Real Time Spiral Search A...
Stereo Correspondence Estimation by Two Dimensional Real Time Spiral Search A...Stereo Correspondence Estimation by Two Dimensional Real Time Spiral Search A...
Stereo Correspondence Estimation by Two Dimensional Real Time Spiral Search A...MDABDULMANNANMONDAL
 
Visual cryptography scheme for color images
Visual cryptography scheme for color imagesVisual cryptography scheme for color images
Visual cryptography scheme for color imagesIAEME Publication
 
Deep Learning behind Prisma
Deep Learning behind PrismaDeep Learning behind Prisma
Deep Learning behind Prismalostleaves
 

What's hot (18)

From Sense to Print: Towards Automatic 3D Printing from 3D Sensing Devices
From Sense to Print: Towards Automatic 3D Printing from 3D Sensing DevicesFrom Sense to Print: Towards Automatic 3D Printing from 3D Sensing Devices
From Sense to Print: Towards Automatic 3D Printing from 3D Sensing Devices
 
MLIP - Chapter 5 - Detection, Segmentation, Captioning
MLIP - Chapter 5 - Detection, Segmentation, CaptioningMLIP - Chapter 5 - Detection, Segmentation, Captioning
MLIP - Chapter 5 - Detection, Segmentation, Captioning
 
291A_report_Hannah-Deepa-YunSuk
291A_report_Hannah-Deepa-YunSuk291A_report_Hannah-Deepa-YunSuk
291A_report_Hannah-Deepa-YunSuk
 
T01022103108
T01022103108T01022103108
T01022103108
 
VEHICLE RECOGNITION USING VIBE AND SVM
VEHICLE RECOGNITION USING VIBE AND SVM VEHICLE RECOGNITION USING VIBE AND SVM
VEHICLE RECOGNITION USING VIBE AND SVM
 
MLIP - Chapter 3 - Introduction to deep learning
MLIP - Chapter 3 - Introduction to deep learningMLIP - Chapter 3 - Introduction to deep learning
MLIP - Chapter 3 - Introduction to deep learning
 
IMPACT OF ERROR FILTERS ON SHARES IN HALFTONE VISUAL CRYPTOGRAPHY
IMPACT OF ERROR FILTERS ON SHARES IN HALFTONE VISUAL CRYPTOGRAPHYIMPACT OF ERROR FILTERS ON SHARES IN HALFTONE VISUAL CRYPTOGRAPHY
IMPACT OF ERROR FILTERS ON SHARES IN HALFTONE VISUAL CRYPTOGRAPHY
 
A short and naive introduction to using network in prediction models
A short and naive introduction to using network in prediction modelsA short and naive introduction to using network in prediction models
A short and naive introduction to using network in prediction models
 
A comparatively study on visual cryptography
A comparatively study on visual cryptographyA comparatively study on visual cryptography
A comparatively study on visual cryptography
 
Visual cryptography scheme for color images
Visual cryptography scheme for color imagesVisual cryptography scheme for color images
Visual cryptography scheme for color images
 
Image Processing 1
Image Processing 1Image Processing 1
Image Processing 1
 
Visual Attention Convergence Index for Virtual Reality Experiences
Visual Attention Convergence Index for Virtual Reality ExperiencesVisual Attention Convergence Index for Virtual Reality Experiences
Visual Attention Convergence Index for Virtual Reality Experiences
 
Visual Cryptography in Meaningful Shares
Visual Cryptography in Meaningful SharesVisual Cryptography in Meaningful Shares
Visual Cryptography in Meaningful Shares
 
A NEW VISUAL CRYPTOGRAPHY TECHNIQUE FOR COLOR IMAGES
A NEW VISUAL CRYPTOGRAPHY TECHNIQUE FOR COLOR IMAGESA NEW VISUAL CRYPTOGRAPHY TECHNIQUE FOR COLOR IMAGES
A NEW VISUAL CRYPTOGRAPHY TECHNIQUE FOR COLOR IMAGES
 
Stereo Correspondence Estimation by Two Dimensional Real Time Spiral Search A...
Stereo Correspondence Estimation by Two Dimensional Real Time Spiral Search A...Stereo Correspondence Estimation by Two Dimensional Real Time Spiral Search A...
Stereo Correspondence Estimation by Two Dimensional Real Time Spiral Search A...
 
Visual cryptography scheme for color images
Visual cryptography scheme for color imagesVisual cryptography scheme for color images
Visual cryptography scheme for color images
 
Graphics
GraphicsGraphics
Graphics
 
Deep Learning behind Prisma
Deep Learning behind PrismaDeep Learning behind Prisma
Deep Learning behind Prisma
 

Similar to Extracting the Object from the Shadows: Maximum Likelihood Object/Shadow Discrimination

OBJECT DETECTION AND RECOGNITION: A SURVEY
OBJECT DETECTION AND RECOGNITION: A SURVEYOBJECT DETECTION AND RECOGNITION: A SURVEY
OBJECT DETECTION AND RECOGNITION: A SURVEYJournal For Research
 
Image Denoising Based On Sparse Representation In A Probabilistic Framework
Image Denoising Based On Sparse Representation In A Probabilistic FrameworkImage Denoising Based On Sparse Representation In A Probabilistic Framework
Image Denoising Based On Sparse Representation In A Probabilistic FrameworkCSCJournals
 
APPEARANCE-BASED REPRESENTATION AND RENDERING OF CAST SHADOWS
APPEARANCE-BASED REPRESENTATION AND RENDERING OF CAST SHADOWSAPPEARANCE-BASED REPRESENTATION AND RENDERING OF CAST SHADOWS
APPEARANCE-BASED REPRESENTATION AND RENDERING OF CAST SHADOWSijcga
 
Synthetic aperture radar images
Synthetic aperture radar imagesSynthetic aperture radar images
Synthetic aperture radar imagessipij
 
OBJECT DETECTION FOR SERVICE ROBOT USING RANGE AND COLOR FEATURES OF AN IMAGE
OBJECT DETECTION FOR SERVICE ROBOT USING RANGE AND COLOR FEATURES OF AN IMAGEOBJECT DETECTION FOR SERVICE ROBOT USING RANGE AND COLOR FEATURES OF AN IMAGE
OBJECT DETECTION FOR SERVICE ROBOT USING RANGE AND COLOR FEATURES OF AN IMAGEIJCSEA Journal
 
Object Detection for Service Robot Using Range and Color Features of an Image
Object Detection for Service Robot Using Range and Color Features of an ImageObject Detection for Service Robot Using Range and Color Features of an Image
Object Detection for Service Robot Using Range and Color Features of an ImageIJCSEA Journal
 
Dj31514517
Dj31514517Dj31514517
Dj31514517IJMER
 
Dj31514517
Dj31514517Dj31514517
Dj31514517IJMER
 
Object detection for service robot using range and color features of an image
Object detection for service robot using range and color features of an imageObject detection for service robot using range and color features of an image
Object detection for service robot using range and color features of an imageIJCSEA Journal
 
GROUPING OBJECTS BASED ON THEIR APPEARANCE
GROUPING OBJECTS BASED ON THEIR APPEARANCEGROUPING OBJECTS BASED ON THEIR APPEARANCE
GROUPING OBJECTS BASED ON THEIR APPEARANCEijaia
 
Rigorous Pack Edge Detection Fuzzy System
Rigorous Pack Edge Detection Fuzzy SystemRigorous Pack Edge Detection Fuzzy System
Rigorous Pack Edge Detection Fuzzy Systeminventy
 
Shadow Detection and Removal in Still Images by using Hue Properties of Color...
Shadow Detection and Removal in Still Images by using Hue Properties of Color...Shadow Detection and Removal in Still Images by using Hue Properties of Color...
Shadow Detection and Removal in Still Images by using Hue Properties of Color...ijsrd.com
 
Detection of Dense, Overlapping, Geometric Objects
Detection of Dense, Overlapping, Geometric Objects Detection of Dense, Overlapping, Geometric Objects
Detection of Dense, Overlapping, Geometric Objects gerogepatton
 
DETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTS
DETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTSDETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTS
DETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTSgerogepatton
 
DETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTS
DETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTSDETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTS
DETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTSijaia
 
Fuzzy Entropy Based Optimal Thresholding Technique for Image Enhancement
Fuzzy Entropy Based Optimal Thresholding Technique for Image Enhancement  Fuzzy Entropy Based Optimal Thresholding Technique for Image Enhancement
Fuzzy Entropy Based Optimal Thresholding Technique for Image Enhancement ijsc
 
Image Restoration for 3D Computer Vision
Image Restoration for 3D Computer VisionImage Restoration for 3D Computer Vision
Image Restoration for 3D Computer VisionPetteriTeikariPhD
 
Colour Object Recognition using Biologically Inspired Model
Colour Object Recognition using Biologically Inspired ModelColour Object Recognition using Biologically Inspired Model
Colour Object Recognition using Biologically Inspired Modelijsrd.com
 
最近の研究情勢についていくために - Deep Learningを中心に -
最近の研究情勢についていくために - Deep Learningを中心に - 最近の研究情勢についていくために - Deep Learningを中心に -
最近の研究情勢についていくために - Deep Learningを中心に - Hiroshi Fukui
 

Similar to Extracting the Object from the Shadows: Maximum Likelihood Object/Shadow Discrimination (20)

OBJECT DETECTION AND RECOGNITION: A SURVEY
OBJECT DETECTION AND RECOGNITION: A SURVEYOBJECT DETECTION AND RECOGNITION: A SURVEY
OBJECT DETECTION AND RECOGNITION: A SURVEY
 
Image Denoising Based On Sparse Representation In A Probabilistic Framework
Image Denoising Based On Sparse Representation In A Probabilistic FrameworkImage Denoising Based On Sparse Representation In A Probabilistic Framework
Image Denoising Based On Sparse Representation In A Probabilistic Framework
 
APPEARANCE-BASED REPRESENTATION AND RENDERING OF CAST SHADOWS
APPEARANCE-BASED REPRESENTATION AND RENDERING OF CAST SHADOWSAPPEARANCE-BASED REPRESENTATION AND RENDERING OF CAST SHADOWS
APPEARANCE-BASED REPRESENTATION AND RENDERING OF CAST SHADOWS
 
Synthetic aperture radar images
Synthetic aperture radar imagesSynthetic aperture radar images
Synthetic aperture radar images
 
OBJECT DETECTION FOR SERVICE ROBOT USING RANGE AND COLOR FEATURES OF AN IMAGE
OBJECT DETECTION FOR SERVICE ROBOT USING RANGE AND COLOR FEATURES OF AN IMAGEOBJECT DETECTION FOR SERVICE ROBOT USING RANGE AND COLOR FEATURES OF AN IMAGE
OBJECT DETECTION FOR SERVICE ROBOT USING RANGE AND COLOR FEATURES OF AN IMAGE
 
Object Detection for Service Robot Using Range and Color Features of an Image
Object Detection for Service Robot Using Range and Color Features of an ImageObject Detection for Service Robot Using Range and Color Features of an Image
Object Detection for Service Robot Using Range and Color Features of an Image
 
Dj31514517
Dj31514517Dj31514517
Dj31514517
 
Dj31514517
Dj31514517Dj31514517
Dj31514517
 
Object detection for service robot using range and color features of an image
Object detection for service robot using range and color features of an imageObject detection for service robot using range and color features of an image
Object detection for service robot using range and color features of an image
 
GROUPING OBJECTS BASED ON THEIR APPEARANCE
GROUPING OBJECTS BASED ON THEIR APPEARANCEGROUPING OBJECTS BASED ON THEIR APPEARANCE
GROUPING OBJECTS BASED ON THEIR APPEARANCE
 
Rigorous Pack Edge Detection Fuzzy System
Rigorous Pack Edge Detection Fuzzy SystemRigorous Pack Edge Detection Fuzzy System
Rigorous Pack Edge Detection Fuzzy System
 
Shadow Detection and Removal in Still Images by using Hue Properties of Color...
Shadow Detection and Removal in Still Images by using Hue Properties of Color...Shadow Detection and Removal in Still Images by using Hue Properties of Color...
Shadow Detection and Removal in Still Images by using Hue Properties of Color...
 
Detection of Dense, Overlapping, Geometric Objects
Detection of Dense, Overlapping, Geometric Objects Detection of Dense, Overlapping, Geometric Objects
Detection of Dense, Overlapping, Geometric Objects
 
DETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTS
DETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTSDETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTS
DETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTS
 
DETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTS
DETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTSDETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTS
DETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTS
 
Log polar coordinates
Log polar coordinatesLog polar coordinates
Log polar coordinates
 
Fuzzy Entropy Based Optimal Thresholding Technique for Image Enhancement
Fuzzy Entropy Based Optimal Thresholding Technique for Image Enhancement  Fuzzy Entropy Based Optimal Thresholding Technique for Image Enhancement
Fuzzy Entropy Based Optimal Thresholding Technique for Image Enhancement
 
Image Restoration for 3D Computer Vision
Image Restoration for 3D Computer VisionImage Restoration for 3D Computer Vision
Image Restoration for 3D Computer Vision
 
Colour Object Recognition using Biologically Inspired Model
Colour Object Recognition using Biologically Inspired ModelColour Object Recognition using Biologically Inspired Model
Colour Object Recognition using Biologically Inspired Model
 
最近の研究情勢についていくために - Deep Learningを中心に -
最近の研究情勢についていくために - Deep Learningを中心に - 最近の研究情勢についていくために - Deep Learningを中心に -
最近の研究情勢についていくために - Deep Learningを中心に -
 

More from Kan Ouivirach, Ph.D.

Adoption of AI: The Great Opportunities for Everyone
Adoption of AI: The Great Opportunities for EveryoneAdoption of AI: The Great Opportunities for Everyone
Adoption of AI: The Great Opportunities for EveryoneKan Ouivirach, Ph.D.
 
Uncover Python's Potential in Machine Learning
Uncover Python's Potential in Machine LearningUncover Python's Potential in Machine Learning
Uncover Python's Potential in Machine LearningKan Ouivirach, Ph.D.
 
WordPress Hooks: The Right Way to Extend Your WordPress
WordPress Hooks: The Right Way to Extend Your WordPressWordPress Hooks: The Right Way to Extend Your WordPress
WordPress Hooks: The Right Way to Extend Your WordPressKan Ouivirach, Ph.D.
 
Lesson Learned from Using Docker Swarm at Pronto
Lesson Learned from Using Docker Swarm at ProntoLesson Learned from Using Docker Swarm at Pronto
Lesson Learned from Using Docker Swarm at ProntoKan Ouivirach, Ph.D.
 
หัดเขียน A.I. แบบ AlphaGo กันชิวๆ
หัดเขียน A.I. แบบ AlphaGo กันชิวๆหัดเขียน A.I. แบบ AlphaGo กันชิวๆ
หัดเขียน A.I. แบบ AlphaGo กันชิวๆKan Ouivirach, Ph.D.
 
What We Do in This Weird Office Culture
What We Do in This Weird Office CultureWhat We Do in This Weird Office Culture
What We Do in This Weird Office CultureKan Ouivirach, Ph.D.
 
Exploring Machine Learning in Python with Scikit-Learn
Exploring Machine Learning in Python with Scikit-LearnExploring Machine Learning in Python with Scikit-Learn
Exploring Machine Learning in Python with Scikit-LearnKan Ouivirach, Ph.D.
 
Thailand Hadoop Big Data Challenge #1
Thailand Hadoop Big Data Challenge #1Thailand Hadoop Big Data Challenge #1
Thailand Hadoop Big Data Challenge #1Kan Ouivirach, Ph.D.
 
Achieving "Zero Downtime Deployment" with Automated Testing
Achieving "Zero Downtime Deployment" with Automated TestingAchieving "Zero Downtime Deployment" with Automated Testing
Achieving "Zero Downtime Deployment" with Automated TestingKan Ouivirach, Ph.D.
 
Practical Experience in Automated Testing at Pronto Marketing
Practical Experience in Automated Testing at Pronto MarketingPractical Experience in Automated Testing at Pronto Marketing
Practical Experience in Automated Testing at Pronto MarketingKan Ouivirach, Ph.D.
 
Clustering Human Behaviors with Dynamic Time Warping and Hidden Markov Models...
Clustering Human Behaviors with Dynamic Time Warping and Hidden Markov Models...Clustering Human Behaviors with Dynamic Time Warping and Hidden Markov Models...
Clustering Human Behaviors with Dynamic Time Warping and Hidden Markov Models...Kan Ouivirach, Ph.D.
 
Adapting Scrum to Managing a Research Group
Adapting Scrum to Managing a Research GroupAdapting Scrum to Managing a Research Group
Adapting Scrum to Managing a Research GroupKan Ouivirach, Ph.D.
 

More from Kan Ouivirach, Ph.D. (19)

Adoption of AI: The Great Opportunities for Everyone
Adoption of AI: The Great Opportunities for EveryoneAdoption of AI: The Great Opportunities for Everyone
Adoption of AI: The Great Opportunities for Everyone
 
Uncover Python's Potential in Machine Learning
Uncover Python's Potential in Machine LearningUncover Python's Potential in Machine Learning
Uncover Python's Potential in Machine Learning
 
WordPress Hooks: The Right Way to Extend Your WordPress
WordPress Hooks: The Right Way to Extend Your WordPressWordPress Hooks: The Right Way to Extend Your WordPress
WordPress Hooks: The Right Way to Extend Your WordPress
 
Lesson Learned from Using Docker Swarm at Pronto
Lesson Learned from Using Docker Swarm at ProntoLesson Learned from Using Docker Swarm at Pronto
Lesson Learned from Using Docker Swarm at Pronto
 
หัดเขียน A.I. แบบ AlphaGo กันชิวๆ
หัดเขียน A.I. แบบ AlphaGo กันชิวๆหัดเขียน A.I. แบบ AlphaGo กันชิวๆ
หัดเขียน A.I. แบบ AlphaGo กันชิวๆ
 
Agile and Scrum Methodology
Agile and Scrum MethodologyAgile and Scrum Methodology
Agile and Scrum Methodology
 
Machine Learning at Geeky Base 2
Machine Learning at Geeky Base 2Machine Learning at Geeky Base 2
Machine Learning at Geeky Base 2
 
Machine Learning คือ? #bcbk
Machine Learning คือ? #bcbkMachine Learning คือ? #bcbk
Machine Learning คือ? #bcbk
 
What We Do in This Weird Office Culture
What We Do in This Weird Office CultureWhat We Do in This Weird Office Culture
What We Do in This Weird Office Culture
 
The WordPress Way
The WordPress WayThe WordPress Way
The WordPress Way
 
Exploring Machine Learning in Python with Scikit-Learn
Exploring Machine Learning in Python with Scikit-LearnExploring Machine Learning in Python with Scikit-Learn
Exploring Machine Learning in Python with Scikit-Learn
 
Machine Learning at Geeky Base
Machine Learning at Geeky BaseMachine Learning at Geeky Base
Machine Learning at Geeky Base
 
Thailand Hadoop Big Data Challenge #1
Thailand Hadoop Big Data Challenge #1Thailand Hadoop Big Data Challenge #1
Thailand Hadoop Big Data Challenge #1
 
Achieving "Zero Downtime Deployment" with Automated Testing
Achieving "Zero Downtime Deployment" with Automated TestingAchieving "Zero Downtime Deployment" with Automated Testing
Achieving "Zero Downtime Deployment" with Automated Testing
 
Pronto R&D Presentation
Pronto R&D PresentationPronto R&D Presentation
Pronto R&D Presentation
 
Scrum at Pronto Marketing
Scrum at Pronto MarketingScrum at Pronto Marketing
Scrum at Pronto Marketing
 
Practical Experience in Automated Testing at Pronto Marketing
Practical Experience in Automated Testing at Pronto MarketingPractical Experience in Automated Testing at Pronto Marketing
Practical Experience in Automated Testing at Pronto Marketing
 
Clustering Human Behaviors with Dynamic Time Warping and Hidden Markov Models...
Clustering Human Behaviors with Dynamic Time Warping and Hidden Markov Models...Clustering Human Behaviors with Dynamic Time Warping and Hidden Markov Models...
Clustering Human Behaviors with Dynamic Time Warping and Hidden Markov Models...
 
Adapting Scrum to Managing a Research Group
Adapting Scrum to Managing a Research GroupAdapting Scrum to Managing a Research Group
Adapting Scrum to Managing a Research Group
 

Recently uploaded

plant breeding methods in asexually or clonally propagated crops
plant breeding methods in asexually or clonally propagated cropsplant breeding methods in asexually or clonally propagated crops
plant breeding methods in asexually or clonally propagated cropsparmarsneha2
 
MARUTI SUZUKI- A Successful Joint Venture in India.pptx
MARUTI SUZUKI- A Successful Joint Venture in India.pptxMARUTI SUZUKI- A Successful Joint Venture in India.pptx
MARUTI SUZUKI- A Successful Joint Venture in India.pptxbennyroshan06
 
Students, digital devices and success - Andreas Schleicher - 27 May 2024..pptx
Students, digital devices and success - Andreas Schleicher - 27 May 2024..pptxStudents, digital devices and success - Andreas Schleicher - 27 May 2024..pptx
Students, digital devices and success - Andreas Schleicher - 27 May 2024..pptxEduSkills OECD
 
Overview on Edible Vaccine: Pros & Cons with Mechanism
Overview on Edible Vaccine: Pros & Cons with MechanismOverview on Edible Vaccine: Pros & Cons with Mechanism
Overview on Edible Vaccine: Pros & Cons with MechanismDeeptiGupta154
 
Phrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXX
Phrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXXPhrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXX
Phrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXXMIRIAMSALINAS13
 
Unit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdfUnit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdfThiyagu K
 
1.4 modern child centered education - mahatma gandhi-2.pptx
1.4 modern child centered education - mahatma gandhi-2.pptx1.4 modern child centered education - mahatma gandhi-2.pptx
1.4 modern child centered education - mahatma gandhi-2.pptxJosvitaDsouza2
 
Unit 2- Research Aptitude (UGC NET Paper I).pdf
Unit 2- Research Aptitude (UGC NET Paper I).pdfUnit 2- Research Aptitude (UGC NET Paper I).pdf
Unit 2- Research Aptitude (UGC NET Paper I).pdfThiyagu K
 
Cambridge International AS A Level Biology Coursebook - EBook (MaryFosbery J...
Cambridge International AS  A Level Biology Coursebook - EBook (MaryFosbery J...Cambridge International AS  A Level Biology Coursebook - EBook (MaryFosbery J...
Cambridge International AS A Level Biology Coursebook - EBook (MaryFosbery J...AzmatAli747758
 
Basic phrases for greeting and assisting costumers
Basic phrases for greeting and assisting costumersBasic phrases for greeting and assisting costumers
Basic phrases for greeting and assisting costumersPedroFerreira53928
 
Solid waste management & Types of Basic civil Engineering notes by DJ Sir.pptx
Solid waste management & Types of Basic civil Engineering notes by DJ Sir.pptxSolid waste management & Types of Basic civil Engineering notes by DJ Sir.pptx
Solid waste management & Types of Basic civil Engineering notes by DJ Sir.pptxDenish Jangid
 
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaasiemaillard
 
Fish and Chips - have they had their chips
Fish and Chips - have they had their chipsFish and Chips - have they had their chips
Fish and Chips - have they had their chipsGeoBlogs
 
50 ĐỀ LUYỆN THI IOE LỚP 9 - NĂM HỌC 2022-2023 (CÓ LINK HÌNH, FILE AUDIO VÀ ĐÁ...
50 ĐỀ LUYỆN THI IOE LỚP 9 - NĂM HỌC 2022-2023 (CÓ LINK HÌNH, FILE AUDIO VÀ ĐÁ...50 ĐỀ LUYỆN THI IOE LỚP 9 - NĂM HỌC 2022-2023 (CÓ LINK HÌNH, FILE AUDIO VÀ ĐÁ...
50 ĐỀ LUYỆN THI IOE LỚP 9 - NĂM HỌC 2022-2023 (CÓ LINK HÌNH, FILE AUDIO VÀ ĐÁ...Nguyen Thanh Tu Collection
 
TESDA TM1 REVIEWER FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
TESDA TM1 REVIEWER  FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...TESDA TM1 REVIEWER  FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
TESDA TM1 REVIEWER FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...EugeneSaldivar
 
Digital Tools and AI for Teaching Learning and Research
Digital Tools and AI for Teaching Learning and ResearchDigital Tools and AI for Teaching Learning and Research
Digital Tools and AI for Teaching Learning and ResearchVikramjit Singh
 
How to Break the cycle of negative Thoughts
How to Break the cycle of negative ThoughtsHow to Break the cycle of negative Thoughts
How to Break the cycle of negative ThoughtsCol Mukteshwar Prasad
 
Basic Civil Engineering Notes of Chapter-6, Topic- Ecosystem, Biodiversity G...
Basic Civil Engineering Notes of Chapter-6,  Topic- Ecosystem, Biodiversity G...Basic Civil Engineering Notes of Chapter-6,  Topic- Ecosystem, Biodiversity G...
Basic Civil Engineering Notes of Chapter-6, Topic- Ecosystem, Biodiversity G...Denish Jangid
 
Home assignment II on Spectroscopy 2024 Answers.pdf
Home assignment II on Spectroscopy 2024 Answers.pdfHome assignment II on Spectroscopy 2024 Answers.pdf
Home assignment II on Spectroscopy 2024 Answers.pdfTamralipta Mahavidyalaya
 
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaasiemaillard
 

Recently uploaded (20)

plant breeding methods in asexually or clonally propagated crops
plant breeding methods in asexually or clonally propagated cropsplant breeding methods in asexually or clonally propagated crops
plant breeding methods in asexually or clonally propagated crops
 
MARUTI SUZUKI- A Successful Joint Venture in India.pptx
MARUTI SUZUKI- A Successful Joint Venture in India.pptxMARUTI SUZUKI- A Successful Joint Venture in India.pptx
MARUTI SUZUKI- A Successful Joint Venture in India.pptx
 
Students, digital devices and success - Andreas Schleicher - 27 May 2024..pptx
Students, digital devices and success - Andreas Schleicher - 27 May 2024..pptxStudents, digital devices and success - Andreas Schleicher - 27 May 2024..pptx
Students, digital devices and success - Andreas Schleicher - 27 May 2024..pptx
 
Overview on Edible Vaccine: Pros & Cons with Mechanism
Overview on Edible Vaccine: Pros & Cons with MechanismOverview on Edible Vaccine: Pros & Cons with Mechanism
Overview on Edible Vaccine: Pros & Cons with Mechanism
 
Phrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXX
Phrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXXPhrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXX
Phrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXX
 
Unit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdfUnit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdf
 
1.4 modern child centered education - mahatma gandhi-2.pptx
1.4 modern child centered education - mahatma gandhi-2.pptx1.4 modern child centered education - mahatma gandhi-2.pptx
1.4 modern child centered education - mahatma gandhi-2.pptx
 
Unit 2- Research Aptitude (UGC NET Paper I).pdf
Unit 2- Research Aptitude (UGC NET Paper I).pdfUnit 2- Research Aptitude (UGC NET Paper I).pdf
Unit 2- Research Aptitude (UGC NET Paper I).pdf
 
Cambridge International AS A Level Biology Coursebook - EBook (MaryFosbery J...
Cambridge International AS  A Level Biology Coursebook - EBook (MaryFosbery J...Cambridge International AS  A Level Biology Coursebook - EBook (MaryFosbery J...
Cambridge International AS A Level Biology Coursebook - EBook (MaryFosbery J...
 
Basic phrases for greeting and assisting costumers
Basic phrases for greeting and assisting costumersBasic phrases for greeting and assisting costumers
Basic phrases for greeting and assisting costumers
 
Solid waste management & Types of Basic civil Engineering notes by DJ Sir.pptx
Solid waste management & Types of Basic civil Engineering notes by DJ Sir.pptxSolid waste management & Types of Basic civil Engineering notes by DJ Sir.pptx
Solid waste management & Types of Basic civil Engineering notes by DJ Sir.pptx
 
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
 
Fish and Chips - have they had their chips
Fish and Chips - have they had their chipsFish and Chips - have they had their chips
Fish and Chips - have they had their chips
 
50 ĐỀ LUYỆN THI IOE LỚP 9 - NĂM HỌC 2022-2023 (CÓ LINK HÌNH, FILE AUDIO VÀ ĐÁ...
50 ĐỀ LUYỆN THI IOE LỚP 9 - NĂM HỌC 2022-2023 (CÓ LINK HÌNH, FILE AUDIO VÀ ĐÁ...50 ĐỀ LUYỆN THI IOE LỚP 9 - NĂM HỌC 2022-2023 (CÓ LINK HÌNH, FILE AUDIO VÀ ĐÁ...
50 ĐỀ LUYỆN THI IOE LỚP 9 - NĂM HỌC 2022-2023 (CÓ LINK HÌNH, FILE AUDIO VÀ ĐÁ...
 
TESDA TM1 REVIEWER FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
TESDA TM1 REVIEWER  FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...TESDA TM1 REVIEWER  FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
TESDA TM1 REVIEWER FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
 
Digital Tools and AI for Teaching Learning and Research
Digital Tools and AI for Teaching Learning and ResearchDigital Tools and AI for Teaching Learning and Research
Digital Tools and AI for Teaching Learning and Research
 
How to Break the cycle of negative Thoughts
How to Break the cycle of negative ThoughtsHow to Break the cycle of negative Thoughts
How to Break the cycle of negative Thoughts
 
Basic Civil Engineering Notes of Chapter-6, Topic- Ecosystem, Biodiversity G...
Basic Civil Engineering Notes of Chapter-6,  Topic- Ecosystem, Biodiversity G...Basic Civil Engineering Notes of Chapter-6,  Topic- Ecosystem, Biodiversity G...
Basic Civil Engineering Notes of Chapter-6, Topic- Ecosystem, Biodiversity G...
 
Home assignment II on Spectroscopy 2024 Answers.pdf
Home assignment II on Spectroscopy 2024 Answers.pdfHome assignment II on Spectroscopy 2024 Answers.pdf
Home assignment II on Spectroscopy 2024 Answers.pdf
 
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
 

Extracting the Object from the Shadows: Maximum Likelihood Object/Shadow Discrimination

  • 1. Extracting the Object from the Shadows: Maximum Likelihood Object/Shadow Discrimination Kan Ouivirach and Matthew N. Dailey Computer Science and Information Management Asian Institute of Technology May 16, 2013 kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 1 / 37
  • 2. Outline 1 Introduction 2 Proposed Method 3 Experimental Results 4 Conclusion and Future Work kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 2 / 37
  • 3. Introduction Overview Detecting and tracking moving objects is an important issue. Background subtraction is a very common approach to detect moving objects. One major disadvantage is that shadows tend to be misclassified as part of the foreground object. kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 3 / 37
  • 4. Introduction Problems The size of the detected object could be overestimated. This leads to merging otherwise separate blobs representing different people walking close to each other. This makes isolating and tracking people in a group much more difficult than necessary. (a) (b) Figure : Example problem with shadows. (a) Original image. (b) Foreground pixels according to background model. kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 4 / 37
  • 5. Introduction Why Important? Shadow removal can significantly improve the performance of computer vision tasks such as tracking; segmentation; object detection. Shadow detection has become an active research area in recent years. kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 5 / 37
  • 6. Introduction Related Work Chromaticity and luminance are not orthogonal in the RGB color space, but lighting differences can be controlled for in the normalized RGB color space. Miki´c et al. (2000) observe that in the normalized RGB color space, shadow pixels tend to be more blue and less red than illuminated pixels. They apply a probabilistic model based on the normalized red and blue features to classify shadow pixels in traffic scenes. One well-known problem with the normalized RGB space is that normalization of pixels with low intensity results in unstable chromatic components (Kender; 1976). kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 6 / 37
  • 7. Introduction Related Work Cucchiara et al. (2001) and Chen et al. (2008) use a HSV color-based method (deterministic nonmodel-based method) to eliminate the concerns based on the assumption that only intensity of the shadow area will significantly change. SPt(x, y) =    1 if α ≤ IV t (x,y) BV t (x,y) ≤ β ∧(IS t (x, y) − BS t (x, y)) ≤ TS ∧ IH t (x, y) − BH t (x, y) ≤ TH 0 otherwise, where SPt(x, y) is the resulting binary mask for shadows at each pixel (x, y) at time t. IH t , IS t , IV t , BH t , BS t , and BV t are the H, S, and V components of foreground pixel It(x, y) and background pixel Bt(x, y) at pixel (x, y) at time t, respectively. They prevent foreground pixels from being classified as shadow pixels by setting two thresholds, 0 < α < β < 1. The four thresholds α, β, TS , and TH are empirically determined. kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 7 / 37
  • 8. Introduction Related Work Some researchers have investigated color spaces besides RGB and HSV. Blauensteiner et al. (2006) use an “improved” hue, luminance, and saturation (IHLS) color space for shadow detection to deal with the issue of unstable hue at low saturation by modeling the relationship between them. Another alternative color space is YUV. Some applications such as television and videoconferencing use the YUV color space natively, and since transformation from YUV to HSV is time-consuming, Schreer et al. (2002) operate in the YUV color space directly, developing a fast shadow detection algorithm based on approximated changes of hue and saturation in the YUV color space. kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 8 / 37
  • 9. Introduction Related Work Some work uses texture-based methods such as the normalized cross-correlation (NCC) technique. This method detects shadows based on the assumption that the intensity of shadows is proportional to the incident light, so shadow pixels should simply be darker than the corresponding background pixels. However, the texture-based method tends to misclassify foreground pixels as shadow pixels when the foreground region has a similar texture to the corresponding background region. kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 9 / 37
  • 10. Outline 1 Introduction 2 Proposed Method 3 Experimental Results 4 Conclusion and Future Work kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 10 / 37
  • 11. Proposed Method Overview We propose a new method for detecting shadows using maximum likelihood estimation based on color information. We extend the deterministic nonmodel-based approach to statistical model-based approach. We estimate the joint distribution over the difference in the HSV color space between pixels in the current frame and the corresponding pixels in a background model, conditional on whether the pixel is an object pixel or a shadow pixel. We use the maximum likelihood principle, at run time, to classify each foreground pixel as either shadow or object given the estimated model. kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 11 / 37
  • 12. Proposed Method Overview: Offline and Online Phases We divide our method into two phases: 1 Offline phase: Construct a background model from the first few frames; Extract foreground on the remaining frames; Manually label the extracted pixels as either object or shadow pixels; Construct a joint probability model over the difference in the HSV color space between pixels in the current frame and the corresponding pixels in the background model, conditional on whether the pixel is an object pixel or a shadow pixel. 2 Online phase: Perform the same background modeling and foreground extraction procedure; Classify foreground pixels as either shadow or object using the maximum likelihood approach. kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 12 / 37
  • 13. Proposed Method Processing Flow Our processing flow is as follows. 1 Global Motion Detection 2 Foreground Extraction 3 Maximum Likelihood Classification of Foreground Pixels (Offline and Online Phases) kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 13 / 37
  • 14. Proposed Method Global Motion Detection We use simple frame-by-frame subtraction. When the difference image has a number of pixels whose intensity differences are above a threshold, we start a new video segment. No motion frames are removed to decrease processing and storage time. kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 14 / 37
  • 15. Proposed Method Foreground Extraction After discarding the no-motion video segments, we use Poppe et al. (2007) background modeling method to segment foreground pixels from the background. (a) (b) Figure : Example foreground extraction. (a) Original image. (b) Foreground pixels according to background model. kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 15 / 37
  • 16. Proposed Method Maximum Likelihood Classification of Foreground Pixels In the offline phase: After foreground extraction, we manually label pixels as either shadow or object. We then observe the distribution over the difference in hue (Hdiff), saturation (Sdiff), and value (Vdiff) components in the HSV color space between pixels in the current frame and the corresponding pixels in the background model. kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 16 / 37
  • 17. Proposed Method Distributions over the Differences in HSV Components for True Object Pixels (a) (b) (c) Figure : Example distributions over the difference in (a) hue, (b) saturation, and (c) value components for true object pixels, extracted from our hallway dataset. kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 17 / 37
  • 18. Proposed Method Distributions over the Differences in HSV Components for Shadow Pixels (a) (b) (c) Figure : Example distributions over the difference in (a) hue, (b) saturation, and (c) value components for shadow pixels, extracted from our hallway dataset. kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 18 / 37
  • 19. Proposed Method Measurement Likelihood for Shadow Pixels We define the measurement likelihood for pixel (x, y) given its assignment as follows. P(Mxy | Axy = sh) = P(Hdiff | Axy = sh) P(Sdiff | Axy = sh) P(Vdiff | Axy = sh), where Mxy is a tuple containing the HSV value for pixel (x, y) in the current image as well as the HSV value for pixel (x, y) in the background model for pixel (x, y), and Axy is the assignment of pixel (x, y) as object or shadow. “sh” stands for shadow. kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 19 / 37
  • 20. Proposed Method Measurement Likelihood for Shadow Pixels To make the problem tractable, we assume that the distributions over the components on the right hand side in the previous equation follow Gaussian distributions, defined as follows. P(Hdiff | Axy = sh) = N(Hdiff; µhsh diff , σ2 hsh diff ) P(Sdiff | Axy = sh) = N(Sdiff; µssh diff , σ2 ssh diff ) P(Vdiff | Axy = sh) = N(Vdiff; µvsh diff , σ2 vsh diff ) kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 20 / 37
  • 21. Proposed Method Measurement Likelihood for Object Pixels Similarly, the measurement likelihood for object pixels can be computed as follows. P(Mxy | Axy = obj) = P(Hdiff | Axy = obj) P(Sdiff | Axy = obj) P(Vdiff | Axy = obj) Here “obj” stands for object. kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 21 / 37
  • 22. Proposed Method Measurement Likelihood for Object Pixels As for the shadow pixel distributions, we assume Gaussian distributions over the components on the right hand side in the previous equation, as follows. P(Hdiff | Axy = obj) = N(Hdiff; μ_Hdiff^obj, σ²_Hdiff^obj) P(Sdiff | Axy = obj) = N(Sdiff; μ_Sdiff^obj, σ²_Sdiff^obj) P(Vdiff | Axy = obj) = N(Vdiff; μ_Vdiff^obj, σ²_Vdiff^obj) kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 22 / 37
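A minimal sketch of the class-conditional likelihood under this per-channel Gaussian assumption follows; the variable names are illustrative, and the computation is done in log space for numerical stability.

```python
# Per-channel Gaussian likelihood for one foreground pixel's (Hdiff, Sdiff, Vdiff).
import numpy as np

def gaussian_pdf(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def log_likelihood(hsv_diff, params):
    """params = {'mean': (3,), 'var': (3,)} for one class (shadow or object)."""
    # Independence assumption: the joint likelihood is the product of the three
    # channel likelihoods, i.e. a sum in log space.
    return np.sum(np.log(gaussian_pdf(np.asarray(hsv_diff),
                                      params['mean'], params['var'])))
```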
  • 23. Proposed Method Model Parameters We estimate the parameters Θ = {μ_Hdiff^sh, σ²_Hdiff^sh, μ_Sdiff^sh, σ²_Sdiff^sh, μ_Vdiff^sh, σ²_Vdiff^sh, μ_Hdiff^obj, σ²_Hdiff^obj, μ_Sdiff^obj, σ²_Sdiff^obj, μ_Vdiff^obj, σ²_Vdiff^obj} directly from training data during the offline phase. kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 23 / 37
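The estimation amounts to per-channel sample means and variances of the labeled HSV differences, one set per class; a sketch, with an assumed small variance floor to avoid degenerate Gaussians:

```python
# Offline parameter estimation: one Gaussian per channel per class.
import numpy as np

def fit_class_params(hsv_diffs):
    """hsv_diffs: (N, 3) array of (Hdiff, Sdiff, Vdiff) for one class (sh or obj)."""
    return {'mean': hsv_diffs.mean(axis=0),
            'var': hsv_diffs.var(axis=0) + 1e-6}   # small floor avoids zero variance

# theta = {'sh': fit_class_params(shadow_diffs), 'obj': fit_class_params(object_diffs)}
```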
  • 24. Proposed Method Pixel Classification In the online phase: Given the model estimate Θ, we use the maximum likelihood approach to classify a pixel as a shadow pixel if P(Mxy | Axy = sh; Θ) > P(Mxy | Axy = obj; Θ). Otherwise, we classify the pixel as an object pixel. We could add the prior probabilities to the shadow model and the object model in the equation above to obtain a maximum a posteriori classifier. In our experiments, we assume equal priors. kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 24 / 37
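Combining the sketches above, the maximum likelihood decision rule with equal priors could look like the following; the dictionary layout of Θ is an assumption carried over from the earlier sketches.

```python
# Online classification: compare class-conditional log-likelihoods (equal priors).
# Builds on log_likelihood() and the theta dictionary from the sketches above.
def classify_pixel(hsv_diff, theta):
    """Return 'sh' if the shadow likelihood dominates, else 'obj'."""
    ll_sh = log_likelihood(hsv_diff, theta['sh'])
    ll_obj = log_likelihood(hsv_diff, theta['obj'])
    return 'sh' if ll_sh > ll_obj else 'obj'
```

Adding log prior terms to ll_sh and ll_obj would turn this into the maximum a posteriori variant mentioned on the slide.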
  • 25. Outline 1 Introduction 2 Proposed Method 3 Experimental Results 4 Conclusion and Future Work kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 25 / 37
  • 26. Experimental Results Overview We present the experimental results for 1 our proposed maximum likelihood (ML) classification method; 2 the deterministic nonmodel-based (DNM) method; 3 the normalized cross-correlation (NCC) method. kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 26 / 37
  • 27. Experimental Results Video Sequences We performed the experiments on three video sequences. The figure below shows sample frames from the three video sequences. (a) (b) (c) Figure : Sample frames from the (a) Hallway, (b) Laboratory, and (c) Highway video sequences. The Hallway sequence is our own dataset. The Laboratory and Highway sequences were first introduced in Prati et al. (2003). kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 27 / 37
  • 28. Experimental Results Performance Evaluation We compute the two metrics proposed by Prati et al. (2003), defining the shadow detection rate η and the shadow discrimination rate ξ as follows: η = TP_sh / (TP_sh + FN_sh); ξ = TP_obj / (TP_obj + FN_obj), where the subscripts “sh” and “obj” stand for shadow and object, respectively. TP and FN are the numbers of true positive (i.e., shadow or object pixels correctly identified) and false negative (i.e., shadow or object pixels classified incorrectly) pixels. kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 28 / 37
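The two rates reduce to simple ratios of pixel counts; a minimal sketch:

```python
# Shadow detection rate (eta) and shadow discrimination rate (xi) from pixel counts.
def detection_rates(tp_sh, fn_sh, tp_obj, fn_obj):
    eta = tp_sh / (tp_sh + fn_sh)     # proportion of true shadow pixels detected
    xi = tp_obj / (tp_obj + fn_obj)   # proportion of true object pixels kept as object
    return eta, xi
```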
  • 29. Experimental Results Metrics for Evaluation More information about η and ξ: η is the proportion of shadow pixels correctly detected; ξ is the proportion of object pixels correctly detected. η and ξ can also be thought of as the true positive rate (sensitivity) and true negative rate (specificity) for detecting shadows, respectively. In the experiments, we also compare the methods using two additional metrics: precision and F1 score. kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 29 / 37
  • 30. Experimental Results Preparation for the Experiments Ground truth data for the Laboratory and Highway video sequences are provided by Sanin et al. (2012). A standard Gaussian mixture model (GMM) is used as the background model to extract foreground pixels. For our Hallway video sequence, we used the previously mentioned extended version of the GMM background model for foreground extraction, but the results were not substantially different from those of the standard GMM. kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 30 / 37
  • 31. Experimental Results Find the Best Parameters for Each Model We performed five-fold cross validation for each of the three models and each of the three data sets. We varied the parameter settings for each method on each video dataset and selected the setting that maximized the F1 score (a measure combining both precision and recall) over the cross validation test sets. kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 31 / 37
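A sketch of this model selection step is given below; the fold_counts layout (per-fold true positive, false positive, and false negative pixel counts for each candidate parameter setting) is our illustrative assumption.

```python
# Pick the parameter setting with the best mean F1 score over the five folds.
def f1_score(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def select_best_setting(fold_counts):
    """fold_counts: {setting: [(tp, fp, fn), ...]} with one tuple per fold."""
    mean_f1 = {setting: sum(f1_score(*c) for c in counts) / len(counts)
               for setting, counts in fold_counts.items()}
    return max(mean_f1, key=mean_f1.get)
```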
  • 32. Experimental Results Shadow Detection Performance Table : Comparison of shadow detection results between the proposed, DNM, and NCC methods. kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 32 / 37
  • 33. Experimental Results Shadow Detection Performance From the table, our proposed method achieves the best shadow detection rate η and F1 score in every case and obtains a good shadow discrimination rate ξ in every case. The DNM method performs stably across all three videos, with good performance on all metrics. Both the DNM method and our proposed method suffer when object colors are similar to the background color; this situation is clearly visible in the Highway sequence (third row in the next figure). The NCC method achieves the best shadow discrimination rate ξ and precision because it classifies nearly every pixel as object, as can be seen in the next figure. kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 33 / 37
  • 34. Experimental Results Shadow Detection Performance Figure : Results for an arbitrary frame in each video sequence. The first column contains an example original frame for each video sequence. The second column shows the ground truth for that frame, where object pixels are labeled in white and shadow pixels are labeled in gray. The remaining columns show shadow detection results for each method, where pixels labeled as object are shown in green and pixels labeled as shadow are shown in red. kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 34 / 37
  • 35. Outline 1 Introduction 2 Proposed Method 3 Experimental Results 4 Conclusion and Future Work kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 35 / 37
  • 36. Conclusion and Future Work Conclusion We propose a new method for detecting shadows using a simple maximum likelihood approach based on color information; We extend the deterministic nonmodel-based approach into a parametric statistical model-based approach; Our experimental results show that our proposed method is effective and outperforms the standard methods on three different real-world video surveillance data sets. kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 36 / 37
  • 37. Conclusion and Future Work Future Work In some cases, we misdetect shadow pixels because the object and background colors are similar and the background texture in shadow regions is weak; Incorporating geometric priors or shadow region shape priors would potentially improve the detection and discrimination rates; We plan to address these issues, further explore the feasibility of combining our method with other useful shadow features, and integrate our shadow detection module with a real-world open source video surveillance system. kan@ieee.org ML Object/Shadow Discrimination May 16, 2013 37 / 37