Presentation summary:
* Moving object detection by background modeling and subtraction.
* Solved and unsolved challenges.
* Framework for low-rank and sparse decomposition.
* Some applications of RPCA on:
  * Background modeling and foreground separation.
  * Very dynamic background.
  * Multidimensional and streaming data.
* LRSLibrary (https://github.com/andrewssobral/lrslibrary) + demo.
Robust Low-rank and Sparse Decomposition for Moving Object Detection
1. Robust Low-rank and Sparse Decomposition for Moving Object Detection
Andrews Cordolino Sobral
Ph.D. in Computer Vision and Machine Learning @ University of La Rochelle, France (European doctorate label)
Senior AI Research Engineer @ ActiveEon, Paris office, France
2. Summary
* Moving object detection by background modeling and subtraction.
* Solved and unsolved challenges.
* Framework for low-rank and sparse decomposition.
* Some applications of RPCA on:
  * Background modeling and foreground separation.
  * Very dynamic background.
  * Multidimensional and streaming data.
* LRSLibrary (https://github.com/andrewssobral/lrslibrary) + demo.
3. Background modeling and subtraction (BMS) process
[Diagram: incoming frames drive model initialization and model update of the background model, which is then compared against new frames for foreground detection.]
4. BMS challenges
"Solved" and "unsolved" issues:
* Baseline
* Shadow
* Bad weather
* Thermal
* Dynamic background
* Camera jitter
* Intermittent object motion
* Turbulence
* Low framerate
* Night scenes
* PTZ cameras
* Pierre-Marc Jodoin. Motion Detection: Unsolved Issues and [Potential] Solutions. Scene Background Modeling and Initialization (SBMI), ICIAP, 2015.
5. BMS methods
A large number of algorithms have been proposed for background subtraction over the last few years [Sobral and Vacavant, 2014], [Bouwmans, 2014], [Xu et al., 2016]:
Traditional methods (several implementations available in BGSLibrary*):
* Basic methods (e.g. [Cucchiara et al., 2001])
* Statistical methods (e.g. [Stauffer and Grimson, 1999])
* Non-parametric methods (e.g. [Elgammal et al., 2000])
* Fuzzy-based methods (e.g. [Baf et al., 2008])
* Neural and neuro-fuzzy methods (e.g. [Maddalena and Petrosino, 2012])
Decomposition into low-rank + sparse components (our focus):
Introduced in [Candès et al., 2011]. In general, the decomposition is done by matrix and tensor methods.
* [Sobral, 2013] https://github.com/andrewssobral/bgslibrary.
6. Decomposition into low-rank + sparse components
This framework assumes that the data (matrix A) to be processed satisfies two important assumptions:
* The inliers (latent structure) are drawn from a single (or a union of) low-dimensional subspace(s) (matrix L).
* The corruptions are sparse (matrix S).
A = L + S
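These two assumptions can be made concrete with a small synthetic example. The sketch below is illustrative NumPy (the deck's own code is MATLAB), and all names and sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 50, 40, 2

# Matrix L: inliers drawn from an r-dimensional subspace (low-rank).
L = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# Matrix S: sparse corruptions (a few large-magnitude entries).
S = np.zeros((m, n))
idx = rng.choice(m * n, size=40, replace=False)
S.flat[idx] = 10.0 * rng.standard_normal(40)

# Observed data matrix A satisfying both assumptions.
A = L + S
print(np.linalg.matrix_rank(L))   # 2 (low-rank)
print(np.count_nonzero(S))        # 40 of 2000 entries (sparse)
```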
7. Decomposition into low-rank + sparse components
Note: this assumption holds a particular association to the problem of background/foreground separation.
A = L + S
The process of background/foreground separation can be regarded as a matrix separation problem.
8. Robust Principal Component Analysis (RPCA)
This definition is also known as Robust Principal Component Analysis (RPCA), and it is formulated as follows:
minimize_{L,S} rank(L) + card(S), subject to A = L + S,  (1)
where rank(L) represents the rank of L and card(S) denotes the number of non-zero entries of S.
The low-rank minimization concerning L offers a suitable framework for background modeling due to the high correlation between frames. However, the above equation yields a highly non-convex optimization problem (NP-hard).
9. RPCA via Principal Component Pursuit (PCP)
[Candès et al., 2011] showed that L and S can be recovered by solving a convex optimization problem, named Principal Component Pursuit (PCP). The card(.) is replaced with the l1-norm ||.||_1 and the rank(.) with the nuclear norm* ||.||_*, yielding the following convex surrogate:
minimize_{L,S} ||L||_* + λ||S||_1, subject to A = L + S,  (2)
where λ > 0 is a trade-off parameter between the sparse and the low-rank regularization. The minimization of ||L||_* enforces low-rankness in L, while the minimization of ||S||_1 maximizes the sparsity in S.
* Sum of singular values.
10. RPCA limitations
However, RPCA via PCP has some limitations:
* The low-rank component must be exactly low-rank.
* The sparse component must be exactly sparse.
* The input matrix is considered as the sum of a true low-rank matrix plus a true sparse matrix.
That's not all...
11. RPCA challenges (outliers)
In real applications, the observations are often corrupted by noise, and missing data can occur.
12. RPCA challenges (design)
Moreover, designing an RPCA algorithm needs to address some of the following questions:
* Decomposition: decompose the input data into one, two, or more terms.
* Convexity, norms and constraints: is there a suitable norm or constraint for each term? Use a convex surrogate norm or not?
* Loss function and regularization: is there a suitable loss function that is globally continuous and differentiable? Is there a suitable regularization to improve the learned model?
* Solvers: how to design an efficient optimization algorithm that is faster and more scalable? Online or offline?
* Multidimensionality: how to represent the input data?
...all while taking the BMS constraints into account!
In summary: designing an efficient RPCA algorithm for background/foreground separation needs to take into account both the BMS challenges and the mathematical issues of RPCA.
13. RPCA methods
A large number of approaches for robust low-rank and sparse modeling have been proposed in the last few years ([Zhou et al., 2014], [Lin, 2016], [Davenport and Romberg, 2016], and [Bouwmans et al., 2016]).
[Chart: number of citations of [Candès et al., 2011] per year, growing steadily from 2010-2011 to 2015-2016.]
In [Bouwmans et al., 2016], more than 300 papers addressed the problem of background/foreground separation. Some key issues and challenges remain, such as handling complex/dynamic background scenarios and performing in an incremental / real-time manner.
15. Decomposition into Low-rank and Sparse Matrices (DLSM)
A unified model is proposed to represent the state-of-the-art methods in a more general framework, named DLSM (Decomposition into Low-rank and Sparse Matrices) [Bouwmans, Sobral et al., 2016]. The DLSM framework categorizes the matrix separation problem into three main approaches: implicit, explicit and stable. It is formulated as follows:
A = Σ_{y=1}^{Y} K_y,  (3)
where, in most of the cases, Y ∈ {1, 2, 3}.
16. Implicit approaches (Y = 1)
The first matrix K1 is the best low-rank approximation (e.g. K1 = L) of the matrix A, where A ≈ L. This is an "implicit decomposition" because no explicit constraint is imposed with respect to the foreground objects. The residual matrix S (sparse or not) is recovered by S = A − L.
e.g. Low-Rank Approximation (LRA).
17. Low-Rank Approximation (LRA)
LRA is formulated as:
minimize_L f(A − L), subject to rank(L) = r,  (4)
where f(.) denotes a loss function (e.g. ||.||_F^2) and r (1 ≤ r < rank(A)) is the desired rank.
[Diagram: a sequence of k frames is vectorized into the input matrix A = [vec(F1) . . . vec(Fk)] (full rank); its low-rank approximation A_r = Σ_{i=1}^{r} σ_i u_i v_i^T yields a sequence of k background models (A_1 being the rank-1 approximation).]
A closed-form solution can be estimated by computing the "truncated" Singular Value Decomposition (SVD) of A.
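The closed-form solution via the truncated SVD can be sketched in a few lines of NumPy (illustrative code, not from the deck; `lra` is a hypothetical name):

```python
import numpy as np

def lra(A, r):
    # Best rank-r approximation (Eckart-Young):
    # A_r = sum_{i=1}^{r} sigma_i * u_i * v_i^T, from the truncated SVD.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

# For background modeling, A = [vec(F1) ... vec(Fk)] stacks the k frames
# as columns; each column of lra(A, r) is then a background model.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 5))
print(np.linalg.matrix_rank(lra(A, 2)))  # 2
```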
18. Limitations of LRA
LRA is formulated as:
minimize_L f(A − L), subject to rank(L) = r,  (4)
where f(.) denotes a loss function and r (1 ≤ r < rank(A)) represents the desired rank.
19. Affine rank minimization
In many applications, we need to recover a minimal-rank matrix subject to some problem-specific constraints, often characterized as an affine set. This affine rank minimization problem is defined as follows:
minimize_L rank(L), subject to A(L) = b,  (5)
where A : R^{m×n} → R^p denotes a linear mapping and b ∈ R^p represents a vector of observations of size p.
20. Matrix Completion (MC)
In many applications, we need to recover a minimal-rank matrix subject to some problem-specific constraints, often characterized as an affine set. This affine rank minimization problem is defined as follows:
minimize_L rank(L), subject to A(L) = b,  (5)
where A : R^{m×n} → R^p denotes a linear mapping and b ∈ R^p represents a vector of observations of size p.
A special case of problem (5) is the matrix completion problem:
minimize_L rank(L), subject to P_Ω(L) = P_Ω(A),  (6)
where P_Ω(.) denotes a sampling operator restricted to the elements of Ω (the set of observed entries). Let's take an example!
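One simple way to attack problem (6) when a target rank r is known is an alternating "fill and project" scheme (a hard-thresholded heuristic in the spirit of hard impute / SVP, rather than the nuclear-norm relaxation). The sketch below is illustrative and its names are hypothetical:

```python
import numpy as np

def complete(A, mask, r, max_iter=200):
    # Matrix completion with a known target rank r: keep the observed
    # entries P_Omega(A) fixed, fill the missing ones with the current
    # estimate, and project back onto the rank-r matrices via the SVD.
    L = np.zeros_like(A)
    for _ in range(max_iter):
        X = np.where(mask, A, L)                 # observed from A, rest from L
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        L = (U[:, :r] * s[:r]) @ Vt[:r, :]       # truncated-SVD projection
    return L

# Toy example: recover a rank-2 matrix from ~80% of its entries.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))
mask = rng.random(A.shape) < 0.8                 # Omega: observed entries
L = complete(A, mask, r=2)
```

For a well-sampled incoherent low-rank matrix, the iterates converge to the missing entries almost exactly.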
21. MC for Background Model Estimation
[Conceptual illustration and application to background model estimation: the sampling operator P_Ω(.) maps the input matrix A to its observed entries P_Ω(A); the background model is then recovered by solving minimize_L ||L||_*, subject to P_Ω(L) = P_Ω(A).]
22. Explicit approaches (Y = 2)
The matrices K1 = L and K2 = S are usually assumed to be the low-rank and sparse representations of the data, where A ≈ L + S. This is an "explicit decomposition" because we have two constraints: the first one enforcing a low-rank structure over the matrix L, and the second one enforcing a sparse structure over the matrix S. Explicit approaches usually work better for the problem of background/foreground separation in comparison to the implicit methods.
e.g. Robust Principal Component Analysis (RPCA) proposed by [Candès et al., 2011].
23. Background/foreground separation with RPCA via PCP
[Demo: the input video is decomposed into a low-rank component (background model) and a sparse component (moving objects); the foreground is then obtained by classification.]
24. Stable approaches (Y = 3)
The matrices K1 = L, K2 = S and K3 = E are usually assumed to be the low-rank, sparse and noise components, respectively, where A ≈ L + S + E. This is called a "stable decomposition" as it separates the sparse components in S from the noise in E. In the case of background/foreground separation, the noise matrix E can also represent some dynamic properties of the background.
e.g. Stable Principal Component Pursuit (Stable PCP) proposed by [Zhou et al., 2010].
25. PCP vs Stable PCP
[Figure: visual comparison of foreground segmentation between RPCA via PCP and RPCA via Stable PCP on an input video with dynamic background.]
26. Stable PCP for dynamic background scenes
Stable PCP tries to deal with this problem through its noise term, where the multi-modality of the background (e.g. waves) can be treated as the noise component (E). Some authors used an additional constraint to improve the background/foreground separation:
* [Oreifej et al., 2013] used a turbulence model driven by dense optical flow to enforce an additional constraint on the rank minimization.
* [Ye et al., 2015] proposed a robust motion-assisted matrix restoration (RMAMR) where a dense motion field given by optical flow is mapped into a weighting matrix.
* [Sobral et al., 2015b] presented a double-constrained RPCA (SCM-RPCA), where the sparse component is constrained by shape and confidence maps, both extracted from spatial saliency maps.
27. SCM-RPCA on very dynamic background
[Figure: input, background and foreground for very dynamic scenes.]
minimize_{L,S,E} ||L||_* + λ1||Π(S)||_1 + λ2||E||_F^2, subject to A = L + W ∘ S + E
[Sobral et al., 2015b]
28. SCM-RPCA over MarDT dataset
[Figure: input frame, saliency map, low-rank component, sparse component, foreground mask and ground truth.]
For the MarDT scenes, the temporal median of the saliency maps was subtracted, due to the high saliency of the buildings around the river.
Related publication: IEEE AVSS, 2015 [Sobral et al., 2015b].
MATLAB code: https://sites.google.com/site/scmrpca.
30. Multispectral data
Usually a multispectral video consists of a sequence of multispectral images sensed from contiguous spectral bands. Each multispectral image can be represented as a three-dimensional data cube, or tensor. Processing a sequence of multispectral images with hundreds of bands can be computationally expensive.
31. Limitations of matrix-based approaches
Matrix-based low-rank and sparse decomposition methods work only on a single dimension and consider each input frame as a vector:
* The multidimensional nature of the data cannot be exploited for efficient analysis.
* The local spatial information is lost, and erroneous foreground regions can be obtained.
Some authors used a tensor representation to solve this problem [Li et al., 2008, Hu et al., 2011, Tran et al., 2012, Tan et al., 2013, Sobral et al., 2014, Sobral et al., 2015c].
32. Tensor decomposition and factorization
Tensor decompositions have been widely studied and applied to many real-world problems [Kolda and Bader, 2009]. They were used to design low-rank approximation algorithms for multidimensional arrays, taking full advantage of the multi-dimensional structure of the data. Three widely-used models for low-rank decomposition on tensors are:
* Tucker/Tucker3 decomposition.
* CANDECOMP/PARAFAC (CP) decomposition.
* Tensor Robust PCA decomposition.
33. Tucker vs CP decomposition
[Figure: diagrams of the Tucker and CP decompositions.]
34. RPCA on tensors
Some authors extended the Robust PCA framework for matrices to the multilinear case [Goldfarb and Qin, 2014, Lu et al., 2016].
Tensor Robust PCA decomposition: the RPCA for matrices was reformulated into its "tensorized" version. For an N-order tensor X, it can be decomposed as:
X = L + S + E,  (7)
where L, S and E represent the low-rank, sparse and noise tensors.
35. Extending a matrix-based RPCA to tensors
Stochastic RPCA on matrices [Feng et al., 2013]:

$$\min_{W,H,S}\;\tfrac{1}{2}\lVert X - WH^{T} - S\rVert_F^2 + \tfrac{\lambda_1}{2}\left(\lVert W\rVert_F^2 + \lVert H\rVert_F^2\right) + \lambda_2\lVert S\rVert_1,\quad \text{subject to } L = WH^{T}. \tag{8}$$

Extension for tensors (OSTD) [Sobral et al., 2015c]:

$$\min_{\{W_i,H_i,S^{(i)}\}}\;\sum_{i=1}^{N}\left[\tfrac{1}{2}\lVert X^{(i)} - W_i H_i^{T} - S^{(i)}\rVert_F^2 + \tfrac{\lambda_1}{2}\left(\lVert W_i\rVert_F^2 + \lVert H_i\rVert_F^2\right) + \lambda_2\lVert S^{(i)}\rVert_1\right],\quad \text{subject to } L^{(i)} = W_i H_i^{T}. \tag{9}$$

$X^{(n)}$: n-mode matricization of tensor $X$; $X_i$: the $i$-th matrix.
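The online character of OSTD comes from the per-sample step of stochastic RPCA [Feng et al., 2013]: with the basis W fixed, each new (unfolded) frame x is split into a background part W r and a sparse foreground s. A minimal Python sketch of that inner step (the published implementations are in MATLAB; the function name, parameter values, and inner-loop count here are illustrative):

```python
import numpy as np

def soft(v, tau):
    """Elementwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def process_frame(W, x, lam1, lam2, n_inner=20):
    """One online step: alternately update the coefficients r and the sparse
    part s, minimizing 0.5||x - Wr - s||^2 + (lam1/2)||r||^2 + lam2||s||_1."""
    k = W.shape[1]
    # Ridge projector onto the current basis (W fixed for this frame).
    proj = np.linalg.solve(W.T @ W + lam1 * np.eye(k), W.T)
    s = np.zeros_like(x)
    for _ in range(n_inner):
        r = proj @ (x - s)          # low-rank coefficients
        s = soft(x - W @ r, lam2)   # sparse foreground via soft-thresholding
    return W @ r, s                 # background estimate and foreground
```

In the full algorithm, W itself is then updated from accumulated sufficient statistics before the next frame arrives, which is what keeps the memory cost independent of the sequence length.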
36. Comparison
OSTD was compared with three other methods:
CP-ALS [Kolda and Bader, 2009]
HORPCA [Goldfarb and Qin, 2014]
BRTF [Zhao et al., 2016]
CP-ALS, HORPCA, and BRTF are based on a batch optimization strategy:
batches of 100 frames at a time were used to reduce the computational cost
over the whole video sequence (a fourth-order tensor).
OSTD processes one multispectral or RGB image per time instance.
The algorithms were evaluated on the MVS dataset [Benezeth et al., 2014], the first dataset
for background subtraction on multispectral video sequences.
37. Qualitative results I
(Figure: visual comparison of background subtraction results over three scenes of
the MVS dataset; columns show the RGB image, the ground truth, the proposed
approach, BRTF, HORPCA, and CP-ALS.)
40. Computational time
Computational time (hh:mm:ss) for the first 100 frames at varying image resolutions.
Size HORPCA CP-ALS BRTF OSTD
160 × 120 00:01:35 00:00:40 00:00:22 00:00:04
320 × 240 00:04:56 00:02:09 00:03:50 00:00:12
The algorithms were implemented in MATLAB and run on a laptop with
Windows 7 Professional (64-bit), a 2.7 GHz Core i7-3740QM processor, and
32 GB of RAM.
OSTD achieved near real-time processing, since only one video frame is
processed at a time.
Related publications:
(IEEE ICCV Workshop on RSL-CV, 2015, [Sobral et al., 2015c]).
MATLAB code: https://github.com/andrewssobral/ostd.
42. LRSLibrary
LRSLibrary [Sobral et al., 2016] (https://github.com/andrewssobral/lrslibrary)
provides a collection of low-rank and sparse decomposition algorithms in MATLAB.
The LRSLibrary was designed for background/foreground separation in videos, and it
contains a total of 104 matrix-based and tensor-based algorithms.
44. Publications I
This presentation was based on the following publications [2]:
Talks (1)
2016 - Sobral, Andrews. “Recent advances on low-rank and sparse decomposition for
moving object detection”. Workshop/atelier: Enjeux dans la détection d’objets mobiles
par soustraction de fond. Reconnaissance de Formes et Intelligence Artificielle (RFIA),
2016 [3].
Journal papers (3)
2016 - Sobral, Andrews; Zahzah, El-hadi. “Matrix and Tensor Completion Algorithms for
Background Model Initialization: A Comparative Evaluation”, In the Special Issue on
Scene Background Modeling and Initialization (SBMI), Pattern Recognition Letters
(PRL), 2016. [Sobral and Zahzah, 2016].
2016 - Gong, Wenjuan; Zhang, Xuena; Gonzalez, Jordi; Sobral, Andrews; Bouwmans,
Thierry; Tu, Changhe; Zahzah, El-hadi. “Human Pose Estimation from Monocular
Images: A Comprehensive Survey”, Sensors, 2016. [Gong et al., 2016].
45. Publications II
2016 - Bouwmans, Thierry; Sobral, Andrews; Javed, Sajid; Ki Jung, Soon; Zahzah,
El-Hadi. “Decomposition into Low-rank plus Additive Matrices for
Background/Foreground Separation: A Review for a Comparative Evaluation with a
Large-Scale Dataset”, Computer Science Review, 2016. [Bouwmans et al., 2016].
Book chapters (1)
2015 - Sobral, Andrews; Bouwmans, Thierry; Zahzah, El-hadi. “LRSLibrary: Low-Rank
and Sparse tools for Background Modeling and Subtraction in Videos”. Chapter in the
handbook “Robust Low-Rank and Sparse Matrix Decomposition: Applications in Image
and Video Processing”, CRC Press, Taylor and Francis Group, 2015. [Sobral et al., 2016].
Conferences (7)
2015 - Sobral, Andrews; Javed, Sajid; Ki Jung, Soon; Bouwmans, Thierry; Zahzah,
El-hadi. “Online Stochastic Tensor Decomposition for Background Subtraction in
Multispectral Video Sequences”. ICCV Workshop on Robust Subspace Learning and
Computer Vision (RSL-CV), Santiago, Chile, December, 2015. [Sobral et al., 2015c].
46. Publications III
2015 - Javed, Sajid; Ho Oh, Seon; Sobral, Andrews; Bouwmans, Thierry; Ki Jung, Soon.
“Background Subtraction via Superpixel-based Online Matrix Decomposition with
Structured Foreground Constraints”. ICCV Workshop on Robust Subspace Learning and
Computer Vision (RSL-CV), Santiago, Chile, December, 2015. [Javed et al., 2015a].
2015 - Sobral, Andrews; Bouwmans, Thierry; Zahzah, El-hadi. “Comparison of Matrix
Completion Algorithms for Background Initialization in Videos”. Scene Background
Modeling and Initialization (SBMI), Workshop in conjunction with ICIAP 2015, Genova,
Italy, September, 2015. [Sobral et al., 2015a].
2015 - Sobral, Andrews; Bouwmans, Thierry; Zahzah, El-hadi. “Double-constrained
RPCA based on Saliency Maps for Foreground Detection in Automated Maritime
Surveillance”. Identification and Surveillance for Border Control (ISBC), International
Workshop in conjunction with AVSS 2015, Karlsruhe, Germany, August,
2015. [Sobral et al., 2015b].
2015 - Javed, Sajid; Sobral, Andrews; Bouwmans, Thierry; Ki Jung, Soon. “OR-PCA
with Dynamic Feature Selection for Robust Background Subtraction”. In Proceedings of
the 30th ACM/SIGAPP Symposium on Applied Computing (ACM-SAC), Salamanca,
Spain, 2015. [Javed et al., 2015b].
2014 - Javed, Sajid; Ho Oh, Seon; Sobral, Andrews; Bouwmans, Thierry; Ki Jung, Soon.
“OR-PCA with MRF for Robust Foreground Detection in Highly Dynamic Backgrounds”.
In the 12th Asian Conference on Computer Vision (ACCV 2014), Singapore, November,
2014. [Javed et al., 2014].
47. Publications IV
2014 - Sobral, Andrews; Baker, Christopher G.; Bouwmans, Thierry; Zahzah, El-hadi.
“Incremental and Multi-feature Tensor Subspace Learning applied for Background
Modeling and Subtraction”. International Conference on Image Analysis and Recognition
(ICIAR’2014), Vilamoura, Algarve, Portugal, October, 2014. [Sobral et al., 2014].
[2] For an updated list of publications and citations, see
https://scholar.google.fr/citations?user=0Nm0uHcAAAAJ.
[3] http://rfia2016.iut-auvergne.com/index.php/autres-evenements/detection-d-objets-mobiles-par-soustraction-de-fond
48. [Baf et al., 2008] Baf, F. E., Bouwmans, T., and Vachon, B. (2008). Fuzzy integral for moving object detection. In IEEE
International Conference on Fuzzy Systems, pages 1729–1736.
[Benezeth et al., 2014] Benezeth, Y., Sidibe, D., and Thomas, J. B. (2014). Background subtraction with multispectral video
sequences. In International Conference on Robotics and Automation (ICRA).
[Bouwmans, 2014] Bouwmans, T. (2014). Traditional and recent approaches in background modeling for foreground detection:
An overview. In Computer Science Review.
[Bouwmans et al., 2016] Bouwmans, T., Sobral, A., Javed, S., Jung, S. K., and Zahzah, E. (2016). Decomposition into
low-rank plus additive matrices for background/foreground separation: A review for a comparative evaluation with a
large-scale dataset. Computer Science Review.
[Candès et al., 2011] Candès, E. J., Li, X., Ma, Y., and Wright, J. (2011). Robust Principal Component Analysis? Journal of
the ACM.
[Cucchiara et al., 2001] Cucchiara, R., Grana, C., Piccardi, M., and Prati, A. (2001). Detecting objects, shadows and ghosts in
video streams by exploiting color and motion information. In Proceedings 11th International Conference on Image Analysis
and Processing, pages 360–365.
[Davenport and Romberg, 2016] Davenport, M. A. and Romberg, J. (2016). An overview of low-rank matrix recovery from
incomplete observations. IEEE Journal of Selected Topics in Signal Processing, 10(4):608–622.
[Elgammal et al., 2000] Elgammal, A. M., Harwood, D., and Davis, L. S. (2000). Non-parametric model for background
subtraction. In Proceedings of the 6th European Conference on Computer Vision-Part II, ECCV ’00, pages 751–767, London,
UK. Springer-Verlag.
[Feng et al., 2013] Feng, J., Xu, H., and Yan, S. (2013). Online robust PCA via stochastic optimization. In Advances in Neural
Information Processing Systems (NIPS).
[Goldfarb and Qin, 2014] Goldfarb, D. and Qin, Z. T. (2014). Robust low-rank tensor recovery: Models and algorithms. SIAM
Journal on Matrix Analysis and Applications.
[Gong et al., 2016] Gong, W., Zhang, X., Gonzalez, J., Sobral, A., Bouwmans, T., Tu, C., and Zahzah, E. (2016). Human pose
estimation from monocular images: A comprehensive survey. Sensors, 16(12).
[Hu et al., 2011] Hu, W., Li, X., Zhang, X., Shi, X., Maybank, S., and Zhang, Z. (2011). Incremental tensor subspace learning
and its applications to foreground segmentation and tracking. International Journal of Computer Vision (IJCV).