SlideShare a Scribd company logo
1 of 7
Download to read offline
000
001
002
003
004
005
006
007
008
009
010
011
012
013
014
015
016
017
018
019
020
021
022
023
024
025
026
027
028
029
030
031
032
033
034
035
036
037
038
039
040
041
042
043
044
045
046
047
048
049
050
051
052
053
054
055
056
057
058
059
060
061
062
063
064
065
066
067
068
069
070
071
072
073
074
075
076
077
078
079
080
081
082
083
084
085
086
087
088
089
090
091
092
093
094
095
096
097
098
099
100
101
102
103
104
105
106
107
CVPR
#LL
CVPR
#LL
CVPR 2016 Submission #LL. CONFIDENTIAL REVIEW COPY. DO NOT DISTRIBUTE.
A Study on Face Alignment Algorithms
Anonymous CVPR submission
Paper ID LL
Abstract
This paper studies recent development in face align-
ment. We summarize and critique three regression based
face alignment algorithms. We also implement the super-
vised descent method. Our implementation gives reason-
able results compared to the one reported by the author.
1. Introduction
Face alignment, also known as facial feature detection,
refers to locating p facial landmarks x ∈ Rp
(such as eye
brows and corners of mouth) in a face image I ∈ Rm
of m pixels. Recently, regression based face alignment is
showing promising results. Basically, those methods try to
find a series of regression matrix Rk
, k = 1, 2, · · · that
maps the image data to the movement of landmarks ∆xk
such the initial landmarks x0
are refined progressively by
xk
= xk−1
+ ∆xk
to find the optimal landmarks loca-
tions x∗
. In this paper, we review three regression based
methods[14][10][13] and implement one of them[14].
2. Summarize and Critique
2.1. SDM:Supervised descent method
2.1.1 Problem formulation and algorithm
Supervised descent method[14] formulates the face align-
ment problem as a minimization problem
∆x∗
= argmin
∆x
f(∆x) = argmin
∆x
h(I(x0 + ∆x)) − φ∗
2
2
(1)
where h(I(x)) is the SIFT[9] feature extraction function
that calculates feature values of image I at landmarks x and
φ∗ = h(I(x∗
)) denotes the SIFT values at the ground truth
landmarks x∗
.
One classic method to solve minimization problems is
the Newton’s method [3]. Yet Newton’s method requires to
compute the inverse of the Hessian of the function, which
is not only computational expensive for large data size but
also infeasible for non-differentiable functions, such as the
SIFT operator here.
To overcome the limitation of Newton’s method, in-
stead of calculating the descent direction using the Hessian,
supervised descent method learns such descent directions
from training data. More specifically, the algorithm learns
a sequence of mapping matrix Rk that maps the local fea-
tures φ at landmark x to the motion of landmarks ∆x from
training dataset {Ii} with known landmarks {x∗
i }
argmin
Rk,bk
Ii xi
k
∆xki
∗ − Rk
φk
i − bk 2
(2)
and updates the location of landmarks by
xk
= xk−1
+ Rk−1
φk−1
+ bk−1
(3)
where ∆xki
∗ = x∗
i − xki
is the displacement between the
ground truth landmarks and the estimated landmarks at the
kth
stage for image i.
2.1.2 Evaluation and results
The algorithm is first validated on simple analytic functions
and compared to Newtons method. Then the algorithm was
tested on the LFPW datasets[2][12] and evaluated by the
distance between the landmarks found by the algorithm and
the ground truth landmarks provided. Finally the algorithm
was tested for facial feature tracking on video dataset[1][7]
where the algorithm tries to detect facial landmarks in each
frame.
Validation on analytic function shows the algorithm con-
verges faster than Newton’s method experimentally. Re-
sults on the LFPW dataset show the supervised descent
method outperforms two other recently proposed meth-
ods and the algorithm is also reported to work well on
video data (though no quantitative results are provided).
The method can handle images with illumination and pose
changes pretty well. Yet for face images with extreme posi-
tion change (such as a side face) and sever occlusion, the al-
gorithm shows large error as compared to the ground truth.
1
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
CVPR
#LL
CVPR
#LL
CVPR 2016 Submission #LL. CONFIDENTIAL REVIEW COPY. DO NOT DISTRIBUTE.
2.2. LBF:Local binary features regression
2.2.1 Problem formulation and algorithm
SDM uses the classic SIFT method for feature extraction.
LBF[10] proposes to improve the accuracy of face align-
ment by learning local features from training data instead.
The argument is that such features are more adaptive to
specific task. By learning features from a local region the
method can take advantage of more discriminative features
and save computational effort by learning each feature in-
dependently in a local region. The noises in the learned fea-
tures are suppressed by using a global regression that maps
all learned features together to give the optimal motion of
landmarks,
To learn for the local features, the method takes a region
around the lth landmarks and learns the local features φl in
the region by solving
argmin
Rk
l ,φk
l Ii
πl ◦ ∆xki
∗ − Rk
l φk
l (Ii, xk
i ) 2
2 (4)
where πl◦ denotes the operator that gets the l th element
of ∆xki
∗ . Comparing to Eq(2) in the supervised descent
method, this method jointly learns a local feature φk
l and
a local mapping Rk
l instead of using SIFT to calculate φk
for all landmark locations and learning a global mapping
Rk
directly.
The author solves the learning problem in Eq(4) using
the standard regression random forest[4]. During testing,
each sample will traverse the trees until it reaches on leaf
node for each tree. Each dimension in φk
l is 1 if the sam-
ple reaches the corresponding leaf node and 0 otherwise,
resulting in a binary local feature that is highly sparse.
To suppress noise in the local feature, the algorithm dis-
cards the local mapping function Rk
l learns in Eq(4) and
instead learns a global mapping Rk
by solving the problem
that is similar with Eq(2), except that the features used is the
learned local binary features rather than SIFT features and a
L2 regularization on Rk
is needed as the learned local fea-
tures have a dimension much higher than SIFT. However,
due to the sparse nature of the features, the problem can be
solved easily using a dual coordinate descent method[6].
2.2.2 Evaluation and results
The algorithm is evaluated on the LFPW datasets [2][12]the
same as the one used in SDM and two other datasets,
”Hellen”[8] and ”300-W”[11] that have richer images and
landmarks. The results show that the algorithm achieves
comparable accuracy on the first two datasets as compared
to other methods including SDM and higher accuracy on
the third dataset which includes the first two as well as other
challenging images. The paper shows sample results of both
SDM and LBF. SDM fails on some images due to large
pose variation while LBF detects most landmarks correctly.
However, for images with poor illumination condition or
unusual expressions (such as a widely opened mouth), both
SDM and LBF fail.
Also, the speed of different algorithms is compared and
the proposed method is the fastest method that runs at 300+
frames per second (FPS). The author further evaluates a
faster version with smaller feature size which runs at 3000
FPS while achieving comparable accuracy as compared to
other methods.
2.3. HPO:Face alignment under poses and occlusion
Both SDM and LBF treat all landmarks equally without
considering occlusions and head poses explicitly. To han-
dle occlusion, HPO [13] proposes to take the visibility of
landmarks into account and learns a model to predict the
probability of landmark visibility. Such model is then used
to assist the face alignment by adding the visibility infor-
mation to local features and shape configuration around a
landmark.It also proposes to unify pose variation and oc-
clusion by treating invisible landmarks due to pose change
as a special case of occlusion.
First, the algorithm learns a function from training
dataset as well as synthetic data that gives a prior proba-
bility of possible occlusion patterns. For example, occlu-
sion usually happens in a block manner (Fig 1 a) where an
entire block of the face is missing while it is very unlikely
to have a face image where every other pixel is occluded
(like a checkboard in Fig 1 b ). Fig 1 shows example syn-
thetic data that models possible occlusion patterns and an-
other less likely block patterns. The probability function
Loss(c), where c represents the probability of each pos-
sible occlusion patterns to occur in an image of m pixels
and it is a vector of length 2m
. After learning Loss(c), the
probability of landmark visibility is learned by optimizing
argmin
∆pk,T k
∆pk
− Tk
φ(I, xk−1
) 2
2 + λEpk [Loss(c)]
s.t. pk
= pk−1
+ ∆pk
(5)
The algorithm solves the problem in Eq(5) iteratively by
optimizing the objective function with respect to ∆p and T.
Solving for T is a simple least square problem while solving
for ∆p is more complicated and gradient descent method is
used.
Given the predicted landmark visibility probabilities, the
algorithm updates the landmark locations by learning the
descent direction Rk
that maps local image features to
movement of landmarks. For occluded landmarks, the re-
gion around them is not facial part thus the local appearance
around them should be less useful in predicting landmark
locations. Therefore, the local features around landmark xk
2
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
231
232
233
234
235
236
237
238
239
240
241
242
243
244
245
246
247
248
249
250
251
252
253
254
255
256
257
258
259
260
261
262
263
264
265
266
267
268
269
270
271
272
273
274
275
276
277
278
279
280
281
282
283
284
285
286
287
288
289
290
291
292
293
294
295
296
297
298
299
300
301
302
303
304
305
306
307
308
309
310
311
312
313
314
315
316
317
318
319
320
321
322
323
CVPR
#LL
CVPR
#LL
CVPR 2016 Submission #LL. CONFIDENTIAL REVIEW COPY. DO NOT DISTRIBUTE.
(a) Natural Occlusion (b) Checkboard-like Occlusion
Figure 1: Example synthetic data shows different occlusion
patterns (gray blocks). Small numbers indicate facial land-
marks.
is weighed by the corresponding visibility probability pk
.
Mathematically, this modifies Eq(2) to
argmin
Rk,bk
Ii xi
k
∆xki
∗ − Rk
[pk
φk
i ] 2
(6)
where the operator denotes the product between the visi-
bility probability and the feature vector around each of the
landmark.
2.3.1 Evaluation and Results
The algorithm was evaluated on three datasets, the LFPW
dataset[2][12] which is used by both SDM and LBF, the
Helen dataset[8] used by LBF and a new dataset that has
more occluded faces and larger pose variations[5]. The pro-
posed algorithm achieves comparable accuracy in terms of
the distance between ground truth and detected landmarks
on the first two datasets while outperforming SDM and LBF
on the third dataset. In general, the algorithm is able to
detect occluded landmarks and handle pose variance better
than SDM . However, the algorithm still fails for extreme
head poses. Besides, example images of occlusion detec-
tion shown in the paper all have a nearly fronted pose, which
should reduce the complexity of occlusion detection. These
observations suggest that the power of the occlusion model-
ing and the idea of viewing side faces as a case of occlusion
is still limited.
In terms of computational efficiency, the algorithm is
slower than the previous two methods. The proposed al-
gorithm runs at 2 FPS while the SDM and LBF runs at 160
and 460 FPS respectively on the LFPW data base.
2.4. Discussion and Comparison
The three algorithms all aim at solving face alignment
problem using regression method and achieve promising re-
sults. Table 1 summarizes the quantitative results (mean ab-
solute error/FPS) obtained by the 3 algorithms on different
datasets, ”NA” indicates no result reported.
LFPW Helen 300-W COFW
SDM 3.49/160 5.85/21 7.52/70 6.69/NA
LBF 3.35/460 5.41/200 6.32/320 NA
HPO 3.93/2 5.49/2 NA 5.18/2
Table 1: Summary of quantitative results of different algo-
rithms
SPM proposes a general frame work to solve face align-
ment as an optimization problem without calculating the in-
verse of Hessian. The method is general to handle different
feature extraction functions as it does not impose any spe-
cific requirement such as differentiability. Also, the learn-
ing process requires solving a simple least square problem
only and appears to have a very fast convergence rate (4 or
5 steps) experimentally.
However, the underlying assumption for Rk to be the
optimal descent directions is that the entire dataset repre-
sents a set of similar functions with similar descent direc-
tions. Under this assumption, it is possible to average over
the training dataset and learn the descent directions that are
optimal for both training and testing data. If this assump-
tion does not hold, the algorithm can fail easily. Consider a
simple case where we sample two points of a function that
have opposite descent direction as training samples, aver-
aging over them will learn a 0 direction that leads the opti-
mization no where. This should also be the reason that the
algorithm fails on faces with extreme poses. Therefore, it
is very important to preprocess the dataset to make sure the
dataset is homogeneous. Details of such preprocessing will
be described in the experimental section.
LBF extends the SDM by learning local binary features,
which contributes to both computational efficiency and ac-
curacy of the algorithm. On the other hand, learning local
features also adds on model complexity and requires more
parameter tuning, such as the size of the local region for
the local feature learning process, which varies at differ-
ent training stages. The author proposes to perform cross-
validation to determine the optimal model parameters yet it
is not clear in the paper how stable the parameters are across
different datasets and how robust the method is to different
parameters.
HPO aims at resolving variation of head poses and oc-
clusions, which is a major challenge in face alignment and
proposes to view the two problems in a unified way. The
results show the method’s efficacy in detecting occluded
points and various poses yet the power still seems limited
for some extreme cases.
3
324
325
326
327
328
329
330
331
332
333
334
335
336
337
338
339
340
341
342
343
344
345
346
347
348
349
350
351
352
353
354
355
356
357
358
359
360
361
362
363
364
365
366
367
368
369
370
371
372
373
374
375
376
377
378
379
380
381
382
383
384
385
386
387
388
389
390
391
392
393
394
395
396
397
398
399
400
401
402
403
404
405
406
407
408
409
410
411
412
413
414
415
416
417
418
419
420
421
422
423
424
425
426
427
428
429
430
431
CVPR
#LL
CVPR
#LL
CVPR 2016 Submission #LL. CONFIDENTIAL REVIEW COPY. DO NOT DISTRIBUTE.
3. Implementation of SDM
3.1. Face Detection with OpenCV
SDM algorithm requires predefined face location. The
face is detected with OpenCV’s Haar featured-based cas-
cade classifier. The face detection rate is 82.3% on LFPW
dataset. We discarded those images that cannot be detected
and ended up with 434 training images and 120 test im-
ages. The output from OpenCV’s face detector is a rectan-
gle which contains the region of the face. The face rectangle
will later be used as a reference to normalize the dataset and
initialize the algorithm.
3.2. Calculate Mean Face
The face detection information gathered from the previ-
ous step is used to center and normalize the face images
as well as the labeled landmarks. We normalized the face
rectangle to 200 pixels by 200 pixels and take 100 pixels
of padding from each side, since some of the labeled land-
marks is outside the face rectangle. We then used the nor-
malized landmarks position to generate the initial mean face
by calculating the mean position value of each landmark
from all images in the training dataset. This initial mean
face will be used in the initialization step of both learning
and predicting stage.
3.3. Learning Stage
3.3.1 Aligned Landmarks with the Face Position
The first step of SDM is generating the initial estimation of
facial landmarks based on the mean face shape we calcu-
lated in previous step and the variance of scale and trans-
lation of training face images. In our implementation, we
first resized the image based on the face size detected by
OpenCVs face detector; therefore initial face shape will be
the same across different images.
3.3.2 Extract HoG features from Landmarks
The next step is extracting the HoG features from patches
around the landmarks with size 32 pixels by 32 pixels. The
HoG features is equivalent to the SIFT features of fixed
scale and predefined local patch as described in the origi-
nal paper. The HoG features Given an image d ∈ m×1
of m pixels and x ∈ p×1
indexes p landmarks in the im-
age, we generated a feature vector φ(di
(x)) ∈ 128×1
per
landmark from the patch. The generated features for each
landmarks of the image is than concatenate into a vector
φ(di
(x)) ∈ 128p×1
, where p is the number of landmarks.
For the purpose of matrix calculation in MATLAB, we con-
catenate all feature vectors in the training dataset into a ma-
trix Φ ∈ n×128p
, where n is the number of images in train-
ing dataset and each row of the matrix is the transpose of
corresponding feature vector of that image.
Φ =









φ(d1
(x))T
φ(d2
(x))T
φ(d3
(x))T
...
φ(dn−1
(x))T
φ(dn
(x))T









(7)
3.3.3 Perform SDM Regression
The learning for SDM follows the Eq(2) and Eq(3). The
landmarks are predicted by a generic linear combination.
The SDM will learn a sequence of generic descent direction
Rk and bias term bk. To learn the generic descent direction
Rk and bias term bk, we minimized Eq(2), which is a well-
known linear least squares problem which can be solved in
closed form. To reformulate the equation in a matrix form,
we got
D = ΦA (8)
A =
bk
Rk (9)
D =









∆x1
∆x2
∆x3
...
∆xn−1
∆xn









, Φ =









1 φ1
1 φ2
1 φ3
...
...
1 φn−1
1 φn









(10)
where D ∈ n×2p
, Φ ∈ n×(128p+1)
and A ∈
(128p+1)×2p
. The dimension of D is n × 2p is because we
concatenate the x and y coordinate of the landmarks into a
row vectors, thus the length becomes two times the number
of landmarks. The close form solution for Eq(8) is
A = (ΦT
Φ)−1
ΦT
D (11)
One important implementation detail to notice is that to
perform the learning, the landmarks position need to be
normalized to a region of an unit square, i.e.∆ˆxk
=
∆xk
/s,where s is calculated as the distance between the
minimal and maximal coordinates among all landmarks xk
.
3.3.4 Recalculate the updated landmarks
After we learned the new Rk
and bk
, we can use these pa-
rameters to predict the new landmarks position by comput-
ing Eq(3). Since we normalized the landmark position to
a region of an unit square, we need to remapping the ∆xk
from Eq (1). back to its original dimension.
4
432
433
434
435
436
437
438
439
440
441
442
443
444
445
446
447
448
449
450
451
452
453
454
455
456
457
458
459
460
461
462
463
464
465
466
467
468
469
470
471
472
473
474
475
476
477
478
479
480
481
482
483
484
485
486
487
488
489
490
491
492
493
494
495
496
497
498
499
500
501
502
503
504
505
506
507
508
509
510
511
512
513
514
515
516
517
518
519
520
521
522
523
524
525
526
527
528
529
530
531
532
533
534
535
536
537
538
539
CVPR
#LL
CVPR
#LL
CVPR 2016 Submission #LL. CONFIDENTIAL REVIEW COPY. DO NOT DISTRIBUTE.
3.3.5 Iterate the Learning Process
The new landmark position we got from 3.3.4 are now the
new initial land marks, and we can repeat the learning pro-
cess from 3.3.2 to 3.3.4 several times, and store the Rk
and
bk
value in each iteration. These values will be used in the
predicting stage later. The learning error decreases mono-
tonically, and the algorithm converges in less than five iter-
ations.
3.4. Predicting Stage
The steps in predicting stage is very similar to the learn-
ing stage. Instead of learning the R and b parameter in each
interation, we used the parameter R and b learned from
training data. The mean face we got from training data was
used to generate the initial landmark positions in the testing
stage. Eq(2) was used to compute the predicted landmark
position. Notice that the landmark positions also need to be
normalized to a region of an unit square when calculating
the ∆x.
Algorithm 1 SDM Learning Algorithm
1: function REGRESS(imgs, marks, ans)
2: marks ← normalized(marks)
3: Φ ← Features(imgs, tmpMarks)
4: ∆x ← ans − marks
5: Φ ← AddBias(Φ)
6: R ← (ΦT
Φ)−1
ΦT
∆x
7: ∆x ← ΦR
8: marks ← marks + Remap(∆x, faceRects)
9: return R, marks
10: function SDM TRAIN(imgs, marks, ans)
11: for k = 1 : 5 do
12: if First Iteration then
13: faceRects ← FaceDetect(imgs)
14: marks ← MeanFace(faceRects)
15: [R[k], marks] ←
Regress(imgs, marks, ans)
16: return R
4. Evaluation and Results
4.1. Dataset
In our experiment, we used the LFPW dataset the same
as the one in the paper. LFPW consists of 1,432 faces from
images download from the web and each image is labeled
with 35 fiducial landmarks by three MTurk workers. The 35
fiducial landmarks mark eyebrows (4 landmarks each), eyes
(5 landmarks each), nose (4 landmarks), mouse (5 land-
marks) and chin (1 landmark). We discarded the 6 land-
marks which labeled ears since ears are often covered by
Algorithm 2 SDM Predicting Algorithm
1: function REGRESS(imgs, R, marks)
2: marks ← normalized(marks)
3: Φ ← Features(imgs, tmpMarks)
4: Φ ← AddBias(Φ)
5: ∆x ← ΦR
6: marks ← marks + Remap(∆x, faceRects)
7: return marks
8: function SDM PREDICT(imge, R)
9: for cascade = 1 : 5 do
10: if First Iteration then
11: faceRects ← FaceDetect(imgs)
12: marks ← MeanFace(faceRects)
13: marks ← Regress(imgs, R[k], marks)
14: return marks
Figure 2: 29 Labeled Landmarks
hair in our dataset; therefore, 29 landmarks were used. Fig 2
shows the 29 labeled landmarks on an example image. Due
to the copyright issue, the dataset only contains URL to the
labeled images. Most of the URLs are no longer valid. We
downloaded 527 of the 1132 training images and 146 of the
300 test images.
(a) Initial mean face (b) Predicted landmarks
Figure 5: a) The initial mean face normalized using face
detector. b) The predicted landmarks after five iterations
5
540
541
542
543
544
545
546
547
548
549
550
551
552
553
554
555
556
557
558
559
560
561
562
563
564
565
566
567
568
569
570
571
572
573
574
575
576
577
578
579
580
581
582
583
584
585
586
587
588
589
590
591
592
593
594
595
596
597
598
599
600
601
602
603
604
605
606
607
608
609
610
611
612
613
614
615
616
617
618
619
620
621
622
623
624
625
626
627
628
629
630
631
632
633
634
635
636
637
638
639
640
641
642
643
644
645
646
647
CVPR
#LL
CVPR
#LL
CVPR 2016 Submission #LL. CONFIDENTIAL REVIEW COPY. DO NOT DISTRIBUTE.
Figure 3: This top row shows the initial guess of the face landmarks. The bottom shows the predicted landmarks from SDM
algorithm after 5 iterations.
Figure 4: Example results on the YouTube Celebrities dataset.
4.2. Results
The author reported a mean error (×10−2
) of 3.47. Our
implementation shows a mean error(×10−2
) of 4.95. Con-
sidering that our dataset contains only one half of the im-
ages the original authors used, the difference between our
result and the original authors’ is acceptable. The smaller
dataset might make our learning model less representative
and more vulnerable to the face variance, such as the as-
pect ratio of the face, the distance between different organs,
and the orientation of the face, hence increase the alignment
error. Fig 5a shows an example of initial landmarks with
mean face normalized using the face detector and Fig 5b
shows the predicted landmarks using the SDM algorithm.
And the second row of Fig 3 demonstrates that the SDM
algorithm’s prediction is robust under different light condi-
tion, and a large variation in poses. Fig 4 shows example of
tracking result on YouTube Celebrities dataset. The result
video can be viewed at https://www.youtube.com/
watch?v=JCIR_BmhGfY. The processing time for the
SDM algorithm is around 0.055 seconds per frame.
5. Conclusion
In this paper, we summarize and compare three regres-
sion based face alignment algorithm. All three method re-
ports promising results. The first algorithm proposes a gen-
eral framework to learn regressor and the remaining two al-
gorithms extend the first one in terms of accuracy and ro-
bustness to occlusion. Our implementation of the first algo-
rithm reproduces its experiments on both image and video
data and achieves comparable result to the one reported.
References
[1] M. S. Bartlett, G. C. Littlewort, M. G. Frank, C. Lain-
scsek, I. R. Fasel, and J. R. Movellan. Automatic
recognition of facial actions in spontaneous expres-
sions. Journal of multimedia, 1(6):22–35, 2006. 1
[2] P. N. Belhumeur, D. W. Jacobs, D. J. Kriegman, and
N. Kumar. Localizing parts of faces using a consen-
sus of exemplars. Pattern Analysis and Machine In-
telligence, IEEE Transactions on, 35(12):2930–2940,
2013. 1, 2, 3
[3] M. J. Black and A. D. Jepson. Eigentracking: Robust
matching and tracking of articulated objects using a
view-based representation. International Journal of
Computer Vision, 26(1):63–84, 1998. 1
[4] L. Breiman. Random forests. Machine learning,
45(1):5–32, 2001. 2
[5] X. P. Burgos-Artizzu, P. Perona, and P. Doll´ar. Robust
face landmark estimation under occlusion. In Com-
puter Vision (ICCV), 2013 IEEE International Con-
ference on, pages 1513–1520. IEEE, 2013. 3
[6] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and
C.-J. Lin. Liblinear: A library for large linear classi-
fication. The Journal of Machine Learning Research,
9:1871–1874, 2008. 2
6
648
649
650
651
652
653
654
655
656
657
658
659
660
661
662
663
664
665
666
667
668
669
670
671
672
673
674
675
676
677
678
679
680
681
682
683
684
685
686
687
688
689
690
691
692
693
694
695
696
697
698
699
700
701
702
703
704
705
706
707
708
709
710
711
712
713
714
715
716
717
718
719
720
721
722
723
724
725
726
727
728
729
730
731
732
733
734
735
736
737
738
739
740
741
742
743
744
745
746
747
748
749
750
751
752
753
754
755
CVPR
#LL
CVPR
#LL
CVPR 2016 Submission #LL. CONFIDENTIAL REVIEW COPY. DO NOT DISTRIBUTE.
[7] M. Kim, S. Kumar, V. Pavlovic, and H. Rowley. Face
tracking and recognition with visual constraints in
real-world videos. In Computer Vision and Pattern
Recognition, 2008. CVPR 2008. IEEE Conference on,
pages 1–8. IEEE, 2008. 1
[8] V. Le, J. Brandt, Z. Lin, L. Bourdev, and T. S. Huang.
Interactive facial feature localization. In Computer
Vision–ECCV 2012, pages 679–692. Springer, 2012.
2, 3
[9] D. G. Lowe. Object recognition from local scale-
invariant features. In Computer vision, 1999. The pro-
ceedings of the seventh IEEE international conference
on, volume 2, pages 1150–1157. Ieee, 1999. 1
[10] S. Ren, X. Cao, Y. Wei, and J. Sun. Face alignment at
3000 fps via regressing local binary features. In Com-
puter Vision and Pattern Recognition (CVPR), 2014
IEEE Conference on, pages 1685–1692. IEEE, 2014.
1, 2
[11] C. Sagonas, G. Tzimiropoulos, S. Zafeiriou, and
M. Pantic. A semi-automatic methodology for facial
landmark annotation. In Computer Vision and Pattern
Recognition Workshops (CVPRW), 2013 IEEE Con-
ference on, pages 896–903. IEEE, 2013. 2
[12] J. Saragih. Principal regression analysis. In Computer
Vision and Pattern Recognition (CVPR), 2011 IEEE
Conference on, pages 2881–2888. IEEE, 2011. 1, 2, 3
[13] Y. Wu and Q. Ji. Robust facial landmark detection
under significant head poses and occlusion. In Proc.
Int. Conf. Comput. Vision. IEEE, volume 1, 2015. 1, 2
[14] X. Xiong and F. De la Torre. Supervised descent
method and its applications to face alignment. In Com-
puter Vision and Pattern Recognition (CVPR), 2013
IEEE Conference on, pages 532–539. IEEE, 2013. 1
7

More Related Content

What's hot

Branch and bound technique
Branch and bound techniqueBranch and bound technique
Branch and bound techniqueishmecse13
 
Miniproject final group 14
Miniproject final group 14Miniproject final group 14
Miniproject final group 14Ashish Mundhra
 
Contour Line Tracing Algorithm for Digital Topographic Maps
Contour Line Tracing Algorithm for Digital Topographic MapsContour Line Tracing Algorithm for Digital Topographic Maps
Contour Line Tracing Algorithm for Digital Topographic MapsCSCJournals
 
Real-Time Multiple License Plate Recognition System
Real-Time Multiple License Plate Recognition SystemReal-Time Multiple License Plate Recognition System
Real-Time Multiple License Plate Recognition SystemIJORCS
 
Numerical disperison analysis of sympletic and adi scheme
Numerical disperison analysis of sympletic and adi schemeNumerical disperison analysis of sympletic and adi scheme
Numerical disperison analysis of sympletic and adi schemexingangahu
 
facility layout paper
 facility layout paper facility layout paper
facility layout paperSaurabh Tiwary
 
Double-constrained RPCA based on Saliency Maps for Foreground Detection in Au...
Double-constrained RPCA based on Saliency Maps for Foreground Detection in Au...Double-constrained RPCA based on Saliency Maps for Foreground Detection in Au...
Double-constrained RPCA based on Saliency Maps for Foreground Detection in Au...ActiveEon
 
Help the Genetic Algorithm to Minimize the Urban Traffic on Intersections
Help the Genetic Algorithm to Minimize the Urban Traffic on IntersectionsHelp the Genetic Algorithm to Minimize the Urban Traffic on Intersections
Help the Genetic Algorithm to Minimize the Urban Traffic on IntersectionsIJORCS
 
Basics of Robotics
Basics of RoboticsBasics of Robotics
Basics of Robotics홍배 김
 
A computationally efficient method to find transformed residue
A computationally efficient method to find transformed residueA computationally efficient method to find transformed residue
A computationally efficient method to find transformed residueiaemedu
 
(CVPR2021 Oral) RobustNet: Improving Domain Generalization in Urban-Scene Seg...
(CVPR2021 Oral) RobustNet: Improving Domain Generalization in Urban-Scene Seg...(CVPR2021 Oral) RobustNet: Improving Domain Generalization in Urban-Scene Seg...
(CVPR2021 Oral) RobustNet: Improving Domain Generalization in Urban-Scene Seg...Sungha Choi
 
Dynamic Path Planning
Dynamic Path PlanningDynamic Path Planning
Dynamic Path Planningdare2kreate
 
Hidden Markov Random Fields and Direct Search Methods for Medical Image Segme...
Hidden Markov Random Fields and Direct Search Methods for Medical Image Segme...Hidden Markov Random Fields and Direct Search Methods for Medical Image Segme...
Hidden Markov Random Fields and Direct Search Methods for Medical Image Segme...EL-Hachemi Guerrout
 
A Path Planning Technique For Autonomous Mobile Robot Using Free-Configuratio...
A Path Planning Technique For Autonomous Mobile Robot Using Free-Configuratio...A Path Planning Technique For Autonomous Mobile Robot Using Free-Configuratio...
A Path Planning Technique For Autonomous Mobile Robot Using Free-Configuratio...CSCJournals
 
PhD Thesis Defense Presentation: Robust Low-rank and Sparse Decomposition for...
PhD Thesis Defense Presentation: Robust Low-rank and Sparse Decomposition for...PhD Thesis Defense Presentation: Robust Low-rank and Sparse Decomposition for...
PhD Thesis Defense Presentation: Robust Low-rank and Sparse Decomposition for...ActiveEon
 
Path Planning And Navigation
Path Planning And NavigationPath Planning And Navigation
Path Planning And Navigationguest90654fd
 

What's hot (20)

Branch and bound technique
Branch and bound techniqueBranch and bound technique
Branch and bound technique
 
Miniproject final group 14
Miniproject final group 14Miniproject final group 14
Miniproject final group 14
 
Contour Line Tracing Algorithm for Digital Topographic Maps
Contour Line Tracing Algorithm for Digital Topographic MapsContour Line Tracing Algorithm for Digital Topographic Maps
Contour Line Tracing Algorithm for Digital Topographic Maps
 
[IJCT-V3I2P24] Authors: Osama H. Abdelwahed; and M. El-Sayed Wahed
[IJCT-V3I2P24] Authors: Osama H. Abdelwahed; and M. El-Sayed Wahed[IJCT-V3I2P24] Authors: Osama H. Abdelwahed; and M. El-Sayed Wahed
[IJCT-V3I2P24] Authors: Osama H. Abdelwahed; and M. El-Sayed Wahed
 
Real-Time Multiple License Plate Recognition System
Real-Time Multiple License Plate Recognition SystemReal-Time Multiple License Plate Recognition System
Real-Time Multiple License Plate Recognition System
 
Iaetsd 128-bit area
Iaetsd 128-bit areaIaetsd 128-bit area
Iaetsd 128-bit area
 
Numerical disperison analysis of sympletic and adi scheme
Numerical disperison analysis of sympletic and adi schemeNumerical disperison analysis of sympletic and adi scheme
Numerical disperison analysis of sympletic and adi scheme
 
facility layout paper
 facility layout paper facility layout paper
facility layout paper
 
Double-constrained RPCA based on Saliency Maps for Foreground Detection in Au...
Double-constrained RPCA based on Saliency Maps for Foreground Detection in Au...Double-constrained RPCA based on Saliency Maps for Foreground Detection in Au...
Double-constrained RPCA based on Saliency Maps for Foreground Detection in Au...
 
Help the Genetic Algorithm to Minimize the Urban Traffic on Intersections
Help the Genetic Algorithm to Minimize the Urban Traffic on IntersectionsHelp the Genetic Algorithm to Minimize the Urban Traffic on Intersections
Help the Genetic Algorithm to Minimize the Urban Traffic on Intersections
 
Basics of Robotics
Basics of RoboticsBasics of Robotics
Basics of Robotics
 
Lecture 14
Lecture 14Lecture 14
Lecture 14
 
A computationally efficient method to find transformed residue
A computationally efficient method to find transformed residueA computationally efficient method to find transformed residue
A computationally efficient method to find transformed residue
 
(CVPR2021 Oral) RobustNet: Improving Domain Generalization in Urban-Scene Seg...
(CVPR2021 Oral) RobustNet: Improving Domain Generalization in Urban-Scene Seg...(CVPR2021 Oral) RobustNet: Improving Domain Generalization in Urban-Scene Seg...
(CVPR2021 Oral) RobustNet: Improving Domain Generalization in Urban-Scene Seg...
 
Dynamic Path Planning
Dynamic Path PlanningDynamic Path Planning
Dynamic Path Planning
 
Hidden Markov Random Fields and Direct Search Methods for Medical Image Segme...
Hidden Markov Random Fields and Direct Search Methods for Medical Image Segme...Hidden Markov Random Fields and Direct Search Methods for Medical Image Segme...
Hidden Markov Random Fields and Direct Search Methods for Medical Image Segme...
 
A Path Planning Technique For Autonomous Mobile Robot Using Free-Configuratio...
A Path Planning Technique For Autonomous Mobile Robot Using Free-Configuratio...A Path Planning Technique For Autonomous Mobile Robot Using Free-Configuratio...
A Path Planning Technique For Autonomous Mobile Robot Using Free-Configuratio...
 
PhD Thesis Defense Presentation: Robust Low-rank and Sparse Decomposition for...
PhD Thesis Defense Presentation: Robust Low-rank and Sparse Decomposition for...PhD Thesis Defense Presentation: Robust Low-rank and Sparse Decomposition for...
PhD Thesis Defense Presentation: Robust Low-rank and Sparse Decomposition for...
 
R01741124127
R01741124127R01741124127
R01741124127
 
Path Planning And Navigation
Path Planning And NavigationPath Planning And Navigation
Path Planning And Navigation
 

Viewers also liked

AnáLise De Dados
AnáLise De DadosAnáLise De Dados
AnáLise De DadosCadu Gomes
 
παιχνίδια αρχ4
παιχνίδια αρχ4παιχνίδια αρχ4
παιχνίδια αρχ4katinamassyndeei
 
Git - Rápido, seguro, eficiente
Git - Rápido, seguro, eficienteGit - Rápido, seguro, eficiente
Git - Rápido, seguro, eficienteWaldyr Felix
 
Café22: Just Like Music
Café22: Just Like MusicCafé22: Just Like Music
Café22: Just Like MusicRamon Bispo
 
Jugos
JugosJugos
Jugosdgcye
 
ASP.NET MVC, para sua vida melhorar
ASP.NET MVC, para sua vida melhorarASP.NET MVC, para sua vida melhorar
ASP.NET MVC, para sua vida melhorarWaldyr Felix
 
Agenda de la UIMP, Viernes 7 de Agosto de 2009
Agenda de la UIMP, Viernes 7 de Agosto de 2009Agenda de la UIMP, Viernes 7 de Agosto de 2009
Agenda de la UIMP, Viernes 7 de Agosto de 2009guesteb997e
 

Viewers also liked (13)

AnáLise De Dados
AnáLise De DadosAnáLise De Dados
AnáLise De Dados
 
παιχνίδια αρχ4
παιχνίδια αρχ4παιχνίδια αρχ4
παιχνίδια αρχ4
 
Git - Rápido, seguro, eficiente
Git - Rápido, seguro, eficienteGit - Rápido, seguro, eficiente
Git - Rápido, seguro, eficiente
 
Café22: Just Like Music
Café22: Just Like MusicCafé22: Just Like Music
Café22: Just Like Music
 
Amizade
AmizadeAmizade
Amizade
 
Jugos
JugosJugos
Jugos
 
ASP.NET MVC, para sua vida melhorar
ASP.NET MVC, para sua vida melhorarASP.NET MVC, para sua vida melhorar
ASP.NET MVC, para sua vida melhorar
 
E-ained
E-ainedE-ained
E-ained
 
TopGear Korea
TopGear KoreaTopGear Korea
TopGear Korea
 
Portafolio de servicio del sistema de bibliotecas sena
Portafolio de servicio del sistema de bibliotecas senaPortafolio de servicio del sistema de bibliotecas sena
Portafolio de servicio del sistema de bibliotecas sena
 
Mercados verdes
Mercados verdesMercados verdes
Mercados verdes
 
Eugostodevc
EugostodevcEugostodevc
Eugostodevc
 
Agenda de la UIMP, Viernes 7 de Agosto de 2009
Agenda de la UIMP, Viernes 7 de Agosto de 2009Agenda de la UIMP, Viernes 7 de Agosto de 2009
Agenda de la UIMP, Viernes 7 de Agosto de 2009
 

Similar to computervision project

Local Binary Fitting Segmentation by Cooperative Quantum Particle Optimization
Local Binary Fitting Segmentation by Cooperative Quantum Particle OptimizationLocal Binary Fitting Segmentation by Cooperative Quantum Particle Optimization
Local Binary Fitting Segmentation by Cooperative Quantum Particle OptimizationTELKOMNIKA JOURNAL
 
Improved Characters Feature Extraction and Matching Algorithm Based on SIFT
Improved Characters Feature Extraction and Matching Algorithm Based on SIFTImproved Characters Feature Extraction and Matching Algorithm Based on SIFT
Improved Characters Feature Extraction and Matching Algorithm Based on SIFTNooria Sukmaningtyas
 
Survey on Single image Super Resolution Techniques
Survey on Single image Super Resolution TechniquesSurvey on Single image Super Resolution Techniques
Survey on Single image Super Resolution TechniquesIOSR Journals
 
Survey on Single image Super Resolution Techniques
Survey on Single image Super Resolution TechniquesSurvey on Single image Super Resolution Techniques
Survey on Single image Super Resolution TechniquesIOSR Journals
 
Performance Comparison of PCA,DWT-PCA And LWT-PCA for Face Image Retrieval
Performance Comparison of PCA,DWT-PCA And LWT-PCA for Face Image RetrievalPerformance Comparison of PCA,DWT-PCA And LWT-PCA for Face Image Retrieval
Performance Comparison of PCA,DWT-PCA And LWT-PCA for Face Image RetrievalCSEIJJournal
 
Performance Comparison of PCA,DWT-PCA And LWT-PCA for Face Image Retrieval
Performance Comparison of PCA,DWT-PCA And LWT-PCA for Face Image RetrievalPerformance Comparison of PCA,DWT-PCA And LWT-PCA for Face Image Retrieval
Performance Comparison of PCA,DWT-PCA And LWT-PCA for Face Image RetrievalCSEIJJournal
 
Path Loss Prediction by Robust Regression Methods
Path Loss Prediction by Robust Regression MethodsPath Loss Prediction by Robust Regression Methods
Path Loss Prediction by Robust Regression Methodsijceronline
 
Face recognition using laplacianfaces (synopsis)
Face recognition using laplacianfaces (synopsis)Face recognition using laplacianfaces (synopsis)
Face recognition using laplacianfaces (synopsis)Mumbai Academisc
 
research journal
research journalresearch journal
research journalakhila1001
 
Joint Image Registration And Example-Based Super-Resolution Algorithm
Joint Image Registration And Example-Based Super-Resolution AlgorithmJoint Image Registration And Example-Based Super-Resolution Algorithm
Joint Image Registration And Example-Based Super-Resolution Algorithmaciijournal
 
A Robust Method Based On LOVO Functions For Solving Least Squares Problems
A Robust Method Based On LOVO Functions For Solving Least Squares ProblemsA Robust Method Based On LOVO Functions For Solving Least Squares Problems
A Robust Method Based On LOVO Functions For Solving Least Squares ProblemsDawn Cook
 
A ROS IMPLEMENTATION OF THE MONO-SLAM ALGORITHM
A ROS IMPLEMENTATION OF THE MONO-SLAM ALGORITHMA ROS IMPLEMENTATION OF THE MONO-SLAM ALGORITHM
A ROS IMPLEMENTATION OF THE MONO-SLAM ALGORITHMcsandit
 
APPLYING R-SPATIOGRAM IN OBJECT TRACKING FOR OCCLUSION HANDLING
APPLYING R-SPATIOGRAM IN OBJECT TRACKING FOR OCCLUSION HANDLINGAPPLYING R-SPATIOGRAM IN OBJECT TRACKING FOR OCCLUSION HANDLING
APPLYING R-SPATIOGRAM IN OBJECT TRACKING FOR OCCLUSION HANDLINGsipij
 
4 satellite image fusion using fast discrete
4 satellite image fusion using fast discrete4 satellite image fusion using fast discrete
4 satellite image fusion using fast discreteAlok Padole
 
SLAM of Multi-Robot System Considering Its Network Topology
SLAM of Multi-Robot System Considering Its Network TopologySLAM of Multi-Robot System Considering Its Network Topology
SLAM of Multi-Robot System Considering Its Network Topologytoukaigi
 
MODIFIED LLL ALGORITHM WITH SHIFTED START COLUMN FOR COMPLEXITY REDUCTION
MODIFIED LLL ALGORITHM WITH SHIFTED START COLUMN FOR COMPLEXITY REDUCTIONMODIFIED LLL ALGORITHM WITH SHIFTED START COLUMN FOR COMPLEXITY REDUCTION
MODIFIED LLL ALGORITHM WITH SHIFTED START COLUMN FOR COMPLEXITY REDUCTIONijwmn
 
Feature extraction based retrieval of
Feature extraction based retrieval ofFeature extraction based retrieval of
Feature extraction based retrieval ofijcsity
 
Drobics, m. 2001: datamining using synergiesbetween self-organising maps and...
Drobics, m. 2001:  datamining using synergiesbetween self-organising maps and...Drobics, m. 2001:  datamining using synergiesbetween self-organising maps and...
Drobics, m. 2001: datamining using synergiesbetween self-organising maps and...ArchiLab 7
 

Similar to computervision project (20)

Local Binary Fitting Segmentation by Cooperative Quantum Particle Optimization
Local Binary Fitting Segmentation by Cooperative Quantum Particle OptimizationLocal Binary Fitting Segmentation by Cooperative Quantum Particle Optimization
Local Binary Fitting Segmentation by Cooperative Quantum Particle Optimization
 
Improved Characters Feature Extraction and Matching Algorithm Based on SIFT
Improved Characters Feature Extraction and Matching Algorithm Based on SIFTImproved Characters Feature Extraction and Matching Algorithm Based on SIFT
Improved Characters Feature Extraction and Matching Algorithm Based on SIFT
 
Survey on Single image Super Resolution Techniques
Survey on Single image Super Resolution TechniquesSurvey on Single image Super Resolution Techniques
Survey on Single image Super Resolution Techniques
 
Survey on Single image Super Resolution Techniques
Survey on Single image Super Resolution TechniquesSurvey on Single image Super Resolution Techniques
Survey on Single image Super Resolution Techniques
 
Performance Comparison of PCA,DWT-PCA And LWT-PCA for Face Image Retrieval
Performance Comparison of PCA,DWT-PCA And LWT-PCA for Face Image RetrievalPerformance Comparison of PCA,DWT-PCA And LWT-PCA for Face Image Retrieval
Performance Comparison of PCA,DWT-PCA And LWT-PCA for Face Image Retrieval
 
Performance Comparison of PCA,DWT-PCA And LWT-PCA for Face Image Retrieval
Performance Comparison of PCA,DWT-PCA And LWT-PCA for Face Image RetrievalPerformance Comparison of PCA,DWT-PCA And LWT-PCA for Face Image Retrieval
Performance Comparison of PCA,DWT-PCA And LWT-PCA for Face Image Retrieval
 
Path Loss Prediction by Robust Regression Methods
Path Loss Prediction by Robust Regression MethodsPath Loss Prediction by Robust Regression Methods
Path Loss Prediction by Robust Regression Methods
 
Ak03302260233
Ak03302260233Ak03302260233
Ak03302260233
 
Face recognition using laplacianfaces (synopsis)
Face recognition using laplacianfaces (synopsis)Face recognition using laplacianfaces (synopsis)
Face recognition using laplacianfaces (synopsis)
 
research journal
research journalresearch journal
research journal
 
Joint Image Registration And Example-Based Super-Resolution Algorithm
Joint Image Registration And Example-Based Super-Resolution AlgorithmJoint Image Registration And Example-Based Super-Resolution Algorithm
Joint Image Registration And Example-Based Super-Resolution Algorithm
 
A Robust Method Based On LOVO Functions For Solving Least Squares Problems
A Robust Method Based On LOVO Functions For Solving Least Squares ProblemsA Robust Method Based On LOVO Functions For Solving Least Squares Problems
A Robust Method Based On LOVO Functions For Solving Least Squares Problems
 
A ROS IMPLEMENTATION OF THE MONO-SLAM ALGORITHM
A ROS IMPLEMENTATION OF THE MONO-SLAM ALGORITHMA ROS IMPLEMENTATION OF THE MONO-SLAM ALGORITHM
A ROS IMPLEMENTATION OF THE MONO-SLAM ALGORITHM
 
APPLYING R-SPATIOGRAM IN OBJECT TRACKING FOR OCCLUSION HANDLING
APPLYING R-SPATIOGRAM IN OBJECT TRACKING FOR OCCLUSION HANDLINGAPPLYING R-SPATIOGRAM IN OBJECT TRACKING FOR OCCLUSION HANDLING
APPLYING R-SPATIOGRAM IN OBJECT TRACKING FOR OCCLUSION HANDLING
 
4 satellite image fusion using fast discrete
4 satellite image fusion using fast discrete4 satellite image fusion using fast discrete
4 satellite image fusion using fast discrete
 
SLAM of Multi-Robot System Considering Its Network Topology
SLAM of Multi-Robot System Considering Its Network TopologySLAM of Multi-Robot System Considering Its Network Topology
SLAM of Multi-Robot System Considering Its Network Topology
 
MODIFIED LLL ALGORITHM WITH SHIFTED START COLUMN FOR COMPLEXITY REDUCTION
MODIFIED LLL ALGORITHM WITH SHIFTED START COLUMN FOR COMPLEXITY REDUCTIONMODIFIED LLL ALGORITHM WITH SHIFTED START COLUMN FOR COMPLEXITY REDUCTION
MODIFIED LLL ALGORITHM WITH SHIFTED START COLUMN FOR COMPLEXITY REDUCTION
 
reportVPLProject
reportVPLProjectreportVPLProject
reportVPLProject
 
Feature extraction based retrieval of
Feature extraction based retrieval ofFeature extraction based retrieval of
Feature extraction based retrieval of
 
Drobics, m. 2001: datamining using synergiesbetween self-organising maps and...
Drobics, m. 2001:  datamining using synergiesbetween self-organising maps and...Drobics, m. 2001:  datamining using synergiesbetween self-organising maps and...
Drobics, m. 2001: datamining using synergiesbetween self-organising maps and...
 

computervision project

  • 1. 000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050 051 052 053 054 055 056 057 058 059 060 061 062 063 064 065 066 067 068 069 070 071 072 073 074 075 076 077 078 079 080 081 082 083 084 085 086 087 088 089 090 091 092 093 094 095 096 097 098 099 100 101 102 103 104 105 106 107 CVPR #LL CVPR #LL CVPR 2016 Submission #LL. CONFIDENTIAL REVIEW COPY. DO NOT DISTRIBUTE. A Study on Face Alignment Algorithms Anonymous CVPR submission Paper ID LL Abstract This paper studies recent development in face align- ment. We summarize and critique three regression based face alignment algorithms. We also implement the super- vised descent method. Our implementation gives reason- able results compared to the one reported by the author. 1. Introduction Face alignment, also known as facial feature detection, refers to locating p facial landmarks x ∈ Rp (such as eye brows and corners of mouth) in a face image I ∈ Rm of m pixels. Recently, regression based face alignment is showing promising results. Basically, those methods try to find a series of regression matrix Rk , k = 1, 2, · · · that maps the image data to the movement of landmarks ∆xk such the initial landmarks x0 are refined progressively by xk = xk−1 + ∆xk to find the optimal landmarks loca- tions x∗ . In this paper, we review three regression based methods[14][10][13] and implement one of them[14]. 2. Summarize and Critique 2.1. SDM:Supervised descent method 2.1.1 Problem formulation and algorithm Supervised descent method[14] formulates the face align- ment problem as a minimization problem ∆x∗ = argmin ∆x f(∆x) = argmin ∆x h(I(x0 + ∆x)) − φ∗ 2 2 (1) where h(I(x)) is the SIFT[9] feature extraction function that calculates feature values of image I at landmarks x and φ∗ = h(I(x∗ )) denotes the SIFT values at the ground truth landmarks x∗ . One classic method to solve minimization problems is the Newton’s method [3]. Yet Newton’s method requires to compute the inverse of the Hessian of the function, which is not only computational expensive for large data size but also infeasible for non-differentiable functions, such as the SIFT operator here. To overcome the limitation of Newton’s method, in- stead of calculating the descent direction using the Hessian, supervised descent method learns such descent directions from training data. More specifically, the algorithm learns a sequence of mapping matrix Rk that maps the local fea- tures φ at landmark x to the motion of landmarks ∆x from training dataset {Ii} with known landmarks {x∗ i } argmin Rk,bk Ii xi k ∆xki ∗ − Rk φk i − bk 2 (2) and updates the location of landmarks by xk = xk−1 + Rk−1 φk−1 + bk−1 (3) where ∆xki ∗ = x∗ i − xki is the displacement between the ground truth landmarks and the estimated landmarks at the kth stage for image i. 2.1.2 Evaluation and results The algorithm is first validated on simple analytic functions and compared to Newtons method. Then the algorithm was tested on the LFPW datasets[2][12] and evaluated by the distance between the landmarks found by the algorithm and the ground truth landmarks provided. Finally the algorithm was tested for facial feature tracking on video dataset[1][7] where the algorithm tries to detect facial landmarks in each frame. Validation on analytic function shows the algorithm con- verges faster than Newton’s method experimentally. Re- sults on the LFPW dataset show the supervised descent method outperforms two other recently proposed meth- ods and the algorithm is also reported to work well on video data (though no quantitative results are provided). The method can handle images with illumination and pose changes pretty well. Yet for face images with extreme posi- tion change (such as a side face) and sever occlusion, the al- gorithm shows large error as compared to the ground truth. 1
  • 2. 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 CVPR #LL CVPR #LL CVPR 2016 Submission #LL. CONFIDENTIAL REVIEW COPY. DO NOT DISTRIBUTE. 2.2. LBF:Local binary features regression 2.2.1 Problem formulation and algorithm SDM uses the classic SIFT method for feature extraction. LBF[10] proposes to improve the accuracy of face align- ment by learning local features from training data instead. The argument is that such features are more adaptive to specific task. By learning features from a local region the method can take advantage of more discriminative features and save computational effort by learning each feature in- dependently in a local region. The noises in the learned fea- tures are suppressed by using a global regression that maps all learned features together to give the optimal motion of landmarks, To learn for the local features, the method takes a region around the lth landmarks and learns the local features φl in the region by solving argmin Rk l ,φk l Ii πl ◦ ∆xki ∗ − Rk l φk l (Ii, xk i ) 2 2 (4) where πl◦ denotes the operator that gets the l th element of ∆xki ∗ . Comparing to Eq(2) in the supervised descent method, this method jointly learns a local feature φk l and a local mapping Rk l instead of using SIFT to calculate φk for all landmark locations and learning a global mapping Rk directly. The author solves the learning problem in Eq(4) using the standard regression random forest[4]. During testing, each sample will traverse the trees until it reaches on leaf node for each tree. Each dimension in φk l is 1 if the sam- ple reaches the corresponding leaf node and 0 otherwise, resulting in a binary local feature that is highly sparse. To suppress noise in the local feature, the algorithm dis- cards the local mapping function Rk l learns in Eq(4) and instead learns a global mapping Rk by solving the problem that is similar with Eq(2), except that the features used is the learned local binary features rather than SIFT features and a L2 regularization on Rk is needed as the learned local fea- tures have a dimension much higher than SIFT. However, due to the sparse nature of the features, the problem can be solved easily using a dual coordinate descent method[6]. 2.2.2 Evaluation and results The algorithm is evaluated on the LFPW datasets [2][12]the same as the one used in SDM and two other datasets, ”Hellen”[8] and ”300-W”[11] that have richer images and landmarks. The results show that the algorithm achieves comparable accuracy on the first two datasets as compared to other methods including SDM and higher accuracy on the third dataset which includes the first two as well as other challenging images. The paper shows sample results of both SDM and LBF. SDM fails on some images due to large pose variation while LBF detects most landmarks correctly. However, for images with poor illumination condition or unusual expressions (such as a widely opened mouth), both SDM and LBF fail. Also, the speed of different algorithms is compared and the proposed method is the fastest method that runs at 300+ frames per second (FPS). The author further evaluates a faster version with smaller feature size which runs at 3000 FPS while achieving comparable accuracy as compared to other methods. 2.3. HPO:Face alignment under poses and occlusion Both SDM and LBF treat all landmarks equally without considering occlusions and head poses explicitly. To han- dle occlusion, HPO [13] proposes to take the visibility of landmarks into account and learns a model to predict the probability of landmark visibility. Such model is then used to assist the face alignment by adding the visibility infor- mation to local features and shape configuration around a landmark.It also proposes to unify pose variation and oc- clusion by treating invisible landmarks due to pose change as a special case of occlusion. First, the algorithm learns a function from training dataset as well as synthetic data that gives a prior proba- bility of possible occlusion patterns. For example, occlu- sion usually happens in a block manner (Fig 1 a) where an entire block of the face is missing while it is very unlikely to have a face image where every other pixel is occluded (like a checkboard in Fig 1 b ). Fig 1 shows example syn- thetic data that models possible occlusion patterns and an- other less likely block patterns. The probability function Loss(c), where c represents the probability of each pos- sible occlusion patterns to occur in an image of m pixels and it is a vector of length 2m . After learning Loss(c), the probability of landmark visibility is learned by optimizing argmin ∆pk,T k ∆pk − Tk φ(I, xk−1 ) 2 2 + λEpk [Loss(c)] s.t. pk = pk−1 + ∆pk (5) The algorithm solves the problem in Eq(5) iteratively by optimizing the objective function with respect to ∆p and T. Solving for T is a simple least square problem while solving for ∆p is more complicated and gradient descent method is used. Given the predicted landmark visibility probabilities, the algorithm updates the landmark locations by learning the descent direction Rk that maps local image features to movement of landmarks. For occluded landmarks, the re- gion around them is not facial part thus the local appearance around them should be less useful in predicting landmark locations. Therefore, the local features around landmark xk 2
  • 3. 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 CVPR #LL CVPR #LL CVPR 2016 Submission #LL. CONFIDENTIAL REVIEW COPY. DO NOT DISTRIBUTE. (a) Natural Occlusion (b) Checkboard-like Occlusion Figure 1: Example synthetic data shows different occlusion patterns (gray blocks). Small numbers indicate facial land- marks. is weighed by the corresponding visibility probability pk . Mathematically, this modifies Eq(2) to argmin Rk,bk Ii xi k ∆xki ∗ − Rk [pk φk i ] 2 (6) where the operator denotes the product between the visi- bility probability and the feature vector around each of the landmark. 2.3.1 Evaluation and Results The algorithm was evaluated on three datasets, the LFPW dataset[2][12] which is used by both SDM and LBF, the Helen dataset[8] used by LBF and a new dataset that has more occluded faces and larger pose variations[5]. The pro- posed algorithm achieves comparable accuracy in terms of the distance between ground truth and detected landmarks on the first two datasets while outperforming SDM and LBF on the third dataset. In general, the algorithm is able to detect occluded landmarks and handle pose variance better than SDM . However, the algorithm still fails for extreme head poses. Besides, example images of occlusion detec- tion shown in the paper all have a nearly fronted pose, which should reduce the complexity of occlusion detection. These observations suggest that the power of the occlusion model- ing and the idea of viewing side faces as a case of occlusion is still limited. In terms of computational efficiency, the algorithm is slower than the previous two methods. The proposed al- gorithm runs at 2 FPS while the SDM and LBF runs at 160 and 460 FPS respectively on the LFPW data base. 2.4. Discussion and Comparison The three algorithms all aim at solving face alignment problem using regression method and achieve promising re- sults. Table 1 summarizes the quantitative results (mean ab- solute error/FPS) obtained by the 3 algorithms on different datasets, ”NA” indicates no result reported. LFPW Helen 300-W COFW SDM 3.49/160 5.85/21 7.52/70 6.69/NA LBF 3.35/460 5.41/200 6.32/320 NA HPO 3.93/2 5.49/2 NA 5.18/2 Table 1: Summary of quantitative results of different algo- rithms SPM proposes a general frame work to solve face align- ment as an optimization problem without calculating the in- verse of Hessian. The method is general to handle different feature extraction functions as it does not impose any spe- cific requirement such as differentiability. Also, the learn- ing process requires solving a simple least square problem only and appears to have a very fast convergence rate (4 or 5 steps) experimentally. However, the underlying assumption for Rk to be the optimal descent directions is that the entire dataset repre- sents a set of similar functions with similar descent direc- tions. Under this assumption, it is possible to average over the training dataset and learn the descent directions that are optimal for both training and testing data. If this assump- tion does not hold, the algorithm can fail easily. Consider a simple case where we sample two points of a function that have opposite descent direction as training samples, aver- aging over them will learn a 0 direction that leads the opti- mization no where. This should also be the reason that the algorithm fails on faces with extreme poses. Therefore, it is very important to preprocess the dataset to make sure the dataset is homogeneous. Details of such preprocessing will be described in the experimental section. LBF extends the SDM by learning local binary features, which contributes to both computational efficiency and ac- curacy of the algorithm. On the other hand, learning local features also adds on model complexity and requires more parameter tuning, such as the size of the local region for the local feature learning process, which varies at differ- ent training stages. The author proposes to perform cross- validation to determine the optimal model parameters yet it is not clear in the paper how stable the parameters are across different datasets and how robust the method is to different parameters. HPO aims at resolving variation of head poses and oc- clusions, which is a major challenge in face alignment and proposes to view the two problems in a unified way. The results show the method’s efficacy in detecting occluded points and various poses yet the power still seems limited for some extreme cases. 3
  • 4. 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 CVPR #LL CVPR #LL CVPR 2016 Submission #LL. CONFIDENTIAL REVIEW COPY. DO NOT DISTRIBUTE. 3. Implementation of SDM 3.1. Face Detection with OpenCV SDM algorithm requires predefined face location. The face is detected with OpenCV’s Haar featured-based cas- cade classifier. The face detection rate is 82.3% on LFPW dataset. We discarded those images that cannot be detected and ended up with 434 training images and 120 test im- ages. The output from OpenCV’s face detector is a rectan- gle which contains the region of the face. The face rectangle will later be used as a reference to normalize the dataset and initialize the algorithm. 3.2. Calculate Mean Face The face detection information gathered from the previ- ous step is used to center and normalize the face images as well as the labeled landmarks. We normalized the face rectangle to 200 pixels by 200 pixels and take 100 pixels of padding from each side, since some of the labeled land- marks is outside the face rectangle. We then used the nor- malized landmarks position to generate the initial mean face by calculating the mean position value of each landmark from all images in the training dataset. This initial mean face will be used in the initialization step of both learning and predicting stage. 3.3. Learning Stage 3.3.1 Aligned Landmarks with the Face Position The first step of SDM is generating the initial estimation of facial landmarks based on the mean face shape we calcu- lated in previous step and the variance of scale and trans- lation of training face images. In our implementation, we first resized the image based on the face size detected by OpenCVs face detector; therefore initial face shape will be the same across different images. 3.3.2 Extract HoG features from Landmarks The next step is extracting the HoG features from patches around the landmarks with size 32 pixels by 32 pixels. The HoG features is equivalent to the SIFT features of fixed scale and predefined local patch as described in the origi- nal paper. The HoG features Given an image d ∈ m×1 of m pixels and x ∈ p×1 indexes p landmarks in the im- age, we generated a feature vector φ(di (x)) ∈ 128×1 per landmark from the patch. The generated features for each landmarks of the image is than concatenate into a vector φ(di (x)) ∈ 128p×1 , where p is the number of landmarks. For the purpose of matrix calculation in MATLAB, we con- catenate all feature vectors in the training dataset into a ma- trix Φ ∈ n×128p , where n is the number of images in train- ing dataset and each row of the matrix is the transpose of corresponding feature vector of that image. Φ =          φ(d1 (x))T φ(d2 (x))T φ(d3 (x))T ... φ(dn−1 (x))T φ(dn (x))T          (7) 3.3.3 Perform SDM Regression The learning for SDM follows the Eq(2) and Eq(3). The landmarks are predicted by a generic linear combination. The SDM will learn a sequence of generic descent direction Rk and bias term bk. To learn the generic descent direction Rk and bias term bk, we minimized Eq(2), which is a well- known linear least squares problem which can be solved in closed form. To reformulate the equation in a matrix form, we got D = ΦA (8) A = bk Rk (9) D =          ∆x1 ∆x2 ∆x3 ... ∆xn−1 ∆xn          , Φ =          1 φ1 1 φ2 1 φ3 ... ... 1 φn−1 1 φn          (10) where D ∈ n×2p , Φ ∈ n×(128p+1) and A ∈ (128p+1)×2p . The dimension of D is n × 2p is because we concatenate the x and y coordinate of the landmarks into a row vectors, thus the length becomes two times the number of landmarks. The close form solution for Eq(8) is A = (ΦT Φ)−1 ΦT D (11) One important implementation detail to notice is that to perform the learning, the landmarks position need to be normalized to a region of an unit square, i.e.∆ˆxk = ∆xk /s,where s is calculated as the distance between the minimal and maximal coordinates among all landmarks xk . 3.3.4 Recalculate the updated landmarks After we learned the new Rk and bk , we can use these pa- rameters to predict the new landmarks position by comput- ing Eq(3). Since we normalized the landmark position to a region of an unit square, we need to remapping the ∆xk from Eq (1). back to its original dimension. 4
  • 5. 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 CVPR #LL CVPR #LL CVPR 2016 Submission #LL. CONFIDENTIAL REVIEW COPY. DO NOT DISTRIBUTE. 3.3.5 Iterate the Learning Process The new landmark position we got from 3.3.4 are now the new initial land marks, and we can repeat the learning pro- cess from 3.3.2 to 3.3.4 several times, and store the Rk and bk value in each iteration. These values will be used in the predicting stage later. The learning error decreases mono- tonically, and the algorithm converges in less than five iter- ations. 3.4. Predicting Stage The steps in predicting stage is very similar to the learn- ing stage. Instead of learning the R and b parameter in each interation, we used the parameter R and b learned from training data. The mean face we got from training data was used to generate the initial landmark positions in the testing stage. Eq(2) was used to compute the predicted landmark position. Notice that the landmark positions also need to be normalized to a region of an unit square when calculating the ∆x. Algorithm 1 SDM Learning Algorithm 1: function REGRESS(imgs, marks, ans) 2: marks ← normalized(marks) 3: Φ ← Features(imgs, tmpMarks) 4: ∆x ← ans − marks 5: Φ ← AddBias(Φ) 6: R ← (ΦT Φ)−1 ΦT ∆x 7: ∆x ← ΦR 8: marks ← marks + Remap(∆x, faceRects) 9: return R, marks 10: function SDM TRAIN(imgs, marks, ans) 11: for k = 1 : 5 do 12: if First Iteration then 13: faceRects ← FaceDetect(imgs) 14: marks ← MeanFace(faceRects) 15: [R[k], marks] ← Regress(imgs, marks, ans) 16: return R 4. Evaluation and Results 4.1. Dataset In our experiment, we used the LFPW dataset the same as the one in the paper. LFPW consists of 1,432 faces from images download from the web and each image is labeled with 35 fiducial landmarks by three MTurk workers. The 35 fiducial landmarks mark eyebrows (4 landmarks each), eyes (5 landmarks each), nose (4 landmarks), mouse (5 land- marks) and chin (1 landmark). We discarded the 6 land- marks which labeled ears since ears are often covered by Algorithm 2 SDM Predicting Algorithm 1: function REGRESS(imgs, R, marks) 2: marks ← normalized(marks) 3: Φ ← Features(imgs, tmpMarks) 4: Φ ← AddBias(Φ) 5: ∆x ← ΦR 6: marks ← marks + Remap(∆x, faceRects) 7: return marks 8: function SDM PREDICT(imge, R) 9: for cascade = 1 : 5 do 10: if First Iteration then 11: faceRects ← FaceDetect(imgs) 12: marks ← MeanFace(faceRects) 13: marks ← Regress(imgs, R[k], marks) 14: return marks Figure 2: 29 Labeled Landmarks hair in our dataset; therefore, 29 landmarks were used. Fig 2 shows the 29 labeled landmarks on an example image. Due to the copyright issue, the dataset only contains URL to the labeled images. Most of the URLs are no longer valid. We downloaded 527 of the 1132 training images and 146 of the 300 test images. (a) Initial mean face (b) Predicted landmarks Figure 5: a) The initial mean face normalized using face detector. b) The predicted landmarks after five iterations 5
  • 6. 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 CVPR #LL CVPR #LL CVPR 2016 Submission #LL. CONFIDENTIAL REVIEW COPY. DO NOT DISTRIBUTE. Figure 3: This top row shows the initial guess of the face landmarks. The bottom shows the predicted landmarks from SDM algorithm after 5 iterations. Figure 4: Example results on the YouTube Celebrities dataset. 4.2. Results The author reported a mean error (×10−2 ) of 3.47. Our implementation shows a mean error(×10−2 ) of 4.95. Con- sidering that our dataset contains only one half of the im- ages the original authors used, the difference between our result and the original authors’ is acceptable. The smaller dataset might make our learning model less representative and more vulnerable to the face variance, such as the as- pect ratio of the face, the distance between different organs, and the orientation of the face, hence increase the alignment error. Fig 5a shows an example of initial landmarks with mean face normalized using the face detector and Fig 5b shows the predicted landmarks using the SDM algorithm. And the second row of Fig 3 demonstrates that the SDM algorithm’s prediction is robust under different light condi- tion, and a large variation in poses. Fig 4 shows example of tracking result on YouTube Celebrities dataset. The result video can be viewed at https://www.youtube.com/ watch?v=JCIR_BmhGfY. The processing time for the SDM algorithm is around 0.055 seconds per frame. 5. Conclusion In this paper, we summarize and compare three regres- sion based face alignment algorithm. All three method re- ports promising results. The first algorithm proposes a gen- eral framework to learn regressor and the remaining two al- gorithms extend the first one in terms of accuracy and ro- bustness to occlusion. Our implementation of the first algo- rithm reproduces its experiments on both image and video data and achieves comparable result to the one reported. References [1] M. S. Bartlett, G. C. Littlewort, M. G. Frank, C. Lain- scsek, I. R. Fasel, and J. R. Movellan. Automatic recognition of facial actions in spontaneous expres- sions. Journal of multimedia, 1(6):22–35, 2006. 1 [2] P. N. Belhumeur, D. W. Jacobs, D. J. Kriegman, and N. Kumar. Localizing parts of faces using a consen- sus of exemplars. Pattern Analysis and Machine In- telligence, IEEE Transactions on, 35(12):2930–2940, 2013. 1, 2, 3 [3] M. J. Black and A. D. Jepson. Eigentracking: Robust matching and tracking of articulated objects using a view-based representation. International Journal of Computer Vision, 26(1):63–84, 1998. 1 [4] L. Breiman. Random forests. Machine learning, 45(1):5–32, 2001. 2 [5] X. P. Burgos-Artizzu, P. Perona, and P. Doll´ar. Robust face landmark estimation under occlusion. In Com- puter Vision (ICCV), 2013 IEEE International Con- ference on, pages 1513–1520. IEEE, 2013. 3 [6] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. Liblinear: A library for large linear classi- fication. The Journal of Machine Learning Research, 9:1871–1874, 2008. 2 6
  • 7. 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 CVPR #LL CVPR #LL CVPR 2016 Submission #LL. CONFIDENTIAL REVIEW COPY. DO NOT DISTRIBUTE. [7] M. Kim, S. Kumar, V. Pavlovic, and H. Rowley. Face tracking and recognition with visual constraints in real-world videos. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pages 1–8. IEEE, 2008. 1 [8] V. Le, J. Brandt, Z. Lin, L. Bourdev, and T. S. Huang. Interactive facial feature localization. In Computer Vision–ECCV 2012, pages 679–692. Springer, 2012. 2, 3 [9] D. G. Lowe. Object recognition from local scale- invariant features. In Computer vision, 1999. The pro- ceedings of the seventh IEEE international conference on, volume 2, pages 1150–1157. Ieee, 1999. 1 [10] S. Ren, X. Cao, Y. Wei, and J. Sun. Face alignment at 3000 fps via regressing local binary features. In Com- puter Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 1685–1692. IEEE, 2014. 1, 2 [11] C. Sagonas, G. Tzimiropoulos, S. Zafeiriou, and M. Pantic. A semi-automatic methodology for facial landmark annotation. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2013 IEEE Con- ference on, pages 896–903. IEEE, 2013. 2 [12] J. Saragih. Principal regression analysis. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 2881–2888. IEEE, 2011. 1, 2, 3 [13] Y. Wu and Q. Ji. Robust facial landmark detection under significant head poses and occlusion. In Proc. Int. Conf. Comput. Vision. IEEE, volume 1, 2015. 1, 2 [14] X. Xiong and F. De la Torre. Supervised descent method and its applications to face alignment. In Com- puter Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pages 532–539. IEEE, 2013. 1 7