Optik 158 (2018) 590–601
Original research article
Digital makeup from Internet images
Asad Khan (a,*), Muhammad Ahmad (b), Yudong Guo (a), Ligang Liu (a)
(a) Graphics and Geometric Computing Lab, School of Mathematical Sciences, University of Science and Technology of China, Hefei, Anhui 230026, PR China
(b) Machine Learning and Knowledge Representation (MLKr) Lab, Institute of Robotics, Innopolis University, Innopolis, Tatarstan, Russia
ARTICLE INFO
Article history:
Received 6 June 2017
Accepted 14 December 2017
MSC:
I.4.9
Keywords:
Face makeup
Semantic face information
Makeup transfer
Cosmetic-art
Applications
ABSTRACT
We present a novel approach for creating face makeup on a face image using other images as style examples. Our approach is analogous to physical makeup: we modify color and skin details while preserving the face structure. More precisely, we extract image foregrounds from both the subject image and multiple example images. Using image matting algorithms, the system then extracts semantic information such as faces, lips, teeth, eyes and eyebrows from the extracted foregrounds of both the subject and the example images. The makeup style is then transferred between corresponding parts sharing the same semantic information, and the face-makeup result is obtained by seamlessly compositing the different parts together using alpha blending. In the final step, we present an efficient makeup-consistency method to optimize the color of a collection of images. The main advantage of our method over existing techniques is that it does not need face matching and can use more than one example image, since a single example image rarely fulfills all of a user's requirements. Our algorithm is not restricted to head-shot images, as it can also change the makeup style in the wild, and it does not require the subject and example images to share the same pose or image size. The experimental results demonstrate the effectiveness of the proposed method in faithfully transferring makeup.
© 2017 Elsevier GmbH. All rights reserved.
1. Introduction
Improving one's looks has always been of special interest to humans, even though appearances can sometimes be misleading. There are increasingly many commercial facial makeup systems on the market, as makeup makes an individual more attractive and beautiful. Face makeup can be described as a technique for changing an individual's appearance using special cosmetics such as lipstick, foundation, powders and creams. It is commonly used among females to enhance the attractiveness of their natural appearance. When applying makeup physically, foundation and loose powder are commonly used to change the texture of the face's skin. The first step is usually to use foundation to conceal flaws and cover the original skin texture; loose powder is then mainly used to introduce new, usually eye-catching and pleasant skin textures. Other makeup constituents, such as rouge, eye liner and shadow, are applied on top of the powder layer. Nevertheless, color makeup transfer is still a challenging task, as almost all existing techniques suffer from a number of technical and computational disadvantages. Successfully transferring color makeup in digital images has therefore been a contemporary focus of research.
∗ Corresponding author.
E-mail addresses: asad@mail.ustc.edu.cn, asadustc@gmail.com (A. Khan).
https://doi.org/10.1016/j.ijleo.2017.12.047
0030-4026/© 2017 Elsevier GmbH. All rights reserved.
Fig. 1. Digital makeup from Internet images. (a) A subject image, taken by a common user. (b) Example style images, downloaded from the Internet. (c) The result of our approach, where the foundation effect, eye style, eyebrow style, hair style, skin style and lip highlight in (b) are successfully transferred to (a).
Upon entering a beauty salon, a customer usually selects an artistic example image from an available catalog and asks the makeup artist to apply the same makeup to her. It would be more convenient for the user to select more than one example image of her choice from the catalog, as this would allow her to take different parts of the face from different example images. Before the makeup is actually applied, it would be quite helpful if she could preview the same makeup style on her own face. The customer generally has two choices for trying out makeup. One is to try the makeup physically, which is time-consuming and requires the customer to be patient. The other is to try the makeup digitally by means of digital photography, using photo-editing environments such as Adobe Photoshop TM [1]. Besides, an online commercial software, Taaz [3], provides users with virtual makeup on face photos by simulating the effects of specified cosmetics. However, such methods are usually tedious and rely heavily on the personal expertise and effort of the user.
This paper deals with the problem of creating makeup on a face image (Fig. 1(a)) with multiple images (Fig. 1(b)) as style examples. This is very practical in the beauty-salon scenario described above.
Our approach is inspired by the process of physical makeup: we create makeup on a face image using other images as style examples. Transferring face makeup between images poses the following challenges. First, the technique must maintain correspondences between meaningful image regions automatically. Second, the pipeline should be intuitive and friendly for novice users. Third, we need an efficient technique to optimize the makeup consistency of a collection of images depicting a common scene. Automatic trimap generation is another challenge, as almost all existing techniques require the user to input a trimap manually.
We propose an approach that transfers face makeup from Internet images of different types taken by professional artists, taking advantage of high-level semantic information. A user can apply our method to retouch a photo into an image with an exquisite visual style. In addition, we present an approach based on matrix factorization to optimize consistency across multiple images. To achieve this, we use sparse correspondences obtained from multi-image sparse local feature matching.
In summary, this article makes the following contributions:
• a new face-makeup transfer technique that transfers makeup between regions of the subject image and multiple example images sharing the same facial semantic information,
• a new algorithm for automatic trimap generation, enabling efficient synthesis of each facial semantic region,
• a semantic style transfer technique that transfers the makeup and automatically optimizes makeup consistency.
More importantly, our proposed method does not require the user to choose subject and example images with the same pose, matched faces, or the same image size. We demonstrate high-quality makeup-consistency results on large collections of Internet photos.
2. Related work
Not much work on digital makeup transfer has been done so far. The first attempt at makeup transfer was made by Guo et al. [12]. Their method first decomposes the subject and example faces into three layers and then transfers information following the layer-by-layer correspondence. A disadvantage is that it requires warping the example image to the subject face, which is very challenging. The work of Scherbaum et al. [26] uses a 3D morphable face model to conduct facial makeup. The main disadvantage of their technique is that it requires before-and-after makeup face pairs of the same individual, which are difficult to obtain in real applications. Tong et al. [28] propose a "cosmetic transfer" procedure to realistically transfer the cosmetic style captured in an example pair to another person's face. The applicability of their system is likewise limited by its requirement of subject and example face pairs. The automatic makeup recommendation and synthesis system of Liu et al. [19], called Beauty e-Expert, is another work on digital makeup, but its contribution lies mainly in the recommendation module. In summary, our proposed technique enhances applicability, is more flexible to the requirements of digital makeup, and consequently generates more attractive and eye-catching results.
Another method, proposed by Ojima et al. [24], also uses the subject-example concept, but it only addresses the foundation effect. In contrast, Tsumura et al. [30] proposed a physical model to extract hemoglobin and melanin components; they simulated changes in physical facial appearance by adjusting the quantified amounts of hemoglobin and melanin, addressing effects such as tanning and reddening due to cosmetics, aging and alcohol consumption. However, the cosmetic effects they model are quite limited and much simplified compared with real makeup. There is also the online commercial software Taaz [3], which provides users with virtual makeup on facial photos by simulating the effects of particular cosmetics.
Beautification of face photos is also an interesting topic. An automatic face-photo retouching technique was proposed by Brand and Pletscher [4] to detect and remove flaws, moles and acne from faces. Another interesting work, by Leyvand et al. [17], introduced a technique for modifying face structure to enhance attractiveness, with the disadvantage of losing the identity of the main subject. The idea of image processing by example can be found in image analogies [14], which provide a general framework for rendering an image in different styles. This idea was used in the work by Tong et al. [29] and Yang et al. [31].
Recently, deep learning has produced successful fashion-analysis work [20,18]. In a recent work on this topic, Dosovitskiy et al. [8,7] generate images of objects of a given type, viewpoint and color using a generative CNN. The work by Simonyan et al. [27] generates an image that visualizes the notion of a class captured by a network. Mahendran et al. [22] provide a general framework for inverting both hand-crafted and deep representations back to an image. Gatys et al. [9] contribute a neural algorithm of artistic style based on CNNs. Goodfellow et al. [10] presented generative adversarial networks, comprising two components, a generator and a discriminator; the generated images are fairly natural, without obvious artifacts. A simpler and relatively faster method for generating adversarial example images was provided by Goodfellow et al. [11], whose main focus is on improving CNN training rather than image synthesis. The Deep Convolutional Inverse Graphics Network proposed by Kulkarni et al. [16] learns an interpretable representation of images for 3D rendering. A considerable disadvantage of all existing deep methods is that they generate only a single image. In contrast, we focus on how to generate a new image combining the nature of two input images.
3. Digital makeup
3.1. Face database
A bulk of images taken by professional photographers with high-resolution cameras is available on the Internet. Generating an image of exquisite nature from these professional artistic photos with a simple image-processing operation has always been an interesting problem. We therefore need a database in which these images are stored together with their semantic information. We segment every image in the database using simple matting techniques while detecting the key points of the human face for different facial characteristics. Many face-detection algorithms exist for identifying a human face in scenes of varying difficulty. In this article, we utilize the Face++ API [2] for face detection. It uses cutting-edge computer-vision and data-mining technology to provide face detection, face recognition and face analysis. The landmark API we use robustly detects the key points of a human face, locating the facial contour, the facial features and other key points. Our approach detects 83 key points on the face, as depicted in Fig. 3.
Fig. 2. Results with different facial styles, obtained from a single subject image and multiple example images.
Fig. 3. We connect the 83 key points on the face obtained from face detection; the facial semantic information outline can then be traced.
3.2. Face semantics outlines
We now focus on semantic analysis of the human face. By connecting the 83 key points from face detection in a certain order, we obtain the facial semantic information outline. The landmark entry stores the key points of the various parts of the human face, including the eyebrows, eyes, mouth, chin, face contour and so on. Each part consists of several points, each represented by its (x, y) coordinates. Connecting these key points in a certain order yields the contour of each facial part. The face semantic information outline is shown in Fig. 3.
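As a minimal sketch of this step, the snippet below connects ordered landmark points into closed semantic outlines. The shape of the landmarks dictionary and the part names are illustrative assumptions standing in for the actual Face++ landmark response, which groups its 83 points under its own keys.

```python
import numpy as np
import cv2

def semantic_outlines(landmarks, parts=("left_eyebrow", "left_eye", "mouth", "contour")):
    """Connect the ordered key points of each facial part into a closed
    polygon, one outline per semantic region. `landmarks` maps a part
    name to its ordered (x, y) key points, as returned by a landmark API."""
    outlines = {}
    for part in parts:
        pts = np.array(landmarks[part], dtype=np.int32).reshape(-1, 1, 2)
        outlines[part] = pts
    return outlines

def draw_outlines(image, outlines):
    """Overlay the semantic outlines on the image for inspection."""
    canvas = image.copy()
    for pts in outlines.values():
        cv2.polylines(canvas, [pts], isClosed=True, color=(0, 255, 0), thickness=1)
    return canvas
```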
3.3. Matting of face semantics
A commonly used approach to extracting semantic information is the Mean-Shift image segmentation technique [6]. However, it produces unwanted hard boundaries between semantic regions. We instead employ image matting to obtain semitransparent soft boundaries, building our automatic matting on an existing matting technique by taking advantage of our automatically generated trimap. Existing natural matting algorithms often require the user to identify the background, foreground and unknown regions through manual segmentation. Constructing a suitable trimap this way is tedious and time-consuming, and an inaccurate trimap leads to a poor matting result.
To solve the problem mentioned above, we expand the contour of facial semantic information using a morphological expansion (dilation) algorithm. After distinguishing the foreground, background and unknown region by different colors (we set the foreground to white, the background to black, and the unknown region to gray), we obtain a corresponding trimap for the image. The expansion is expressed as:

$I_t(x, y) = \max_{(x', y')\,:\,\mathrm{element}(x', y') \neq 0} I_s(x + x', y + y')$,  (1)

where $I_s$ and $I_t$ are the mask before and after expansion and $\mathrm{element}(\cdot)$ is the structuring element.
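A small sketch of this trimap construction, assuming a binary foreground mask (0/255) for a semantic region is already available from the outlines above; the structuring-element size `band`, which controls the width of the unknown region, is an illustrative choice rather than a value from the paper.

```python
import numpy as np
import cv2

def make_trimap(fg_mask, band=10):
    """Build a trimap from a binary foreground mask (values 0/255):
    dilation grows the region, erosion shrinks it, and the ring in
    between becomes the gray 'unknown' band for the matting step."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (band, band))
    dilated = cv2.dilate(fg_mask, kernel)  # Eq. (1): a max filter
    eroded = cv2.erode(fg_mask, kernel)
    trimap = np.full(fg_mask.shape, 128, dtype=np.uint8)  # unknown = gray
    trimap[eroded == 255] = 255  # certain foreground = white
    trimap[dilated == 0] = 0     # certain background = black
    return trimap
```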
Fig. 4. The trimap is shown in the leftmost image, while the transparent image and the synthetic image are in the middle and on the right, respectively.
The matting image is then computed from our automatically generated trimap using KNN matting (Chen et al. [5]). The transparent image and the synthetic image are shown in Fig. 4. The same automatic matting is also applied to subject images to obtain the basic semantic segmentation.
3.4. Semantics makeup style transfer
In order to transfer the makeup style of the example image to the subject image, we utilize the technique by Nguyen et al. [23] to alter the makeup style of all semantic regions of the input image. So far, this technique is unique in its consideration of scene illumination and in its constraint that the mapped image must lie within the makeup-style gamut of the example image. The luminance mapping is

$L_i = C_e^{-1}(C_s(L_s))$,  (2)
where $L_s$, $L_i$ and $L_e$ are the subject, intermediate and example luminances, respectively, and $C_s$ and $C_e$ are the cumulative histograms of $L_s$ and $L_e$. Next, the output luminance $L_o$ is obtained by solving the following linear equation:

$[I + \lambda(G_x^T G_x + G_y^T G_y)]\, L_o = L_i + \lambda(G_x^T G_x + G_y^T G_y)\, L_s$,  (3)

where $I$ is the identity matrix, $G_x$ and $G_y$ are the gradient matrices along the $x$ and $y$ directions, and $\lambda$ is a regularization parameter.
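The following sketch implements Eqs. (2) and (3) on a single-channel luminance image. The sparse forward-difference operators and the regularization weight `lam` are illustrative choices; the histogram matching follows the standard cumulative-histogram construction that Eq. (2) describes.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def match_luminance(Ls, Le):
    """Eq. (2): push subject luminance through its own cumulative
    histogram Cs, then through the inverse of the example's Ce."""
    s_vals, s_counts = np.unique(Ls.ravel(), return_counts=True)
    e_vals, e_counts = np.unique(Le.ravel(), return_counts=True)
    Cs = np.cumsum(s_counts) / Ls.size
    Ce = np.cumsum(e_counts) / Le.size
    quantiles = np.interp(Ls.ravel(), s_vals, Cs)  # Cs(Ls)
    Li = np.interp(quantiles, Ce, e_vals)          # Ce^{-1}(.)
    return Li.reshape(Ls.shape)

def grad_ops(h, w):
    """Sparse forward-difference operators Gx, Gy on an h-by-w grid."""
    def diff(n):
        return sp.diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1],
                        shape=(n - 1, n))
    Gx = sp.kron(sp.identity(h), diff(w))  # differences along x (columns)
    Gy = sp.kron(diff(h), sp.identity(w))  # differences along y (rows)
    return Gx.tocsr(), Gy.tocsr()

def solve_output_luminance(Ls, Li, lam=1.0):
    """Eq. (3): blend the matched luminance Li with the subject's
    gradient structure by solving a sparse linear system."""
    h, w = Ls.shape
    Gx, Gy = grad_ops(h, w)
    Lap = Gx.T @ Gx + Gy.T @ Gy
    A = (sp.identity(h * w) + lam * Lap).tocsr()
    b = Li.ravel() + lam * (Lap @ Ls.ravel())
    return spsolve(A, b).reshape(h, w)
```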
Once the color has been transferred in the corresponding semantic regions, we use an alpha-blending scheme to composite the whole portrait image. The alpha channel of each pixel is computed with the aforementioned image matting technique. Fig. 5 shows the process of color transfer between the corresponding semantic regions of the subject image and the example image.
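The compositing itself is plain alpha blending with the matte as the per-pixel alpha; a one-function sketch:

```python
import numpy as np

def alpha_blend(region, base, alpha):
    """Composite a recolored semantic region over the base image using
    the matting alpha (float array in [0, 1], one value per pixel)."""
    a = alpha[..., None]  # broadcast the alpha over the color channels
    return (a * region + (1.0 - a) * base).astype(base.dtype)
```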
3.5. Makeup consistency optimization
The last step of our algorithm enforces makeup consistency, an important aspect of makeup style: without it, the resulting image does not look impressive. We apply it once the makeup style has been transferred optimally. To achieve a consistent appearance between the multiple example images and the resulting image, we adopt a correction model that is applied globally to the makeup, for the reasons described in [13]: robustness to alignment errors, ease of regularization, and higher efficiency due to fewer unknown parameters. We solve only the makeup correction model in our algorithm. This simple model is as follows:
$R_s = (a R_f)^{\gamma}$,  (4)

where $R_s$ is the input image after alpha blending, $R_f$ is the required final image, $a$ is a scale factor equivalent to the white-balance function [15], and $(\cdot)^{\gamma}$ is the non-linear gamma mapping. For the $i$-th image,

$R_{f_i}(x_{ij}) = (a_i v_{ij})^{\gamma_i}$,  (5)

where $a_i$ and $\gamma_i$ are the unknown global parameters of the $i$-th image, and the pixel term $v_{ij}$ captures un-modeled color variation due to factors such as dark and light color changes.
Fig. 5. In step 1, we specify the subject and example images; in step 2, we obtain the semantic information after face detection; in step 3, we extract the semantic information using the matting algorithm; in step 4, we separate all parts of the given semantic information; in step 5, we obtain the resulting image using alpha blending; and in the final step, we apply makeup consistency to obtain our result with optimized color. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of the article.)
Taking logarithms on both sides of Eq. (5) and decomposing it in matrix form, by stacking the image intensities of each scene point into sparse column vectors of length $m$ and placing the $n$ columns side by side, we get:

$R = A + V$.  (6)

Here, $R \in \mathbb{R}^{m \times n}$ is the observation matrix, with entries $R_{ij} = \log R_f(x_{ij})$. $A \in \mathbb{R}^{m \times n}$ is the color-coefficient matrix, with $A_{ij} = \gamma_i \log a_i$. The rank of $A$ is 1: since $a_i$ and $\gamma_i$ are global parameters of an image, $A_{i1} = A_{i2} = \cdots = A_{in}$, so each row of $A$ is constant and $A$ factors as a single outer product. Finally, $V \in \mathbb{R}^{m \times n}$ is the residual matrix, with $V_{ij} = \gamma_i \log v_{ij}$.
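As a sketch of this factorization, and assuming the residual V is small, the rank-1 component A can be estimated from the log-domain observations by a truncated SVD; this is a simple stand-in for the robust factorization used in [13], not the authors' actual solver.

```python
import numpy as np

def rank1_color_coefficients(Rf):
    """Estimate the rank-1 color-coefficient matrix A in R = A + V,
    given an m-by-n matrix Rf of positive intensities at corresponding
    scene points across n images (Eq. (6) works in the log domain)."""
    R = np.log(Rf)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    A = s[0] * np.outer(U[:, 0], Vt[0])  # best rank-1 approximation of R
    V = R - A                            # residual matrix
    return A, V
```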
4. Results and discussion
A significant feature of our method is that one can use more than one example image: since a user may like some parts of the face in one image and other parts in another, our technique allows different example images to be combined, which gives more natural results with better visual effects. The semantic analysis of the subject image with our interactive tool takes about 2 s, and the makeup style transfer step takes about 1 s with Matlab 2014a on a PC with an Intel(R) Core(TM) i5-4690 CPU at 3.50 GHz and 8 GB RAM under Windows. Fig. 5 exhibits the pipeline of our technique, explaining the different steps in sequence. Our method comprises simple, user-friendly steps and produces eye-catching results with good visual quality. Once we specify the subject image and a number of example images in the first step, our algorithm automatically obtains the semantic information after face detection. In the next step, it extracts the semantic information using the matting algorithm. In the second-to-last step, we obtain the resulting image using alpha blending, and in the final step, we obtain the final result with optimized makeup after applying makeup consistency.
Furthermore, our main focus is the inside of the eye rather than the eye shadow, which produces more natural results. As it is now common to use lenses to change the color inside the eyes, our method focuses on transferring eye style inside the eyes and on the hair while effectively preserving boundaries. It is also worth noting that existing techniques do not employ makeup consistency and use a single example image. We focus on using multiple example images and employ makeup consistency, which significantly enhances the practicality of our method. Consequently, we obtain more natural, contemporary results compared with existing techniques, which focus only on lips and eye shadows without proper boundary preservation.
The qualitative results are shown in Fig. 6. The first column shows the subject images, and the second column shows the corresponding example images. The results by Guo et al. [12], Gatys et al. [9], Liu et al. [21] and ours are listed in the third, fourth, fifth and last columns, respectively. We first discuss the comparison with Guo et al. [12]. Although their results look natural, there are a number of gaps to be filled. For example, Guo et al. [12] always transfer much lighter makeup than the example face: the lip gloss in the example is dark red, whereas the lip gloss by Guo et al. [12] is orange-red. Our method produces more natural results, where the lip gloss is very close to the dark red of the example image. Moreover, Guo et al. [12] can only produce very light eye shadow, even though eye shadow is their focus. Compared with Gatys et al. [9], our subject faces contain far fewer artifacts. This is because our makeup transfer is conducted between local regions, such as lip versus lip gloss, whereas Gatys et al. [9] transfer the makeup globally.
Fig. 6. Qualitative comparisons between the state-of-the-arts and ours.
Fig. 7. Comparison with the results of Nguyen et al. [23] and Reinhard et al. [25].
Fig. 8. Results on side-pose images; the example images need not share the same pose or style and are not tied to particular results. The texture in the background of the first-row result is preserved, as is the background of the second-row result, which comprises a combination of colors. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of the article.)
Global makeup transfer suffers from the mismatch problem between the image pair, which clearly shows the disadvantage of the method by Gatys et al. [9].
Liu et al. [21] focus on eye shadows and lips, leaving the skin color unaffected. Moreover, the eye shadows of the two eyes are not the same in their resulting images, which is clearly a mismatch and a disadvantage, and the boundaries of the lips and eyes are not preserved. Our method handles the inside of the eyes, the lips, the skin and the hair while keeping boundaries well preserved, producing more attractive results of exquisite nature.
In Fig. 7, we compare our method with the techniques of Reinhard et al. [25] and Nguyen et al. [23], given their subject and example images. We compare against these color-style techniques because our makeup transfer is itself built on color-style transfer, so the comparison shows the efficiency of our result. In the result by Reinhard et al. [25], the color is clearly not transferred properly, yielding a blurred and imbalanced image.
Fig. 9. Drawbacks of our method: the image in the first column shows a failure due to a lack of semantic constraints, and the image in the second column shows a problem caused by unclear face information.
This is because the color transfer algorithm of [25] matches only the global mean and variance, so the spatial color distribution of the example image is averaged into each pixel of each channel of the subject image. In contrast, our method transfers the color style between corresponding semantic regions, so we retain the spatial distribution of the example image and keep strong visual coherence. Their method cannot control color locally, and boundaries are not preserved either. In the result by Nguyen et al. [23], the color is transferred but not properly balanced, resulting in darkness on the right side of the cheek; moreover, semantic color styles, e.g., for the lips, eyes and hair, are not transferred. In our result, the color is transferred properly with efficient boundary preservation, and the background color is also transferred efficiently.
Our method thus clearly fills the gaps that limited existing techniques.
Fig. 10. A result with the subject image in the leftmost column and the resulting images in the second, third and fourth columns, with their respective example images.
4.1. More results on makeup transfer
Fig. 1 presents an application of our method to a subject image with a number of example images. We consider four example images for transferring the makeup style to the subject image, and the resulting image shows the outcome of our technique. Earlier work on this topic did not use multiple example images; we make use of multiple example images, which enhances the efficiency of this work. Fig. 2 presents a number of makeup-transfer results with different facial styles: a single subject image is considered, and the variety of results obtained with four example images demonstrates the diversity of our technique.
In Fig. 8, we apply our algorithm to images with side poses. Our technique is also suitable for side-pose images and produces results of the same quality as for other images. Moreover, the texture in the background of the first-row result is well preserved, and the background of the second-row result, a combination of
Fig. 11. Multi-pronged application of our method.
different colors, is also preserved efficiently. The main purpose of this experiment is to show that our method extends beyond frontal images to side-pose images; it works for these image types as well and produces results of the same quality.
Fig. 10 presents further results of our technique. As our method is not restricted to a single example image, the results are given with multiple example images. Our technique transfers the makeup of different parts of the face, such as the hair, lips and eyes, to the subject image by taking semantic information of a similar nature from multiple example images, transforming an image of plain character into an artistic, exquisite image in just a few steps. The results in Fig. 10 show the efficiency of our method, as they compare favorably with those of existing techniques.
Fig. 11 shows more results of our proposed method using multiple example images. They all show makeup-transferred results that reflect the example colors in the subject images effectively, and makeup preservation in the resulting images is handled successfully. We consider images with different poses and show the effective applicability of our method to these image types: our method successfully transfers makeup to subject images of different styles and types, e.g., side poses, back poses and other poses that are normally difficult to handle. One can choose a suitable combination of makeups that appears most attractive for different poses and styles.
Limitation: The limitations of our technique are exhibited in Fig. 9. In the left image, the face skin color matches the background color, so our algorithm does not detect the face and is thus unable to extract the semantic information. In the right image, the face information is unclear, which likewise leads to a failure: the algorithm cannot detect the face and hence fails to extract the facial semantic information.
5. Conclusion
We have presented a system for stylizing the possible appearances of a person from one or more photographs. We propose a new semantic makeup transfer technique between images to efficiently change the makeup style of an image. In just a few steps, our framework can transform a common image of plain character into an exquisite, artistic photo. The user is not required to select subject and example images with matching faces, as our method can make use of multiple example images. The broad application area of our technique includes high-level facial semantic information and scene-content analysis for transferring makeup between images efficiently. Moreover, our technique is not restricted to head-shot images, as it can also change the makeup style in the wild. The advantage of using multiple example images is that a favorite makeup style can be chosen from different images rather than from a single one. Results are presented in different styles and conditions, showing the versatility and diversity of our method at industrial scale. By minimizing manual labor and avoiding the time-consuming operation of semantic segmentation for the example image, our framework can be broadly used in film post-production, video editing, art and design, and image processing.
Acknowledgements
This work is supported by the National Natural Science Foundation of China (61672482, 11626253) and the One Hundred
Talent Project of the Chinese Academy of Sciences.
References
[1] Adobe photoshop (2016). http://www.adobe.com/products/photoshop.html.
[2] Face++ (2016). https://www.faceplusplus.com/.
[3] Taaz (2016). http://www.taaz.com/.
[4] M. Brand, P. Pletscher, A conditional random field for automatic photo editing, in: Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE
Conference on, IEEE, 2008, pp. 1–7.
[5] Q. Chen, D. Li, C.-K. Tang, KNN matting, IEEE Trans. Pattern Anal. Mach. Intell. 35 (9) (2013) 2175–2188.
[6] D. Comaniciu, P. Meer, Mean shift: a robust approach toward feature space analysis, IEEE Trans. Pattern Anal. Mach. Intell. 24 (5) (2002).
[7] A. Dosovitskiy, T. Brox, Inverting convolutional networks with convolutional networks, CoRR abs/1506.02753 (2015).
[8] A. Dosovitskiy, J. Tobias Springenberg, T. Brox, Learning to generate chairs with convolutional neural networks, Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition (2015) 1538–1546.
[9] L.A. Gatys, A.S. Ecker, M. Bethge, A Neural Algorithm of Artistic Style, 2015 arXiv preprint arXiv:1508.06576.
[10] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, Generative adversarial nets, Advances in Neural
Information Processing Systems (2014) 2672–2680.
[11] I.J. Goodfellow, J. Shlens, C. Szegedy, Explaining and Harnessing Adversarial Examples, 2014 arXiv preprint arXiv:1412.6572.
[12] D. Guo, T. Sim, Digital face makeup by example, in: Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, IEEE, 2009, pp.
73–79.
[13] Y. HaCohen, E. Shechtman, D.B. Goldman, D. Lischinski, Optimizing color consistency in photo collections, ACM Trans. Graph. (TOG) 32 (4) (2013) 38.
[14] A. Hertzmann, C.E. Jacobs, N. Oliver, B. Curless, D.H. Salesin, Image analogies, in: Proceedings of the 28th Annual Conference on Computer Graphics
and Interactive Techniques, ACM, 2001, pp. 327–340.
[15] E. Hsu, T. Mertens, S. Paris, S. Avidan, F. Durand, Light mixture estimation for spatially varying white balance, in: ACM Transactions on Graphics
(TOG), vol. 27, ACM, 2008, p. 70.
[16] T.D. Kulkarni, W.F. Whitney, P. Kohli, J. Tenenbaum, Deep convolutional inverse graphics network, Advances in Neural Information Processing
Systems (2015) 2539–2547.
[17] T. Leyvand, D. Cohen-Or, G. Dror, D. Lischinski, Data-driven enhancement of facial attractiveness, in: ACM Transactions on Graphics (TOG), vol. 27,
ACM, 2008, p. 38.
[18] X. Liang, S. Liu, X. Shen, J. Yang, L. Liu, J. Dong, L. Lin, S. Yan, Deep human parsing with active template regression, IEEE Trans. Pattern Anal. Mach.
Intell. 37 (12) (2015) 2402–2414.
[19] L. Liu, J. Xing, S. Liu, H. Xu, X. Zhou, S. Yan, Wow! You are so beautiful today!, ACM Trans. Multimedia Comput. Commun. Appl. (TOMM) 11 (1s)
(2014) 20.
[20] S. Liu, X. Liang, L. Liu, X. Shen, J. Yang, C. Xu, L. Lin, X. Cao, S. Yan, Matching-CNN meets KNN: Quasi-parametric human parsing, Proceedings of the
IEEE Conference on Computer Vision and Pattern Recognition (2015) 1419–1427.
[21] S. Liu, X. Ou, R. Qian, W. Wang, X. Cao, Makeup Like a Superstar: Deep Localized Makeup Transfer Network, 2016 arXiv preprint arXiv:1604.07102.
[22] A. Mahendran, A. Vedaldi, Understanding deep image representations by inverting them, Proceedings of the IEEE Conference on Computer Vision
and Pattern Recognition (2015) 5188–5196.
[23] R. Nguyen, S. Kim, M. Brown, Illuminant aware gamut-based color transfer, Computer Graphics Forum, vol. 33, Wiley Online Library, 2014, pp. 319–328.
[24] N. Ojima, K. Yoshida, O. Osanai, S. Akasaki, Image synthesis of cosmetic applied skin based on optical properties of foundation layers, Proceedings of
International Congress of Imaging Science (1999) 467–468.
[25] E. Reinhard, M. Adhikhmin, B. Gooch, P. Shirley, Color transfer between images, IEEE Comput. Graph. Appl. 21 (5) (2001) 34–41.
[26] K. Scherbaum, T. Ritschel, M. Hullin, T. Thormählen, V. Blanz, H.-P. Seidel, Computer-suggested facial makeup, Computer Graphics Forum, vol. 30, Wiley Online Library, 2011, pp. 485–492.
[27] K. Simonyan, A. Vedaldi, A. Zisserman, Deep inside convolutional networks: visualising image classification models and saliency maps, 2013 arXiv preprint arXiv:1312.6034.
[28] W.-S. Tong, C.-K. Tang, M.S. Brown, Y.-Q. Xu, Example-based cosmetic transfer, in: Computer Graphics and Applications, 2007. PG’07. 15th Pacific
Conference on, IEEE, 2007, pp. 211–218.
[29] W.-S. Tong, C.-K. Tang, M.S. Brown, Y.-Q. Xu, Example-based cosmetic transfer, in: Computer Graphics and Applications, 2007. PG’07. 15th Pacific
Conference on, IEEE, 2007, pp. 211–218.
[30] N. Tsumura, N. Ojima, K. Sato, M. Shiraishi, H. Shimizu, H. Nabeshima, S. Akazaki, K. Hori, Y. Miyake, Image-based skin color and texture
analysis/synthesis by extracting hemoglobin and melanin information in the skin, ACM Trans. Graph. (TOG) 22 (3) (2003) 770–779.
[31] Y. Yang, H. Zhao, L. You, R. Tu, X. Wu, X. Jin, Semantic portrait color transfer with Internet images, Multimedia Tools Appl. 76 (1) (2017) 523–541.
View publication stats
View publication stats

More Related Content

Similar to 28 of the Best Makeup Brands We'll Never Grow Tired Of the 💅👄💋😍

IRJET- Automatic Suggestion of Outfits using Image Processing
IRJET- Automatic Suggestion of Outfits using Image ProcessingIRJET- Automatic Suggestion of Outfits using Image Processing
IRJET- Automatic Suggestion of Outfits using Image ProcessingIRJET Journal
 
IRJET - Facial Recognition based Attendance Management System
IRJET - Facial Recognition based Attendance Management SystemIRJET - Facial Recognition based Attendance Management System
IRJET - Facial Recognition based Attendance Management SystemIRJET Journal
 
IRJET - Facial In-Painting using Deep Learning in Machine Learning
IRJET -  	  Facial In-Painting using Deep Learning in Machine LearningIRJET -  	  Facial In-Painting using Deep Learning in Machine Learning
IRJET - Facial In-Painting using Deep Learning in Machine LearningIRJET Journal
 
Face recogntion using PCA algorithm
Face recogntion using PCA algorithmFace recogntion using PCA algorithm
Face recogntion using PCA algorithmAshwini Awatare
 
IRJET - Face Recognition based Attendance System: Review
IRJET -  	  Face Recognition based Attendance System: ReviewIRJET -  	  Face Recognition based Attendance System: Review
IRJET - Face Recognition based Attendance System: ReviewIRJET Journal
 
A Hybrid Approach to Recognize Facial Image using Feature Extraction Method
A Hybrid Approach to Recognize Facial Image using Feature Extraction MethodA Hybrid Approach to Recognize Facial Image using Feature Extraction Method
A Hybrid Approach to Recognize Facial Image using Feature Extraction MethodIRJET Journal
 
Neural Net: Machine Learning Web Application
Neural Net: Machine Learning Web ApplicationNeural Net: Machine Learning Web Application
Neural Net: Machine Learning Web ApplicationIRJET Journal
 
Facial Image Restoration Using GAN Deep Learning Model
Facial Image Restoration Using GAN Deep Learning ModelFacial Image Restoration Using GAN Deep Learning Model
Facial Image Restoration Using GAN Deep Learning ModelIRJET Journal
 
HANDWRITTEN DIGIT RECOGNITION USING MACHINE LEARNING
HANDWRITTEN DIGIT RECOGNITION USING MACHINE LEARNINGHANDWRITTEN DIGIT RECOGNITION USING MACHINE LEARNING
HANDWRITTEN DIGIT RECOGNITION USING MACHINE LEARNINGIRJET Journal
 
HANDWRITTEN DIGIT RECOGNITION USING MACHINE LEARNING
HANDWRITTEN DIGIT RECOGNITION USING MACHINE LEARNINGHANDWRITTEN DIGIT RECOGNITION USING MACHINE LEARNING
HANDWRITTEN DIGIT RECOGNITION USING MACHINE LEARNINGIRJET Journal
 
TAG ME: An Accurate Name Tagging System for Web Facial Images using Search-Ba...
TAG ME: An Accurate Name Tagging System for Web Facial Images using Search-Ba...TAG ME: An Accurate Name Tagging System for Web Facial Images using Search-Ba...
TAG ME: An Accurate Name Tagging System for Web Facial Images using Search-Ba...rahulmonikasharma
 
Project Report on High Speed Face Recognitions using DCT, RBF nueral network
Project Report on High Speed Face Recognitions using DCT, RBF nueral networkProject Report on High Speed Face Recognitions using DCT, RBF nueral network
Project Report on High Speed Face Recognitions using DCT, RBF nueral networkSagar Rai
 
IRJET- An Improvised Multi Focus Image Fusion Algorithm through Quadtree
IRJET- An Improvised Multi Focus Image Fusion Algorithm through QuadtreeIRJET- An Improvised Multi Focus Image Fusion Algorithm through Quadtree
IRJET- An Improvised Multi Focus Image Fusion Algorithm through QuadtreeIRJET Journal
 
Mining Weakly Labeled Web Facial Images for Search-Based Face Annotation
Mining Weakly Labeled Web Facial Images for Search-Based Face AnnotationMining Weakly Labeled Web Facial Images for Search-Based Face Annotation
Mining Weakly Labeled Web Facial Images for Search-Based Face Annotationjohn236zaq
 
FACE EXPRESSION IDENTIFICATION USING IMAGE FEATURE CLUSTRING AND QUERY SCHEME...
FACE EXPRESSION IDENTIFICATION USING IMAGE FEATURE CLUSTRING AND QUERY SCHEME...FACE EXPRESSION IDENTIFICATION USING IMAGE FEATURE CLUSTRING AND QUERY SCHEME...
FACE EXPRESSION IDENTIFICATION USING IMAGE FEATURE CLUSTRING AND QUERY SCHEME...Editor IJMTER
 
ALGORITHMIC THINKING-A PARAMETRIC APPROACH TO PROBLEM SOLVING.
ALGORITHMIC THINKING-A PARAMETRIC APPROACH TO PROBLEM SOLVING.ALGORITHMIC THINKING-A PARAMETRIC APPROACH TO PROBLEM SOLVING.
ALGORITHMIC THINKING-A PARAMETRIC APPROACH TO PROBLEM SOLVING.Kate Campbell
 
Analysis of student sentiment during video class with multi-layer deep learni...
Analysis of student sentiment during video class with multi-layer deep learni...Analysis of student sentiment during video class with multi-layer deep learni...
Analysis of student sentiment during video class with multi-layer deep learni...IJECEIAES
 
Improved Approach for Eigenface Recognition
Improved Approach for Eigenface RecognitionImproved Approach for Eigenface Recognition
Improved Approach for Eigenface RecognitionBRNSSPublicationHubI
 
Medical waste segregation using deep learning
Medical waste segregation using deep learningMedical waste segregation using deep learning
Medical waste segregation using deep learningSairam Adithya
 
IRJET- Face Counter using Matlab
IRJET-  	  Face Counter using MatlabIRJET-  	  Face Counter using Matlab
IRJET- Face Counter using MatlabIRJET Journal
 

Similar to 28 of the Best Makeup Brands We'll Never Grow Tired Of the 💅👄💋😍 (20)

IRJET- Automatic Suggestion of Outfits using Image Processing
IRJET- Automatic Suggestion of Outfits using Image ProcessingIRJET- Automatic Suggestion of Outfits using Image Processing
IRJET- Automatic Suggestion of Outfits using Image Processing
 
IRJET - Facial Recognition based Attendance Management System
IRJET - Facial Recognition based Attendance Management SystemIRJET - Facial Recognition based Attendance Management System
IRJET - Facial Recognition based Attendance Management System
 
IRJET - Facial In-Painting using Deep Learning in Machine Learning
IRJET -  	  Facial In-Painting using Deep Learning in Machine LearningIRJET -  	  Facial In-Painting using Deep Learning in Machine Learning
IRJET - Facial In-Painting using Deep Learning in Machine Learning
 
Face recogntion using PCA algorithm
Face recogntion using PCA algorithmFace recogntion using PCA algorithm
Face recogntion using PCA algorithm
 
IRJET - Face Recognition based Attendance System: Review
IRJET -  	  Face Recognition based Attendance System: ReviewIRJET -  	  Face Recognition based Attendance System: Review
IRJET - Face Recognition based Attendance System: Review
 
A Hybrid Approach to Recognize Facial Image using Feature Extraction Method
A Hybrid Approach to Recognize Facial Image using Feature Extraction MethodA Hybrid Approach to Recognize Facial Image using Feature Extraction Method
A Hybrid Approach to Recognize Facial Image using Feature Extraction Method
 
Neural Net: Machine Learning Web Application
Neural Net: Machine Learning Web ApplicationNeural Net: Machine Learning Web Application
Neural Net: Machine Learning Web Application
 
Facial Image Restoration Using GAN Deep Learning Model
Facial Image Restoration Using GAN Deep Learning ModelFacial Image Restoration Using GAN Deep Learning Model
Facial Image Restoration Using GAN Deep Learning Model
 
HANDWRITTEN DIGIT RECOGNITION USING MACHINE LEARNING
HANDWRITTEN DIGIT RECOGNITION USING MACHINE LEARNINGHANDWRITTEN DIGIT RECOGNITION USING MACHINE LEARNING
HANDWRITTEN DIGIT RECOGNITION USING MACHINE LEARNING
 
HANDWRITTEN DIGIT RECOGNITION USING MACHINE LEARNING
HANDWRITTEN DIGIT RECOGNITION USING MACHINE LEARNINGHANDWRITTEN DIGIT RECOGNITION USING MACHINE LEARNING
HANDWRITTEN DIGIT RECOGNITION USING MACHINE LEARNING
 
TAG ME: An Accurate Name Tagging System for Web Facial Images using Search-Ba...
TAG ME: An Accurate Name Tagging System for Web Facial Images using Search-Ba...TAG ME: An Accurate Name Tagging System for Web Facial Images using Search-Ba...
TAG ME: An Accurate Name Tagging System for Web Facial Images using Search-Ba...
 
Project Report on High Speed Face Recognitions using DCT, RBF nueral network
Project Report on High Speed Face Recognitions using DCT, RBF nueral networkProject Report on High Speed Face Recognitions using DCT, RBF nueral network
Project Report on High Speed Face Recognitions using DCT, RBF nueral network
 
IRJET- An Improvised Multi Focus Image Fusion Algorithm through Quadtree
IRJET- An Improvised Multi Focus Image Fusion Algorithm through QuadtreeIRJET- An Improvised Multi Focus Image Fusion Algorithm through Quadtree
IRJET- An Improvised Multi Focus Image Fusion Algorithm through Quadtree
 
Mining Weakly Labeled Web Facial Images for Search-Based Face Annotation
Mining Weakly Labeled Web Facial Images for Search-Based Face AnnotationMining Weakly Labeled Web Facial Images for Search-Based Face Annotation
Mining Weakly Labeled Web Facial Images for Search-Based Face Annotation
 
FACE EXPRESSION IDENTIFICATION USING IMAGE FEATURE CLUSTRING AND QUERY SCHEME...
FACE EXPRESSION IDENTIFICATION USING IMAGE FEATURE CLUSTRING AND QUERY SCHEME...FACE EXPRESSION IDENTIFICATION USING IMAGE FEATURE CLUSTRING AND QUERY SCHEME...
FACE EXPRESSION IDENTIFICATION USING IMAGE FEATURE CLUSTRING AND QUERY SCHEME...
 
ALGORITHMIC THINKING-A PARAMETRIC APPROACH TO PROBLEM SOLVING.
ALGORITHMIC THINKING-A PARAMETRIC APPROACH TO PROBLEM SOLVING.ALGORITHMIC THINKING-A PARAMETRIC APPROACH TO PROBLEM SOLVING.
ALGORITHMIC THINKING-A PARAMETRIC APPROACH TO PROBLEM SOLVING.
 
Analysis of student sentiment during video class with multi-layer deep learni...
Analysis of student sentiment during video class with multi-layer deep learni...Analysis of student sentiment during video class with multi-layer deep learni...
Analysis of student sentiment during video class with multi-layer deep learni...
 
Improved Approach for Eigenface Recognition
Improved Approach for Eigenface RecognitionImproved Approach for Eigenface Recognition
Improved Approach for Eigenface Recognition
 
Medical waste segregation using deep learning
Medical waste segregation using deep learningMedical waste segregation using deep learning
Medical waste segregation using deep learning
 
IRJET- Face Counter using Matlab
IRJET-  	  Face Counter using MatlabIRJET-  	  Face Counter using Matlab
IRJET- Face Counter using Matlab
 

Recently uploaded

Rudraprayag call girls 📞 8617697112 At Low Cost Cash Payment Booking
Rudraprayag call girls 📞 8617697112 At Low Cost Cash Payment BookingRudraprayag call girls 📞 8617697112 At Low Cost Cash Payment Booking
Rudraprayag call girls 📞 8617697112 At Low Cost Cash Payment BookingNitya salvi
 
Call Girls Udaipur Just Call 9602870969 Top Class Call Girl Service Available
Call Girls Udaipur Just Call 9602870969 Top Class Call Girl Service AvailableCall Girls Udaipur Just Call 9602870969 Top Class Call Girl Service Available
Call Girls Udaipur Just Call 9602870969 Top Class Call Girl Service AvailableApsara Of India
 
Call Girl In Zirakpur👧 Book Now📱7837612180 📞👉Zirakpur Call Girls Service No A...
Call Girl In Zirakpur👧 Book Now📱7837612180 📞👉Zirakpur Call Girls Service No A...Call Girl In Zirakpur👧 Book Now📱7837612180 📞👉Zirakpur Call Girls Service No A...
Call Girl In Zirakpur👧 Book Now📱7837612180 📞👉Zirakpur Call Girls Service No A...Sheetaleventcompany
 
Night 7k to 12k Chennai Call Girls 👉👉 8617697112⭐⭐ 100% Genuine Escort Servic...
Night 7k to 12k Chennai Call Girls 👉👉 8617697112⭐⭐ 100% Genuine Escort Servic...Night 7k to 12k Chennai Call Girls 👉👉 8617697112⭐⭐ 100% Genuine Escort Servic...
Night 7k to 12k Chennai Call Girls 👉👉 8617697112⭐⭐ 100% Genuine Escort Servic...Nitya salvi
 
Chandigarh Escorts Service 📞9915851334📞 Just📲 Call Rajveer Chandigarh Call Gi...
Chandigarh Escorts Service 📞9915851334📞 Just📲 Call Rajveer Chandigarh Call Gi...Chandigarh Escorts Service 📞9915851334📞 Just📲 Call Rajveer Chandigarh Call Gi...
Chandigarh Escorts Service 📞9915851334📞 Just📲 Call Rajveer Chandigarh Call Gi...rajveermohali2022
 
Chandigarh Escort Service 📞9878281761📞 Just📲 Call Navi Chandigarh Call Girls ...
Chandigarh Escort Service 📞9878281761📞 Just📲 Call Navi Chandigarh Call Girls ...Chandigarh Escort Service 📞9878281761📞 Just📲 Call Navi Chandigarh Call Girls ...
Chandigarh Escort Service 📞9878281761📞 Just📲 Call Navi Chandigarh Call Girls ...Sheetaleventcompany
 
All Hotel Karnal Call Girls 08168329307 Noor Mahal Karnal Escort Service
All Hotel Karnal Call Girls 08168329307 Noor Mahal Karnal Escort ServiceAll Hotel Karnal Call Girls 08168329307 Noor Mahal Karnal Escort Service
All Hotel Karnal Call Girls 08168329307 Noor Mahal Karnal Escort ServiceApsara Of India
 
9892124323 Pooja Nehwal - Book Local Housewife call girls in Nalasopara at Ch...
9892124323 Pooja Nehwal - Book Local Housewife call girls in Nalasopara at Ch...9892124323 Pooja Nehwal - Book Local Housewife call girls in Nalasopara at Ch...
9892124323 Pooja Nehwal - Book Local Housewife call girls in Nalasopara at Ch...Pooja Nehwal
 
VIP 💞🌷Call Girls In Karnal 08168329307 Escorts Service Nilokheri Call Girls
VIP 💞🌷Call Girls In Karnal 08168329307 Escorts Service Nilokheri Call GirlsVIP 💞🌷Call Girls In Karnal 08168329307 Escorts Service Nilokheri Call Girls
VIP 💞🌷Call Girls In Karnal 08168329307 Escorts Service Nilokheri Call GirlsApsara Of India
 
Call Girls in Bangalore Nisha 💋9136956627 Bangalore Call Girls
Call Girls in Bangalore Nisha 💋9136956627 Bangalore Call GirlsCall Girls in Bangalore Nisha 💋9136956627 Bangalore Call Girls
Call Girls in Bangalore Nisha 💋9136956627 Bangalore Call GirlsPinki Misra
 
Call Girls in Bangalore Lavya 💋9136956627 Bangalore Call Girls
Call Girls in Bangalore Lavya 💋9136956627 Bangalore Call GirlsCall Girls in Bangalore Lavya 💋9136956627 Bangalore Call Girls
Call Girls in Bangalore Lavya 💋9136956627 Bangalore Call GirlsPinki Misra
 
Bangalore Escorts 📞9955608600📞 Just📲 Call Rajveer Call Girls Service In Benga...
Bangalore Escorts 📞9955608600📞 Just📲 Call Rajveer Call Girls Service In Benga...Bangalore Escorts 📞9955608600📞 Just📲 Call Rajveer Call Girls Service In Benga...
Bangalore Escorts 📞9955608600📞 Just📲 Call Rajveer Call Girls Service In Benga...Sheetaleventcompany
 
Pooja : 9892124323, Dharavi Call Girls. 7000 Cash Payment Free Home Delivery
Pooja : 9892124323, Dharavi Call Girls. 7000 Cash Payment Free Home DeliveryPooja : 9892124323, Dharavi Call Girls. 7000 Cash Payment Free Home Delivery
Pooja : 9892124323, Dharavi Call Girls. 7000 Cash Payment Free Home DeliveryPooja Nehwal
 
New Call Girls In Panipat 08168329307 Shamli Israna Escorts Service
New Call Girls In Panipat 08168329307 Shamli Israna Escorts ServiceNew Call Girls In Panipat 08168329307 Shamli Israna Escorts Service
New Call Girls In Panipat 08168329307 Shamli Israna Escorts ServiceApsara Of India
 
👉Amritsar Call Girl 👉📞 8725944379 👉📞 Just📲 Call Mack Call Girls Service In Am...
👉Amritsar Call Girl 👉📞 8725944379 👉📞 Just📲 Call Mack Call Girls Service In Am...👉Amritsar Call Girl 👉📞 8725944379 👉📞 Just📲 Call Mack Call Girls Service In Am...
👉Amritsar Call Girl 👉📞 8725944379 👉📞 Just📲 Call Mack Call Girls Service In Am...Sheetaleventcompany
 
👉Chandigarh Call Girls 📞Book Now📞👉 9878799926 👉Zirakpur Call Girl Service No ...
👉Chandigarh Call Girls 📞Book Now📞👉 9878799926 👉Zirakpur Call Girl Service No ...👉Chandigarh Call Girls 📞Book Now📞👉 9878799926 👉Zirakpur Call Girl Service No ...
👉Chandigarh Call Girls 📞Book Now📞👉 9878799926 👉Zirakpur Call Girl Service No ...rajveerescorts2022
 
Zirakpur Call Girls Service ❤️🍑 7837612180 👄🫦Independent Escort Service Zirakpur
Zirakpur Call Girls Service ❤️🍑 7837612180 👄🫦Independent Escort Service ZirakpurZirakpur Call Girls Service ❤️🍑 7837612180 👄🫦Independent Escort Service Zirakpur
Zirakpur Call Girls Service ❤️🍑 7837612180 👄🫦Independent Escort Service ZirakpurSheetaleventcompany
 
Call Girls in Bangalore Prachi 💋9136956627 Bangalore Call Girls
Call Girls in  Bangalore Prachi 💋9136956627 Bangalore Call GirlsCall Girls in  Bangalore Prachi 💋9136956627 Bangalore Call Girls
Call Girls in Bangalore Prachi 💋9136956627 Bangalore Call GirlsPinki Misra
 
Call Girls Service In Udaipur 9602870969 Sajjangarh Udaipur EsCoRtS
Call Girls Service In Udaipur 9602870969 Sajjangarh Udaipur EsCoRtSCall Girls Service In Udaipur 9602870969 Sajjangarh Udaipur EsCoRtS
Call Girls Service In Udaipur 9602870969 Sajjangarh Udaipur EsCoRtSApsara Of India
 
💗📲09602870969💕-Royal Escorts in Udaipur Call Girls Service Udaipole-Fateh Sag...
💗📲09602870969💕-Royal Escorts in Udaipur Call Girls Service Udaipole-Fateh Sag...💗📲09602870969💕-Royal Escorts in Udaipur Call Girls Service Udaipole-Fateh Sag...
💗📲09602870969💕-Royal Escorts in Udaipur Call Girls Service Udaipole-Fateh Sag...Apsara Of India
 

Recently uploaded (20)

Rudraprayag call girls 📞 8617697112 At Low Cost Cash Payment Booking
Rudraprayag call girls 📞 8617697112 At Low Cost Cash Payment BookingRudraprayag call girls 📞 8617697112 At Low Cost Cash Payment Booking
Rudraprayag call girls 📞 8617697112 At Low Cost Cash Payment Booking
 
Call Girls Udaipur Just Call 9602870969 Top Class Call Girl Service Available
first step is usually to apply foundation to conceal flaws and cover the original skin texture; loose powder is then used to introduce new, usually eye-catching and pleasant textures to the skin. Other cosmetics, such as rouge, eye liner and eye shadow, are applied on top of the powder layer. Nevertheless, color makeup transfer remains a challenging task, as almost all existing techniques suffer from a number of technical and computational disadvantages. Successfully transferring makeup in digital images has therefore been a contemporary focus of research.

∗ Corresponding author. E-mail addresses: asad@mail.ustc.edu.cn, asadustc@gmail.com (A. Khan).
https://doi.org/10.1016/j.ijleo.2017.12.047
Fig. 1. Digital makeup from Internet images. (a) A subject image, taken by a common user. (b) Example style images, downloaded from the Internet. (c) The result of our approach, where the foundation effect, eye style, eyebrow style, hair style, skin style and lip highlight in (b) are successfully transferred to (a).

After entering a beauty salon, a customer usually selects an artistic example image from an available catalog and asks the makeup artist to apply the same makeup on her. It would be more convenient for the user to select more than one example image of her choice from the catalog, as this would allow her to take different parts of the face from different example images. Before committing to the selected style, it would also be quite helpful if she could preview the same makeup style on her own face. Trying makeup physically, however, is time-consuming and requires the customer to be patient. An alternative is to try the makeup digitally, using photo editing environments such as Adobe Photoshop [1]. Besides, an online commercial software, Taaz [3], provides users with virtual makeup on face photos by simulating the effects of specified cosmetics. Such methods, however, are usually tedious and rely heavily on the personal expertise and effort of the user.

This paper deals with the problem of creating makeup upon a face image (Fig. 1(a)) with multiple images (Fig. 1(b)) as the style examples. This is very practical in the beauty salon scenario described above, and our approach is inspired by the process of physical makeup. In transferring face makeup between images, we face the following challenges. First, such a technique must maintain the correspondences between meaningful image regions in an automatic way. Second, for novice users, the pipeline should be intuitive and user friendly. Third, an efficient technique is needed to optimize the makeup consistency of a collection of images depicting a common scene. The automatic generation of a trimap is another challenge, as almost all existing techniques require the user to input a trimap manually.

We propose an approach which transfers face makeup from Internet images of different types taken by professional artists, while taking advantage of high-level semantic information. Our method can be applied by a user to retouch a photo into an image of exquisite visual style. In addition, we present an approach based on matrix factorization to optimize consistency across multiple images; to achieve this, we use sparse correspondences obtained from multi-image sparse local feature matching. In summary, this article makes the following contributions:

• a new face makeup transfer technique which can transfer makeup between regions of the subject image and multiple example images sharing the same facial semantic information,
• a new algorithm for automatic trimap generation, enabling efficient synthesis of each facial semantic region,
• a semantic style transfer technique which transfers the makeup and automatically optimizes makeup consistency.
More importantly, our proposed method does not require the user to choose the same pose, face matching or image size between subject and example images. We demonstrate high-quality makeup consistency results on large collections of Internet photos.

2. Related work

Not much work on digital makeup transfer has been done so far. The first attempt at makeup transfer was made by Guo et al. [12]. Their method decomposes the subject and example faces into three layers and transfers information following a layer-by-layer correspondence. A disadvantage is that it requires warping the example image to the subject face, which is very challenging. The work of Scherbaum et al. [26] makes use of a 3D morphable face model to conduct facial makeup. The main disadvantage of their technique is that it requires before–after makeup face pairs of the same individual, which are difficult to collect in real applications. Tong et al. [28] propose a "cosmetic transfer" procedure to realistically transfer the cosmetic style captured in an example pair to another person's face; the applicability of their system is likewise limited by the requirement of subject and example face pairs. The automatic makeup recommendation and synthesis system proposed by Liu et al. [19], called Beauty e-Expert, is another work on digital makeup, but its contribution lies mainly in the recommendation module. In summary, our proposed technique enhances applicability, is more flexible to the requirements of digital makeup methods, and consequently generates more attractive and eye-catching results.

Another method, proposed by Ojima et al. [24], also uses the subject–example concept, but discusses only the foundation effect. By contrast, Tsumura et al. [30] proposed a physical model to extract hemoglobin and melanin components and simulated changes in facial appearance by adjusting their quantified amounts, addressing effects such as tanning and reddening due to cosmetics, aging and alcohol consumption. The cosmetic effects they handle are, however, quite limited and are much simpler than real makeup. The online commercial software Taaz [3] accommodates users by simulating the effects of particular cosmetics and provides virtual makeup on facial photos.

Beautification of face photos is also an interesting related topic. An automatic face photo retouching technique was proposed by Brand and Pletscher [4] to detect and remove flaws, moles and acne from faces. Another interesting work, by Leyvand et al. [17], introduced a technique for modifying face structure to enhance attractiveness, with the disadvantage of losing the identity of the subject. The idea of image processing by example can be found in image analogies [14], which provide a general framework for rendering an image in different styles; this idea was used in the work of Tong et al. [29] and Yang et al. [31]. Recently, successful fashion analysis work has been obtained by deep learning [20,18]. Dosovitskiy et al. [8,7] generate images of different objects of a given type, viewpoint and color using a generative CNN.
The work by Simonyan et al. [27] generates an image that captures what a network has learned and visualizes the notion of a class. Mahendran et al. [22] provide a general framework to invert both hand-crafted and deep representations back to an image. Gatys et al. [9] contribute a neural algorithm of artistic style based on the CNN. Goodfellow et al. [10] presented a generative adversarial network comprising two components, a generator and a discriminator; the generated images are fairly natural and free of obvious artifacts. A simpler and faster method of generating adversarial example images was provided by Goodfellow et al. [11], focusing mainly on improving CNN training rather than image synthesis. The Deep Convolutional Inverse Graphics Network proposed by Kulkarni et al. [16] learns an interpretable representation of images for 3D rendering. A considerable disadvantage of all existing deep methods is that they generate only a single image. In contrast, we focus on how to generate a new image combining the nature of two or more input images.

3. Digital makeup

3.1. Face database

There is a bulk of images available on the Internet, produced by professional photographers with high-resolution cameras. It has always been an interesting problem to generate an image of exquisite nature from such professional artistic photos by simple image processing operations. We therefore use a database in which these images are stored together with their semantic information. We segment every image of the database using simple matting techniques while detecting the key points of the different facial characteristics. Many face detection algorithms can identify a human face in scenes both easy and hard. In this article, we utilize the API provided by Face++ [2] for face detection. It uses cutting-edge computer vision and data mining technology to provide face detection, face recognition and face analysis. The landmark API we utilize robustly detects the key points of a human face, locating the facial contours, facial features and other key points. Our approach detects 83 key points on the face, as depicted in Fig. 3.
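To make the detection step concrete, the following is a minimal sketch of facial key-point extraction. The paper uses the Face++ landmark API with 83 points; since that proprietary interface is not reproduced here, the sketch substitutes dlib's publicly documented 68-point predictor as an analogous stand-in, and the model file path is an assumption:

import cv2
import dlib

# Face detector and 68-point landmark predictor; the .dat model file is an
# assumed local download, not an artifact of the paper.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def face_landmarks(image_path):
    """Return (x, y) key points for the first detected face, or [] on failure."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)  # upsample once to catch smaller faces
    if not faces:
        return []              # mirrors the failure cases shown in Fig. 9
    shape = predictor(gray, faces[0])
    return [(p.x, p.y) for p in shape.parts()]

Any landmark detector with comparable coverage of the eyebrows, eyes, mouth and face contour can be substituted; only the point count and ordering change.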
Fig. 2. Some results with different facial styles, obtained from a single subject image and a number of example images.

Fig. 3. We connect the 83 key points on the face based on face detection; the facial semantic information outline can then be obtained.

3.2. Face semantic outlines

We can now focus on the semantic analysis of the human face. Connecting the 83 detected key points in a certain order yields the facial semantic information outline. The landmark entry stores the key points of the various parts of the human face, including the eyebrows, eyes, mouth, chin, face contour and so on. Each part has some points, represented by x and y coordinates. Connecting these key points in a certain order gives the contour of each face region. The face semantic information outline is shown in Fig. 3.

3.3. Matting of face semantics

A commonly used approach to extract semantic information is the Mean-Shift image segmentation technique [6]. However, it produces unwanted hard boundaries between semantic regions. We instead employ image matting to obtain semi-transparent soft boundaries, implementing automatic matting on top of the matting technique of Chen et al. [5] by taking advantage of our automatically generated trimap. Existing natural matting algorithms often require a user to identify the background, foreground and unknown regions through manual segmentation. Constructing a suitable trimap in this way is very tedious and time-consuming, and an inaccurate trimap leads to a poor matting result. To solve this problem, we expand the contour of each facial semantic region using an expansion (dilation) algorithm. After distinguishing the foreground, background and unknown region by different colors (we set the foreground to white, the background to black and the unknown region to gray), we obtain a corresponding trimap for an image. The mathematical expression for the expansion algorithm is

$I_t(x, y) = \max_{(x', y')\,:\,\mathrm{element}(x', y') \neq 0} I_s(x + x', y + y')$,   (1)

where $I_s$ and $I_t$ are the mask before and after expansion and $\mathrm{element}$ is the structuring element.
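As a concrete illustration, the following minimal sketch builds a trimap from the contour of one semantic region, with the dilation step implementing Eq. (1). It assumes OpenCV and a landmark polygon as input; the band width is an illustrative choice, not a value from the paper:

import numpy as np
import cv2

def trimap_from_contour(shape, contour_pts, band=10):
    """shape: (h, w); contour_pts: N x 2 array outlining one semantic region."""
    mask = np.zeros(shape, np.uint8)
    cv2.fillPoly(mask, [contour_pts.astype(np.int32)], 255)  # region interior
    kernel = np.ones((band, band), np.uint8)                 # structuring element
    dilated = cv2.dilate(mask, kernel)  # Eq. (1): max filter over the element
    eroded = cv2.erode(mask, kernel)
    trimap = np.zeros(shape, np.uint8)  # background stays black
    trimap[dilated > 0] = 128           # gray band of unknown pixels
    trimap[eroded > 0] = 255            # shrunken core: definite foreground
    return trimap

A wider unknown band gives the matting algorithm more freedom along soft boundaries such as hair, at the cost of more computation.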
Fig. 4. The trimap is shown in the leftmost image; the transparent image and the synthetic image are in the middle and on the right, respectively.

Consequently, the matting image is computed from our automatically generated trimap using the method of Chen et al. [5]. The transparent image and the synthetic image are shown in Fig. 4. The automatic matting approach is also applied to subject images to obtain the basic semantic segmentation.

3.4. Semantic makeup style transfer

To transfer the makeup style of the example image to the subject image, we utilize the technique of Nguyen et al. [23] to alter the makeup style of all semantic regions of the input image. This technique is unique in its consideration of scene illumination and in the constraint that the mapped image must lie within the makeup style gamut of the example image. The mapping is

$L_i = C_e^{-1}(C_s(L_s))$,   (2)

where $L_s$, $L_i$ and $L_e$ are the subject, intermediate and example luminance, respectively, and $C_s$ and $C_e$ are the cumulative histograms of $L_s$ and $L_e$. Next, the output luminance $L_o$ is obtained by solving the linear system

$[\,\mathbb{I} + \lambda (G_x^T G_x + G_y^T G_y)\,] L_o = L_i + \lambda (G_x^T G_x + G_y^T G_y) L_s$,   (3)

where $\mathbb{I}$ is the identity matrix, $G_x$ and $G_y$ are the two gradient matrices along the x and y directions, and $\lambda$ is a regularization parameter. Once the color has been transferred in the corresponding semantic regions, we make use of the alpha-blending scheme to composite the whole portrait image; the alpha channel of each pixel is computed with the aforementioned image matting technique. Fig. 5 shows the process of color transfer between the corresponding semantic regions of the subject and example images.
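The sketch below illustrates Eqs. (2) and (3) and the alpha-blending composite on a single 8-bit luminance channel. The 256-bin quantization, the value of λ and the choice of sparse solver are our assumptions, not specifics from the paper:

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def match_luminance(Ls, Le):
    # Eq. (2): L_i = C_e^{-1}(C_s(L_s)) via 256-bin cumulative histograms.
    Cs = np.cumsum(np.bincount(Ls.ravel(), minlength=256)) / Ls.size
    Ce = np.cumsum(np.bincount(Le.ravel(), minlength=256)) / Le.size
    lut = np.searchsorted(Ce, Cs).clip(0, 255).astype(np.uint8)
    return lut[Ls]

def diff_op(h, w, axis):
    # Forward-difference gradient matrix for a row-major flattened h x w image.
    if axis == 1:  # G_x
        d = sp.diags([-1.0, 1.0], [0, 1], shape=(w - 1, w))
        return sp.kron(sp.eye(h), d, format="csr")
    d = sp.diags([-1.0, 1.0], [0, 1], shape=(h - 1, h))  # G_y
    return sp.kron(d, sp.eye(w), format="csr")

def gradient_preserving(Ls, Li, lam=1.0):
    # Eq. (3): [I + lam (Gx^T Gx + Gy^T Gy)] L_o = L_i + lam (...) L_s,
    # keeping the intermediate colors while preserving the subject's gradients.
    h, w = Ls.shape
    Gx, Gy = diff_op(h, w, 1), diff_op(h, w, 0)
    Lap = (Gx.T @ Gx + Gy.T @ Gy).tocsr()
    A = sp.eye(h * w, format="csr") + lam * Lap
    b = Li.ravel().astype(float) + lam * (Lap @ Ls.ravel().astype(float))
    return spla.spsolve(A.tocsc(), b).reshape(h, w)

def alpha_blend(fg, bg, alpha):
    # Composite one recolored semantic region over the rest of the portrait,
    # using the matte produced by the matting step as the alpha channel.
    a = alpha.astype(float)[..., None] / 255.0
    return (a * fg + (1.0 - a) * bg).astype(np.uint8)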
Fig. 5. In step 1 we specify the subject and example images; in step 2 we obtain semantic information after face detection; in step 3 we extract the semantic regions using the matting algorithm; in step 4 we separate all parts of the given semantic information; in step 5 we obtain the resulting image by alpha blending; and in the final step we apply makeup consistency optimization to obtain our result with optimized color. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of the article.)

3.5. Makeup consistency optimization

The last step of our algorithm is makeup consistency optimization, an important aspect of makeup style: without it, the result does not look impressive. We apply it after the makeup style has been transferred. To achieve a consistent appearance between the multiple example images and the resulting image, we adopt a correction model that acts globally on makeup, for the reasons described in [13]: robustness to alignment errors, ease of regularization and higher efficiency due to fewer unknown parameters. We solve only the makeup correction model in our algorithm. This simple model is

$R_s = (a R_f)^{\gamma}$,   (4)

where $R_s$ is the input image after alpha blending, $R_f$ is the required final image, $a$ is a scale factor equivalent to the white balance function [15] and $\gamma(\cdot)$ is the non-linear gamma mapping. For a collection,

$R_{f_i}(x_{ij}) = (a_i v_{ij})^{\gamma_i}$,   (5)

where $a_i$ and $\gamma_i$ are the unknown global parameters of the ith image, and the pixel term $v_{ij}$ captures un-modeled color variation due to factors such as dark and light color changes.

Taking logarithms on both sides of Eq. (5) and writing the result in matrix form, by stacking the log image intensities at each scene point into sparse column vectors of length m and placing the n columns side by side, we obtain

$R = A + V$.   (6)

Here $R \in \mathbb{R}^{m \times n}$ is the observation matrix with entries $R_{ji} = \log R_{f_i}(x_{ij})$, and $A \in \mathbb{R}^{m \times n}$ is the color coefficient matrix with entries $A_{ji} = \gamma_i \log a_i$. Since $a_i$ and $\gamma_i$ are global parameters of image i, $A_{ji}$ does not depend on the scene point j, so every row of A is identical and A has rank 1. Finally, $V \in \mathbb{R}^{m \times n}$ is the residual matrix with entries $V_{ji} = \gamma_i \log v_{ij}$.
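A minimal way to fit the decomposition of Eq. (6), assuming a dense observation matrix R (the sparse correspondences used in the paper would call for a masked or robust variant), is the best rank-1 approximation given by the SVD:

import numpy as np

def fit_consistency_model(R):
    """R: (m scene points) x (n images) matrix of log intensities, Eq. (6)."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    A = s[0] * np.outer(U[:, 0], Vt[0])  # best rank-1 approximation of R
    V = R - A                            # residual: un-modeled color variation
    return A, V

The rank-1 factor encodes the per-image gain/gamma products, from which the per-image correction can be read off up to a common scale.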
4. Results and discussion

In our method, one can use more than one example image, which makes it significantly more flexible: when the user likes some parts of the face in one image and other parts in another, our technique allows multiple example images to be combined, giving more natural results with better visual effects. The semantic analysis of the subject image using our interactive tool takes about 2 s, and the makeup style transfer step takes about 1 s, with Matlab 2014a on a PC with an Intel(R) Core(TM) i5-4690 CPU at 3.50 GHz and 8 GB RAM under Windows.

Fig. 5 exhibits the pipeline of our technique, explaining the different steps in sequence. Our method consists of simple, user-friendly steps and produces eye-catching results with good visual quality. Once we specify the subject image and a number of example images in the first step, semantic information is obtained after face detection, automatically by our algorithm. The next step applies the matting algorithm to extract the semantic regions. In the second-to-last step we obtain the resulting image using alpha blending, and in the final step we obtain our result with optimized makeup after applying makeup consistency optimization.

Furthermore, our main focus is the inside of the eye rather than the eye shadow, which produces more natural results. As it is now common to wear lenses that change the color of the eyes, our method concentrates on transferring eye style inside the eyes and on the hair while preserving boundaries effectively. It is also worth remarking that existing techniques neither employ makeup consistency nor use more than a single example image. We focus on multiple example images and employ makeup consistency optimization, which significantly enhances the practicability of our method. Consequently, we obtain more natural, contemporary results compared with existing techniques, which focus only on lips and eye shadow without proper boundary preservation.

The qualitative results are shown in Fig. 6. The first column shows the subject images, and the second column the corresponding example images. The results of Guo et al. [12], Gatys et al. [9], Liu et al. [21] and ours are listed in the third, fourth, fifth and last columns, respectively.

First we discuss the comparison with Guo et al. [12]. Although their results look natural, there are a number of gaps. For example, Guo et al. [12] always transfer much lighter makeup than the example face: the lip gloss in the example is dark red, whereas the lip gloss produced by Guo et al. [12] is orange-red. Our method produces more natural results, where the lip gloss is very close to the dark red of the example image. Moreover, Guo et al. [12] can only produce very light eye shadow, even though eye shadow is their focus.

Compared with Gatys et al. [9], our subject faces contain far fewer artifacts. This is because our makeup transfer is conducted between local regions, such as lip versus lip gloss, while Gatys et al. [9] transfer the makeup globally.
Fig. 6. Qualitative comparisons between the state-of-the-art methods and ours.
Fig. 7. Comparison with the results of Nguyen et al. [23] and Reinhard et al. [25].

Fig. 8. Some results with side poses, where the example images need not have the same pose or style and are not specific to particular results. The texture in the background of the result in the first row is preserved; the background of the result in the second row, which comprises a combination of colors, is preserved as well. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of the article.)

Global makeup transfer suffers from the mismatch problem between the image pair, which clearly shows the disadvantage of the method of Gatys et al. [9]. Liu et al. [21] focus on eye shadow and lips, leaving the skin color unaffected; however, the eye shadows of the two eyes differ in their resulting images, which is a clear mismatch and a disadvantage, and the boundaries of the lips and eyes are not preserved. Our method addresses the inside of the eyes, the lips, the skin and the hair while keeping boundaries well preserved, producing more attractive results of exquisite nature.

In Fig. 7, our method is compared with the techniques of Reinhard et al. [25] and Nguyen et al. [23] on their subject and example images. We compare against these color style transfer techniques because our makeup transfer is built on color style transfer, so the comparison demonstrates the efficiency of our result. In the result of Reinhard et al. [25] the color is clearly not transferred properly, yielding a blurred and imbalanced image. This is because the color transfer algorithm of [25] computes the global mean and
variance of each channel, so the spatial color distribution of the example image is reduced to per-channel statistics applied uniformly to every pixel of the subject image. In contrast, our method transfers the color style between corresponding semantic regions, so we retain the spatial distribution of the example image and keep strong visual coherence. Reinhard et al. [25] are also unable to control the color, and boundaries are not preserved. In the result of Nguyen et al. [23], the color is transferred but color balancing is not performed properly, which produces darkness on the right side of the cheek; moreover, the semantic color styles of the lips, eyes, hair, etc. are not transferred. In our result the color is transferred properly with efficient boundary preservation, and the background color is also transferred efficiently. Our method thus clearly fills gaps that are limitations of the existing techniques.

Fig. 9. Drawbacks of our method: the image in the first column shows a failure due to the lack of semantic constraints, and the image in the second column shows a problem caused by unclear face information.
Fig. 10. A result with the subject image in the leftmost column and resulting images in the second, third and fourth columns with their respective example images.

4.1. More results on makeup transfer

Fig. 1 presents an application of our method on a subject image with a number of example images. We use four example images to transfer the makeup style to the subject image, and the resulting image shows the outcome after our technique is applied. Using multiple example images is not customary in earlier work on this topic; we make use of multiple example images, which enhances the efficiency of this work.

In Fig. 2, a number of results of makeup transfer with different facial styles are presented. A single subject image is considered, and a variety of results using four example images are shown, demonstrating the diversity of our technique.

In Fig. 8, we apply our algorithm to images with side poses. Our technique is also suitable for side-pose images and produces results of the same quality as for other images. Moreover, the texture in the background of the result in the first row is well preserved, and the background of the result in the second row, which is a combination of
Fig. 11. Multi-pronged applications of our method.

different colors, is also preserved efficiently. The main purpose of this experiment is to extend our method from frontal images to side-pose images; we show that it works for these image types as well and produces results of the same quality.

In Fig. 10, further results of our technique are presented. As our method is not restricted to one example image, the results are given with multiple example images. Our technique transfers the makeup of different parts of the face, such as the hair, lips and eyes, to the subject image by taking semantic information of a similar nature from multiple example images, transforming an image of low characteristics into an artistic and exquisite image in just a few steps. The results in Fig. 10 show the efficiency of our method, as they compare favorably with existing techniques.

Fig. 11 shows some more results of our proposed method using multiple example images. They all show makeup-transferred results that effectively reflect the example colors in the subject images, while makeup preservation in the resulting image is handled successfully. We consider images with different poses and show the effective applicability of our method on these image types: as Fig. 11 shows, our method can successfully transfer makeup to subject images with different styles and poses, e.g. side poses, back poses and other poses that are normally hard to handle. One can choose a suitable combination of makeups that looks most attractive for different poses and styles.

Limitation: The limitations of our technique are exhibited in Fig. 9. In the image on the left, the face skin color matches the background color; our algorithm therefore does not detect the face and is unable to extract the semantic information. In the image on the right, the face information is unclear, which leads to a failure: the algorithm cannot detect the face and hence cannot extract the facial semantic information.

5. Conclusion

We have presented a system for stylizing the possible appearances of a person from one or more photographs. We have proposed a new semantic makeup transfer technique to efficiently change the makeup style of images. In just a few steps, our framework can transform a common image of low characteristics into an exquisite and artistic photo. The user is not required to select subject and example images with face matching, as our method can make use of multiple example images. The broad area of applications of our technique includes high-level facial semantic information and scene content analysis for transferring makeup between images efficiently. Moreover, our technique is not restricted to head-shot images, as we can also change the makeup style in the wild. The advantage of using multiple example images is that a favorite makeup style can be chosen from different images, rather than being restricted to a single one. A number of results are presented in different styles and conditions, showing the ubiquity and diversity of our method at industrial scale. While minimizing manual labor and avoiding the time-consuming operation of semantic segmentation for the example image, our framework can be broadly used in film post-production, video editing, art and design, and image processing.
Acknowledgements

This work is supported by the National Natural Science Foundation of China (61672482, 11626253) and the One Hundred Talent Project of the Chinese Academy of Sciences.

References

[1] Adobe Photoshop (2016). http://www.adobe.com/products/photoshop.html.
[2] Face++ (2016). https://www.faceplusplus.com/.
[3] Taaz (2016). http://www.taaz.com/.
[4] M. Brand, P. Pletscher, A conditional random field for automatic photo editing, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008), IEEE, 2008, pp. 1–7.
[5] Q. Chen, D. Li, C.-K. Tang, KNN matting, IEEE Trans. Pattern Anal. Mach. Intell. 35 (9) (2013) 2175–2188.
[6] D. Comaniciu, P. Meer, Mean shift: a robust approach toward feature space analysis, IEEE Trans. Pattern Anal. Mach. Intell. 24 (5) (2002).
[7] A. Dosovitskiy, T. Brox, Inverting convolutional networks with convolutional networks, CoRR abs/1506.02753 (2015).
[8] A. Dosovitskiy, J. Tobias Springenberg, T. Brox, Learning to generate chairs with convolutional neural networks, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1538–1546.
[9] L.A. Gatys, A.S. Ecker, M. Bethge, A neural algorithm of artistic style, 2015, arXiv preprint arXiv:1508.06576.
[10] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, Generative adversarial nets, in: Advances in Neural Information Processing Systems, 2014, pp. 2672–2680.
[11] I.J. Goodfellow, J. Shlens, C. Szegedy, Explaining and harnessing adversarial examples, 2014, arXiv preprint arXiv:1412.6572.
[12] D. Guo, T. Sim, Digital face makeup by example, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2009), IEEE, 2009, pp. 73–79.
[13] Y. HaCohen, E. Shechtman, D.B. Goldman, D. Lischinski, Optimizing color consistency in photo collections, ACM Trans. Graph. (TOG) 32 (4) (2013) 38.
[14] A. Hertzmann, C.E. Jacobs, N. Oliver, B. Curless, D.H. Salesin, Image analogies, in: Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, ACM, 2001, pp. 327–340.
[15] E. Hsu, T. Mertens, S. Paris, S. Avidan, F. Durand, Light mixture estimation for spatially varying white balance, ACM Trans. Graph. (TOG) 27 (2008) 70.
[16] T.D. Kulkarni, W.F. Whitney, P. Kohli, J. Tenenbaum, Deep convolutional inverse graphics network, in: Advances in Neural Information Processing Systems, 2015, pp. 2539–2547.
[17] T. Leyvand, D. Cohen-Or, G. Dror, D. Lischinski, Data-driven enhancement of facial attractiveness, ACM Trans. Graph. (TOG) 27 (2008) 38.
[18] X. Liang, S. Liu, X. Shen, J. Yang, L. Liu, J. Dong, L. Lin, S. Yan, Deep human parsing with active template regression, IEEE Trans. Pattern Anal. Mach. Intell. 37 (12) (2015) 2402–2414.
[19] L. Liu, J. Xing, S. Liu, H. Xu, X. Zhou, S. Yan, Wow! You are so beautiful today!, ACM Trans. Multimedia Comput. Commun. Appl. (TOMM) 11 (1s) (2014) 20.
[20] S. Liu, X. Liang, L. Liu, X. Shen, J. Yang, C. Xu, L. Lin, X. Cao, S. Yan, Matching-CNN meets KNN: quasi-parametric human parsing, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1419–1427.
[21] S. Liu, X. Ou, R. Qian, W. Wang, X. Cao, Makeup like a superstar: deep localized makeup transfer network, 2016, arXiv preprint arXiv:1604.07102.
[22] A. Mahendran, A. Vedaldi, Understanding deep image representations by inverting them, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 5188–5196.
[23] R. Nguyen, S. Kim, M. Brown, Illuminant aware gamut-based color transfer, Computer Graphics Forum, vol. 33, Wiley Online Library, 2014, pp. 319–328.
[24] N. Ojima, K. Yoshida, O. Osanai, S. Akasaki, Image synthesis of cosmetic applied skin based on optical properties of foundation layers, in: Proceedings of the International Congress of Imaging Science, 1999, pp. 467–468.
[25] E. Reinhard, M. Adhikhmin, B. Gooch, P. Shirley, Color transfer between images, IEEE Comput. Graph. Appl. 21 (5) (2001) 34–41.
[26] K. Scherbaum, T. Ritschel, M. Hullin, T. Thormählen, V. Blanz, H.-P. Seidel, Computer-suggested facial makeup, Computer Graphics Forum, vol. 30, Wiley Online Library, 2011, pp. 485–492.
[27] K. Simonyan, A. Vedaldi, A. Zisserman, Deep inside convolutional networks: visualising image classification models and saliency maps, 2013, arXiv preprint arXiv:1312.6034.
[28] W.-S. Tong, C.-K. Tang, M.S. Brown, Y.-Q. Xu, Example-based cosmetic transfer, in: 15th Pacific Conference on Computer Graphics and Applications (PG'07), IEEE, 2007, pp. 211–218.
[29] W.-S. Tong, C.-K. Tang, M.S. Brown, Y.-Q. Xu, Example-based cosmetic transfer, in: 15th Pacific Conference on Computer Graphics and Applications (PG'07), IEEE, 2007, pp. 211–218.
[30] N. Tsumura, N. Ojima, K. Sato, M. Shiraishi, H. Shimizu, H. Nabeshima, S. Akazaki, K. Hori, Y. Miyake, Image-based skin color and texture analysis/synthesis by extracting hemoglobin and melanin information in the skin, ACM Trans. Graph. (TOG) 22 (3) (2003) 770–779.
[31] Y. Yang, H. Zhao, L. You, R. Tu, X. Wu, X. Jin, Semantic portrait color transfer with Internet images, Multimedia Tools Appl. 76 (1) (2017) 523–541.