Detection of Lie by Involuntary Physiological
Phenomena using Distance Camera
Hirotomo Kato, Isao Nishihara, Hironari Matsuda, and Takayuki Nakata
Department of Electrical and Computer Engineering, Faculty of Engineering
Toyama Prefectural University
Imizu City, Toyama, 939-0398 Japan
E-Mail : nishihara@pu-toyama.ac.jp, nakata@pu-toyama.ac.jp
Abstract— In this paper, we verify whether the brightness value of the face changes when a person lies, using a distance camera capable of extracting feature points of the face. We propose a LieCount value that captures this characteristic luminance fluctuation, and construct a lie detection algorithm using this value. For more than half of the examinees, there was a significant difference between the LieCount value when lying and when not lying, confirming the possibility of constructing a lie detection system.
Keywords: Lie detection; Distance camera; Digital image processing; LieCount
I. INTRODUCTION
Recently, communication between people has become important, and grasping human intention in communication is very important. In order to get along with people, in our daily lives we discriminate lies and jokes, consciously or unconsciously, from the changes in expression and voice pitch that accompany utterances.
The ability to understand hidden nuances that are not spoken in such human conversation is very important. Recently, in order to make robots and human beings talk smoothly, research has been conducted to make machines distinguish human emotions. The research results have been put to practical use in products that ordinary people can purchase (Figure 1).
Figure 1. Example of a humanoid robot for communication
International Journal of Computer Science and Information Security (IJCSIS),
Vol. 15, No. 9, September 2017
172 https://sites.google.com/site/ijcsis/
ISSN 1947-5500
The robot in the figure is called “Pepper” and has many sensors, such as an ultrasonic sensor, an auditory sensor, an RGB camera, and a range camera. Human voice and expression are acquired from these sensor values, and emotion is estimated from this information. When people are happy, it shares their pleasure, and when people are sad, it tries to cheer them up. However, Pepper only recognizes human emotions; the machine does not understand lies and jokes. If this problem can be solved, sophisticated communication just like that between humans becomes possible. Therefore, in order to realize a robot that can hold a conversation like a human being, a system that automatically detects lies by machine is necessary.
In this paper, in order to realize a robot capable of sophisticated conversation, we show that it is possible to automatically detect lies by using unconscious physiological information, without attaching any device to the examinee, so that the examinee cannot intentionally hide the clues of a lie. If lies can be detected automatically without a wearable device, detecting the lies of multiple people with a single camera can also be expected.
Conventional lie detection methods are mainly based on the premise that contact-type sensors are attached to the person. Polygraph examination is a typical example; respiration, blood pressure, pulsation, skin electrical activity, and so on are used as judgment materials [1].
As lie detection methods using non-contact sensors, there are methods that use a microphone and a camera and judge from gaze, facial expression, and prosody [2]. However, these cues have the problem that examinees can hide them intentionally.
The following physiological phenomena are said, as a general theory, to be impossible to hide intentionally when a person lies [3]:
· Flushing or pallor of the face
· Change in respiration
· Heartbeat fluctuation
· Sweating
· Expansion of pupil diameter
The flushing of the face is thought to be caused mainly by an increase in tension due to lying and the resulting increase in blood flow, and is physiologically well explained. In this paper, we decided to use the cheek color corresponding to the flushing of the face, as it is the information that Kinect can obtain most conveniently.
II. LIE DETECTION METHOD USING KINECT
A. Face position detection
The ‘Kinect’ camera operates a distance (depth) camera and an RGB camera at the same time. The ‘Kinect Face Tracking SDK’ is used to obtain the red luminance value of the face. A depth value is acquired using the distance camera, and feature points of the face are obtained from the depth values. Many of the acquired feature points are concentrated on the parts of the face with large movement, such as the nose, eyes, and mouth. Red luminance values can be acquired from 108 feature points in total. The 108 feature points are shown in Figure 2, in which the detected feature points are drawn on the face. The figure shows the result of actually tracking the head; even if the examinee moves the head, the luminance value of the target part can still be obtained.
Figure 2. The 108 feature points detected on the face.
B. Picking up color
When the luminance data of a feature point is output from only a single pixel, measurement noise or noise arising from feature-point shift may appear, causing large variations in the data. Therefore, noise is reduced by averaging the brightness values in a certain range around each feature point. In this method, the average of the red luminance values in the 5 × 5 pixel area centred on the feature point is output as the red luminance value, as shown in figure 3. The calculation formula is shown in equation (1), where R(i, j) is the red luminance value at pixel (i, j), x is the horizontal pixel coordinate of the feature point, and y is its vertical coordinate.
Figure 3. Range around the detection point
\[
R_{ave} = \frac{1}{25}\sum_{i=x-2}^{x+2}\,\sum_{j=y-2}^{y+2} R(i,j) \qquad (1)
\]
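As a concrete sketch, the 5 × 5 averaging of equation (1) can be written in Python with NumPy (the array layout and function name here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def average_red_luminance(red: np.ndarray, x: int, y: int) -> float:
    """Average the red luminance over the 5x5 window centred on the
    feature point at horizontal pixel x, vertical pixel y (equation (1))."""
    window = red[y - 2:y + 3, x - 2:x + 3]  # rows = vertical, cols = horizontal
    return float(window.sum() / 25.0)

# Example: on a linear gradient the window average equals the centre value.
frame = np.arange(100, dtype=float).reshape(10, 10)
print(average_red_luminance(frame, 5, 5))  # -> 55.0
```

Clamping or skipping feature points within 2 pixels of the image border would be needed in practice.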
C. Band pass filter (BPF)
We consider any change in the luminance value other than the change caused by lying to be noise. In fact, preliminary experiments confirmed that fine measurement noise fluctuating by about 1 in luminance value
exists. In addition, examinees unconsciously move their bodies due to respiration or pulse, which may also cause noise. We consider removing these noises as well. Table 1 summarizes the noise sources considered.
TABLE I. PROPERTIES OF NOISES TO BE CONSIDERED
Noise factor Period(sec) Frequency(Hz)
Measurement noise <1 >1
Heartbeat noise 1-2 0.5-1
Breath noise 3-5 0.2-0.3
Long-cycle noise >20 <0.05
In order to remove all of these noises, a band-pass filter that passes only frequency components between 0.05 Hz and 0.2 Hz was applied. In fact, aliasing due to the 60 Hz sampling frequency occurs, so the mirrored frequency components between 29.8 Hz and 29.95 Hz were also passed. The band-pass filter B(f) is given in equation (2), where f is the frequency of the input signal.
\[
B(f) =
\begin{cases}
1 & (0.05 \le f \le 0.2 \;\text{or}\; 29.8 \le f \le 29.95) \\
0 & (f < 0.05,\; 0.2 < f < 29.8,\; \text{or}\; f > 29.95)
\end{cases} \qquad (2)
\]
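A minimal frequency-domain version of such a filter can be sketched as follows (an idealized FFT mask, not the authors' code; masking on the absolute bin frequency automatically keeps the mirrored band that equation (2) lists explicitly):

```python
import numpy as np

def band_pass(signal: np.ndarray, fs: float,
              lo: float = 0.05, hi: float = 0.2) -> np.ndarray:
    """Ideal band-pass: keep only FFT bins whose absolute frequency
    lies in [lo, hi] Hz, zero everything else, and invert."""
    spectrum = np.fft.fft(signal)
    freqs = np.fft.fftfreq(len(signal), d=1.0 / fs)
    mask = (np.abs(freqs) >= lo) & (np.abs(freqs) <= hi)
    return np.real(np.fft.ifft(spectrum * mask))

# A slow 0.1 Hz component passes; a 1 Hz "heartbeat" component is removed.
fs, n = 60.0, 6000                      # 100 s of data sampled at 60 Hz
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 0.1 * t) + 0.5 * np.sin(2 * np.pi * 1.0 * t)
y = band_pass(x, fs)
```

A brick-wall mask like this rings on transient signals; a windowed FIR or Butterworth design would behave more gently, but the ideal mask matches the definition in equation (2).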
The measured original data and the result after passing through the BPF are shown in figure 4. The horizontal axis is time and the vertical axis is the red luminance value. The red line is the original data, and the blue line is the result after passing through the BPF.
Figure 4. Measured original data (red) and the result after passing through the BPF (blue).
D. Select using points
As a preliminary experiment, the processing up to this point was performed for all 108 detected face feature points. As a result, the four points shown in figure 5, namely the eye socket, right cheek, left cheek, and jaw, had the largest dispersion of red luminance values. Only these four points were used afterwards; their point numbers are No. 24, 39, 74, and 103, respectively.
Figure 5. Four feature points for picking up luminance R.
E. Estimating LieCount
In this section, we construct a system to judge whether or not a person is lying, and propose a new indicator for lie detection called ‘LieCount’. It can be confirmed that the red luminance value is lower when a person is lying than when not lying, and we aim to detect lies using this property. Although it would be ideal to output a detection result for every frame in real time, measurement noise inevitably occurs in every frame because the luminance value is acquired with a camera. That is, if we tried to judge lie or truth from each individual frame, the result could change greatly because of the measurement noise. Therefore, since one section consists of 600 frames, a lie is detected by making the judgment 600 times, once per frame: LieCount counts the frames in which the red luminance value of a section is lower than the average red luminance value of the unit. The higher the value of LieCount, the higher the likelihood that the examinee is lying. The detection procedure is described below.
First, the 30 sections of lie data are summed and averaged by equation (3), and the 120 sections of truth data by equation (4).
\[
R_{LieAve}(f) = \frac{1}{30}\sum_{i=1}^{30} R_{l_i}(f) \quad (f = 1, 2, \ldots, 600) \qquad (3)
\]
\[
R_{TruthAve}(f) = \frac{1}{120}\sum_{j=1}^{120} R_{t_j}(f) \quad (f = 1, 2, \ldots, 600) \qquad (4)
\]
The absolute value of the difference between the averaged truth data and the averaged lie data is taken by equation (5), and the value obtained by averaging this difference over all frames by equation (6) is taken as the weight of the feature point.
\[
R_D(f) = \left| R_{TruthAve}(f) - R_{LieAve}(f) \right| \qquad (5)
\]
\[
Weighting_R = \frac{1}{600}\sum_{f=1}^{600} R_D(f) \qquad (6)
\]
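Equations (3)-(6) amount to the following computation for each of the four feature points (a sketch assuming each section is stored as a length-600 array of band-passed red luminance values; the function name is ours, not the paper's):

```python
import numpy as np

def feature_point_weight(lie_sections: np.ndarray,
                         truth_sections: np.ndarray) -> float:
    """Weight of one feature point, equations (3)-(6):
    lie_sections is 30 x 600, truth_sections is 120 x 600."""
    r_lie_ave = lie_sections.mean(axis=0)      # equation (3)
    r_truth_ave = truth_sections.mean(axis=0)  # equation (4)
    r_d = np.abs(r_truth_ave - r_lie_ave)      # equation (5)
    return float(r_d.mean())                   # equation (6)

# If lying lowers the red luminance by 1 on average, the weight is 1.
w = feature_point_weight(np.full((30, 600), 99.0), np.full((120, 600), 100.0))
print(w)  # -> 1.0
```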
Table 2 shows the weight of each feature point calculated by the equations defined above.

TABLE II. WEIGHT OF EACH FEATURE POINT

Feature point   No.24   No.39   No.74   No.103
Weighting_R     1.09    0.79    0.71    0.83
The luminance values for one unit are averaged by equation (7).
\[
R_{Ave}(f) = \frac{1}{5}\left\{ R_{Lie}(f) + R_{Truth2}(f) + R_{Truth3}(f) + R_{Truth4}(f) + R_{Truth5}(f) \right\} \qquad (7)
\]
If the luminance value of a section at frame f is smaller than the average luminance value of the unit, \(R_{Ave}(f)\), then the corresponding \(D_{No.24}\), \(D_{No.39}\), \(D_{No.74}\), and \(D_{No.103}\) are set to 1, as in equation (8).
\[
D_{No.24,39,74,103}(f) =
\begin{cases}
1 & \text{if } R_{Lie,Truth2,\ldots,Truth5}(f) < R_{Ave}(f) \\
0 & \text{if } R_{Lie,Truth2,\ldots,Truth5}(f) \ge R_{Ave}(f)
\end{cases} \qquad (8)
\]
Finally, we sum using the weighting factors in Table 2 and calculate LieCount according to equation (9).
\[
LieCount = \sum_{f=1}^{600}\left( 1.09\,D_{No.24}(f) + 0.79\,D_{No.39}(f) + 0.71\,D_{No.74}(f) + 0.83\,D_{No.103}(f) \right) \qquad (9)
\]
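Putting equations (7)-(9) together, the LieCount of one section of a unit can be sketched as follows (the data layout and names are illustrative assumptions):

```python
import numpy as np

WEIGHTS = {24: 1.09, 39: 0.79, 74: 0.71, 103: 0.83}  # from Table 2

def lie_count(unit: dict, section: str) -> float:
    """LieCount of one section, equations (7)-(9). `unit` maps each
    feature-point number to a dict of five length-600 luminance traces,
    one per card section; `section` names the trace to score."""
    total = 0.0
    for point, weight in WEIGHTS.items():
        traces = np.stack(list(unit[point].values()))
        r_ave = traces.mean(axis=0)              # equation (7): unit average
        d = unit[point][section] < r_ave         # equation (8), per frame
        total += weight * float(d.sum())         # equation (9)
    return total

# Toy unit: the "lie" trace sits 10 below the four "truth" traces, so
# every frame counts for all four points; the truth trace never does.
unit = {p: {"lie": np.full(600, 90.0),
            **{f"truth{k}": np.full(600, 100.0) for k in range(2, 6)}}
        for p in WEIGHTS}
lc_lie = lie_count(unit, "lie")       # about 600 * (1.09+0.79+0.71+0.83)
lc_truth = lie_count(unit, "truth2")  # 0.0
```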
If LieCount is large, the section is judged to be a lie.
III. EXPERIMENT AND EVALUATION
The examinee sits on a chair during the experiment. The distance between the camera and the examinee is about 1.5 m. Because head tracking is performed with Kinect, the luminance value can be acquired even if the examinee moves the head; however, if the head moves greatly, the detected points shift and the luminance value fluctuates greatly. Since we focus here on whether a lie can be detected from the luminance data obtained from the camera, we decided to fix the face on a stand. Therefore,
the examinee carries out the experiment with his or her chin placed on the chin rest of the stand. The experiment environment is shown in figure 6.
Figure 6. Experiment environment
In this paper, we conducted an experiment with reference to the card test used in polygraph examinations. The experimental procedure is shown below.
1. Prepare a total of 5 actual playing cards: "Clover 1", "Heart 4", "Heart 7", "Clover 8", and "Diamond 10".
2. The examinee picks one card at random from the actual cards.
3. Cards are presented in random order on the display (an example of a displayed card is shown in figure 7).
4. The card is shown to the examinee for 20 seconds.
5. The recorded question "Is this card the card you selected?" is played on the speaker.
6. The examinee always answers "no".
7. Repeat the above steps 4-6 for the required number of cards.
Figure 7. Example of presentation card
Of the five prepared cards, we define the card picked by the examinee as the "card of lies" and the other four cards as "cards of truth". In addition, we chose the cards so that the numbers and suits all differ, because examinees may be confused by cards sharing the same number or the same suit. At the beginning of the experiment, six cards with no relation to the experiment, which were neither "cards of lies" nor "cards of truth", were presented on the display consecutively. In this research, we refer to such unrelated cards as dummy cards, used to cool down the examinee. The reason for presenting
these cards is that, for examinees who are not accustomed to experiments, the physiological response is not stable due to the tension at the start of the experiment. Even in an actual polygraph examination, some questions are asked first to calm the examinee. In this research, one card of lies together with 4 cards of truth is defined as ‘1 unit’. Between units, 1 dummy card is inserted; the reason is to prevent the influence of the previous unit from carrying over to the next unit. One experiment consisting of all units is called ’1 set’ in this paper. The playing procedure of the cards is shown in figure 8, in which green, blue, and red cards mean dummy, truth, and lie cards, respectively.
Figure 8. Procedure for playing cards
Figure 9 shows the calculated LieCount for the six examinees. Each panel shows the average value and variance of the LieCount values over all sets, when lying and when not lying. Examinee No.1 is the same examinee as in the preliminary experiment described in the previous section.
(a) LieCount for examinee No.1 (b) LieCount for examinee No.2
(c) LieCount for examinee No.3 (d) LieCount for examinee No.4
(e) LieCount for examinee No.5 (f) LieCount for examinee No.6
Figure 9. Results of LieCount values
The higher the calculated LieCount value, the higher the likelihood of lying. As expected, it was confirmed that, overall, the value of LieCount is larger when lying. In particular, the difference between LieCount when lying and when not lying was large for examinee No.1, presumably because No.1 was the examinee of the preliminary experiment. For examinee No.3, the LieCount when telling the truth was larger than that when lying, confirming that there are cases where the red luminance values of the eye socket, right cheek, left cheek, and jaw do not decrease even when lying. Finally, for examinee No.4 there was no difference in the LieCount values, confirming that the red luminance values around the eye socket, right cheek, left cheek, and jaw did not change for this examinee either. The average value of LieCount when lying was higher for four of the six examinees, confirming the possibility of detecting lies with LieCount.
In this experiment, the variation of the data was large; as shown in the figure, the standard deviation was relatively larger than the difference between Truth and Lie. This means that the LieCount value when not lying can easily exceed the LieCount value when lying, so it is difficult to evaluate this lie detection system by the raw LieCount values alone.
Since the examinee lies only once in the five sections of one unit, an evaluation by score was carried out. Among the 5 sections of 1 unit, scores of 5, 4, 3, 2, and 1 points are allocated in descending order of LieCount, and the scores of the 30 units per examinee are totaled. In other words, the higher the score, the higher the rank that LieCount attains within each unit. The results are shown in Table 3.
TABLE III. AVERAGE SCORES IN ALL UNITS
Examinee Lie Truth
No.1 130 80
No.2 114 84
No.3 111 85
No.4 92 90
No.5 85 91
No.6 98 88
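The ranking step described above can be sketched as follows (a hypothetical helper; the paper does not give its implementation):

```python
def unit_scores(lie_counts: list) -> list:
    """Assign scores 5, 4, 3, 2, 1 to the five sections of one unit,
    in descending order of LieCount (ties broken by position)."""
    order = sorted(range(len(lie_counts)),
                   key=lambda i: lie_counts[i], reverse=True)
    scores = [0] * len(lie_counts)
    for rank, idx in enumerate(order):
        scores[idx] = len(lie_counts) - rank
    return scores

# The section with the highest LieCount gets 5 points.
print(unit_scores([2052.0, 310.0, 95.0, 560.0, 120.0]))  # -> [5, 3, 1, 4, 2]
```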
When trying to find the lie under the premise that only one of the five answers is a lie, the lie score was higher than the truth score for five examinees including No.1, suggesting that accurate lie detection is possible. Only examinee No.5 showed the opposite result, and this result differs from Figure 9. This suggests that combining these two determinations may enable correct lie detection.
IV. SUMMARY
In this paper, using a distance camera combined with an RGB camera, we examined a fundamental recognition technology for lie detection based on the RGB information at the skin-color positions specified by the distance camera.
In the future, we plan to consider lie detection methods for cases where there is no lie or where the examinee lies multiple times. In addition, we aim to detect lies using various parameters, such as flushing or pallor and pulsation of the face, which are said to be difficult for subjects to camouflage. Finally, we will investigate a lie detection method using a multidimensional space with multiple parameters, and further consider a neuron-like judgment method using deep learning.
REFERENCES
[1] Patrick, C. J., & Iacono, W. G. (1989). Psychopathy, threat, and polygraph test accuracy. Journal of Applied Psychology, 74(2), 347-355.
doi:10.1037/0021-9010.74.2.347
[2] Y. Ohmoto, K. Ueda, and T. Ohno, "Real-time system for measuring gaze direction and facial features: towards automatic discrimination of lies using diverse nonverbal information", AI & SOCIETY, Vol. 23, Issue 2, pp. 187-200 (2009). ISSN 1435-5655, doi:10.1007/s00146-007-0138-x
[3] Charles V. Ford, ”Lies! Lies!! Lies!!!: The Psychology of Deceit”, Amer Psychiatric Pub ISBN-13: 978-0880487399 (1996)
[4] S. Hamaki, S. Nakano, and I. Nishihara, "A Study on Human Motion Detection Method with Range Camera", ITE Annual Convention 2011, 6-2 (2011) (written in Japanese, Japanese title: "距離カメラを用いた人物の行動検出法の検討"). doi:10.11485/iteac.2011.0_6_2
[5] H. Kato, H. Matsuda, and T. Nakata, "Method of estimation lies using distance camera to detect unconscious physiological phenomenon", International Workshop on Advanced Image Technology 2016 (IWAIT2016), 3C-6 (2016)
AUTHORS PROFILE
Hirotomo Kato received his B.E. and M.E. degrees from Toyama Prefectural University in 2015 and 2017, respectively. His interests include digital signal processing.
Isao Nishihara received his B.E., M.E., and Ph.D. degrees in Physical Information Engineering from Tokyo Institute of Technology in 1995, 1997, and 2000, respectively. He is now an assistant professor in the Faculty of Engineering at Toyama Prefectural University in Japan. His research interests include digital video image processing, human interfaces, and virtual 3D worlds.
Hironari Matsuda received his B.S., M.S., and Ph.D. degrees in physics from the University of Tokyo, Tokyo, Japan, in 1976, 1978, and 1982, respectively.
Since he joined Hitachi Ltd. in 1982, he has been engaged in research and development on photonic transmission subsystems. In 2003, he joined Toyama
Prefectural University, and is currently a Professor with the Faculty of Engineering. His research interests include photonic transmission systems, photonic
access networks, and photonic switching systems.
Takayuki Nakata received his B.E., M.E., and Ph.D. degrees from Kanazawa University in 1998, 2001, and 2004, respectively. From 2002 to 2004 he was a special research student at Yokohama National University. In 2004, he joined the Faculty of Engineering, Toyama Prefectural University, where he is currently an associate professor. His research interests include 3D object recognition, 3D displays, etc.
Introduction to ArtificiaI Intelligence in Higher Educationpboyjonauth
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityGeoBlogs
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon AUnboundStockton
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introductionMaksud Ahmed
 

Recently uploaded (20)

call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
 
URLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website AppURLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website App
 
Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1
 
Concept of Vouching. B.Com(Hons) /B.Compdf
Concept of Vouching. B.Com(Hons) /B.CompdfConcept of Vouching. B.Com(Hons) /B.Compdf
Concept of Vouching. B.Com(Hons) /B.Compdf
 
Alper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentAlper Gobel In Media Res Media Component
Alper Gobel In Media Res Media Component
 
mini mental status format.docx
mini    mental       status     format.docxmini    mental       status     format.docx
mini mental status format.docx
 
MENTAL STATUS EXAMINATION format.docx
MENTAL     STATUS EXAMINATION format.docxMENTAL     STATUS EXAMINATION format.docx
MENTAL STATUS EXAMINATION format.docx
 
Staff of Color (SOC) Retention Efforts DDSD
Staff of Color (SOC) Retention Efforts DDSDStaff of Color (SOC) Retention Efforts DDSD
Staff of Color (SOC) Retention Efforts DDSD
 
Accessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactAccessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impact
 
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
 
How to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxHow to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptx
 
Separation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and ActinidesSeparation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and Actinides
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptx
 
Mastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionMastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory Inspection
 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Education
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activity
 
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdfTataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon A
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introduction
 

Detection of Lie by Involuntary Physiological Phenomena using Distance Camera

Hirotomo Kato, Isao Nishihara, Hironari Matsuda, and Takayuki Nakata
Department of Electrical and Computer Engineering, Faculty of Engineering
Toyama Prefectural University, Imizu City, Toyama, 939-0398 Japan
E-mail: nishihara@pu-toyama.ac.jp, nakata@pu-toyama.ac.jp

Abstract— In this paper, we verify whether the brightness of the face changes when a person lies, using a distance camera capable of extracting facial feature points. We propose a value called LieCount that captures this characteristic luminance fluctuation, and construct a lie detection algorithm using it. For more than half of the examinees there was a significant difference between the LieCount value when lying and when not lying, confirming that a lie detection system of this kind can be constructed.

Keywords— Lie detection; Distance Camera; Digital Image Processing; LieCount

I. INTRODUCTION

Communication between people is important, and so is grasping a person's intention during communication. To interact with others well, in our daily lives we discriminate between lies and jokes, consciously or unconsciously, from the changes in expression and voice pitch that accompany speech.

The ability to understand hidden nuances that are not spoken in conversation is very important. Recently, in order to make robots and humans talk smoothly, research has been conducted on making machines distinguish human emotions. Such results have already been put to practical use in products that ordinary people can purchase (Figure 1).

Figure 1. Example of a humanoid robot for communication
The robot in the figure is called "Pepper" and has many sensors, such as an ultrasonic sensor, an auditory sensor, an RGB camera, and a range camera. Human voice and expression are acquired from these sensors, and emotion is estimated from this information: when people are happy the robot shares their pleasure, and when people are sad it tries to cheer them up. However, Pepper only recognizes human emotions; the machine does not understand lies or jokes. If this could be solved, sophisticated communication just like that between humans would become possible. Therefore, in order to realize a robot that can carry on a conversation like a human being, a system for automatically detecting lies by machine is necessary.

In this paper, toward a robot capable of sophisticated conversation, we automatically detect lies without attaching any device to the examinee, by using unconscious physiological information that the examinee cannot intentionally suppress to hide the clues of a lie. If lies can be detected automatically without wearable equipment, detecting the lies of multiple people with a single camera also becomes possible.

Conventional lie detection methods are mostly premised on contact-type sensors attached to a person. A typical example is the polygraph examination, which uses respiration, blood pressure, pulse, skin electrical activity, and so on as judgment material [1]. As a lie detection method using non-contact sensors, there is a method that uses a microphone and a camera and judges from gaze, facial expression, and prosody [2]. However, these cues can be hidden intentionally by the examinee. In contrast, the following physiological phenomena are generally said to be impossible to hide intentionally when a person lies [3]:
· Flushing of the face
· Pallor
· Change in respiration
· Heartbeat fluctuation
· Sweating
· Expansion of pupil diameter

Flushing of the face is thought to be caused mainly by increased tension due to lying, which increases blood flow, and is physiologically well explained. In this paper, we decided to use the cheek color corresponding to facial flushing, as it is the information that Kinect can obtain most conveniently.

II. LIE DETECTION METHOD USING KINECT

A. Face position detection

The Kinect camera operates a distance camera and an RGB camera at the same time. The Kinect Face Tracking SDK is used to obtain the red luminance value of the face. A depth value is acquired with the distance camera, and feature points of the face are extracted from it. Most of the feature points are concentrated on the parts of the face that move strongly, such as the nose, eyes, and mouth. Red luminance values can be acquired from as many as 108 feature points in total. The 108 feature points are shown in Figure 2, where the detected feature points are drawn on the face. The figure also shows the result of actually tracking the head: even when the examinee moves the head, the luminance value of each part can still be obtained.
Figure 2. The 108 detected feature points on the face

B. Picking up color

When the luminance of a feature point is read from a single pixel, measurement noise or noise arising from small shifts of the feature point can appear, causing large variations in the data. Therefore, noise is reduced by averaging the luminance values in a small region around each feature point. In this method, the average of the red luminance values in the 5 × 5 pixel area centred on the feature point is output as the red luminance value of that point, as shown in Figure 3. Here x and y are the horizontal and vertical pixel coordinates, and R(x, y) is the red luminance value at the pixel of the feature point. The averaged value is computed by equation (1):

R_{ave} = \frac{1}{25} \sum_{i=x-2}^{x+2} \sum_{j=y-2}^{y+2} R(i, j)    (1)

Figure 3. Range around the detection point

C. Band pass filter (BPF)

We treat any change of the luminance value other than the change caused by lying as noise to be removed. Even in preliminary experiments, it was confirmed that fine measurement noise fluctuating by about 1 in luminance value
exists. Also, humans unconsciously move the body because of respiration or pulse, which may cause noise. We consider removing these noises as well. Table 1 summarizes what is considered noise.

TABLE I. PROPERTIES OF NOISES TO BE CONSIDERED

Noise factor        Period (sec)   Frequency (Hz)
Measurement noise   < 1            > 1
Heartbeat noise     1–2            0.5–1
Breath noise        3–5            0.2–0.3
Long-cycle noise    > 20           < 0.05

To remove all of these noises, a band pass filter that transmits only frequency components between 0.05 Hz and 0.2 Hz was applied. Because of the mirrored spectrum arising at the sampling frequency of 60 Hz, the components between 29.8 Hz and 29.95 Hz were also passed. The band pass filter B(f) is given by equation (2), where f is the frequency of the input signal:

B(f) = \begin{cases} 1 & (0.05 \le f \le 0.2 \ \mathrm{or}\ 29.8 \le f \le 29.95) \\ 0 & (\mathrm{otherwise}) \end{cases}    (2)

The measured original data and the result after passing through the BPF are shown in Figure 4. The horizontal axis is time and the vertical axis is the red luminance value; the red line is the original data, and the blue line is the result after the BPF.

Figure 4. Red luminance before (red) and after (blue) the band pass filter

D. Select using points

As a preliminary experiment, the processing up to this point was performed for all 108 detected face part positions. As a result, the four points with the largest variance of the red luminance value were the eye socket, right cheek, left cheek, and jaw, shown in Figure 5. Only these four points were used afterwards; their point numbers are No. 24, 39, 74, and 103, respectively.
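Sections B and C amount to a 5 × 5 box average of the red channel (equation (1)) followed by an ideal band-pass filter (equation (2)). A minimal NumPy sketch; the function names and border clipping are our own additions, not part of the paper:

```python
import numpy as np

def average_red_luminance(red: np.ndarray, x: int, y: int) -> float:
    """Equation (1): mean red luminance in the 5x5 window centred on
    feature point (x, y); the window is clipped at the image border."""
    h, w = red.shape
    window = red[max(y - 2, 0):min(y + 3, h), max(x - 2, 0):min(x + 3, w)]
    return float(window.mean())

def band_pass(signal: np.ndarray, fs: float = 60.0,
              lo: float = 0.05, hi: float = 0.2) -> np.ndarray:
    """Equation (2): zero every frequency component outside [lo, hi] Hz.
    With rfft only non-negative frequencies are stored, so the mirrored
    29.8-29.95 Hz band is handled implicitly."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    keep = (freqs >= lo) & (freqs <= hi)
    return np.fft.irfft(spectrum * keep, n=len(signal))

# A slow 0.1 Hz drift survives the filter; a 1 Hz heartbeat-like
# component is removed.
t = np.arange(0, 60, 1 / 60.0)
raw = np.sin(2 * np.pi * 0.1 * t) + np.sin(2 * np.pi * 1.0 * t)
filtered = band_pass(raw)
```

Masking the FFT directly gives the ideal (brick-wall) response of equation (2); a real implementation might prefer a windowed FIR filter to avoid ringing, but the sketch matches the filter as defined.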
Figure 5. Four feature points for picking up luminance R

E. Estimating LieCount

In this section, we construct a system that judges whether or not a person is lying, and propose a new indicator for lie detection called "LieCount". It can be confirmed that the red luminance value is lower when the person is lying than when not lying, and we aim to detect lies using this property. Ideally the detection result would be output for every frame in real time, but since the luminance value is acquired with a camera, measurement noise inevitably occurs in every frame. If we tried to judge lie or truth frame by frame, the result could change greatly because of this noise. Therefore, since one section consists of 600 frames, a lie is detected by aggregating the 600 per-frame judgments: for each frame in which the red luminance value of a section is lower than the average red luminance of the unit, a counter is incremented, and the examinee is judged to have a high probability of lying in that section. In other words, the higher the value of LieCount, the higher the likelihood that the examinee is lying.

The detection procedure is as follows. First, the lie data are averaged over 30 sections by equation (3), and the truth data over 120 sections by equation (4):

R_{LieAve}(f) = \frac{1}{30} \sum_{i=1}^{30} R_{Lie_i}(f)   \quad (f = 1, 2, \ldots, 600)    (3)

R_{TruthAve}(f) = \frac{1}{120} \sum_{j=1}^{120} R_{Truth_j}(f)   \quad (f = 1, 2, \ldots, 600)    (4)

The absolute difference between the averaged truth data and the averaged lie data is taken by equation (5), and its average over the frames, computed by equation (6), is used as the weight of the feature point.
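The sectionwise averaging of equations (3) and (4) and the weight derivation of equations (5) and (6) reduce to a few array operations. A sketch under the assumption that the per-section luminance data are stacked into arrays; the function and variable names are our own:

```python
import numpy as np

def feature_weight(lie_sections: np.ndarray,
                   truth_sections: np.ndarray) -> float:
    """Weight of one feature point from equations (3)-(6).

    lie_sections:   shape (30, 600)  - red luminance, 30 lie sections
    truth_sections: shape (120, 600) - red luminance, 120 truth sections
    """
    lie_ave = lie_sections.mean(axis=0)      # equation (3)
    truth_ave = truth_sections.mean(axis=0)  # equation (4)
    r_d = np.abs(truth_ave - lie_ave)        # equation (5)
    return float(r_d.mean())                 # equation (6)

# If every lie frame reads 100 and every truth frame 103, the weight is 3.
w = feature_weight(np.full((30, 600), 100.0), np.full((120, 600), 103.0))
```

Running this over the data of the four selected feature points would produce the weights of Table 2.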
R_D(f) = \left| R_{TruthAve}(f) - R_{LieAve}(f) \right|    (5)

R_{Weighting} = \frac{1}{600} \sum_{f=1}^{600} R_D(f)    (6)

Table 2 shows the weight of each feature point calculated by the equations above.

TABLE II. WEIGHT OF EACH FEATURE POINT

Feature point   No.24   No.39   No.74   No.103
R_Weighting     1.09    0.79    0.71    0.83

The luminance values of one unit are averaged frame by frame with equation (7):

R_{Ave}(f) = \frac{1}{5} \{ R_{Lie}(f) + R_{Truth2}(f) + R_{Truth3}(f) + R_{Truth4}(f) + R_{Truth5}(f) \}    (7)

If the luminance value of a section at frame f is smaller than the unit average R_{Ave}(f), the corresponding indicator D_{No.24}, D_{No.39}, D_{No.74}, or D_{No.103} is set to 1:

D_{No.24,39,74,103}(f) = \begin{cases} 1 & (R_{Lie,Truth2,\ldots,Truth5}(f) < R_{Ave}(f)) \\ 0 & (R_{Lie,Truth2,\ldots,Truth5}(f) \ge R_{Ave}(f)) \end{cases}    (8)

Finally, the indicators are summed with the weighting factors of Table 2 to compute LieCount by equation (9):

LieCount = \sum_{f=1}^{600} \left( 1.09\, D_{No.24}(f) + 0.79\, D_{No.39}(f) + 0.71\, D_{No.74}(f) + 0.83\, D_{No.103}(f) \right)    (9)

If LieCount is a large value, the section is judged to be a lie.

III. EXPERIMENT AND EVALUATION

The examinee sits on a chair about 1.5 m from the camera. Because head tracking is done with Kinect, the luminance value can be acquired even if the examinee moves the head, but a large movement shifts the detected points and makes the luminance value fluctuate greatly. Since we focus here on whether a lie can be detected from the luminance data obtained from the camera, we decided to fix the face on a stand. Therefore,
the chin is placed on the stand, and the examinee carries out the experiment with his or her head on this chin rest. The experimental environment is shown in Figure 6.

Figure 6. Experiment environment

In this paper, we conducted an experiment with reference to the card test of the polygraph examination. The procedure is as follows:

1. Prepare a total of five actual playing cards: "Clover 1", "Heart 4", "Heart 7", "Clover 8", and "Diamond 10".
2. The examinee picks one card at random from the actual cards.
3. Cards are presented in random order on the display (an example is shown in Figure 7).
4. Each card is shown to the examinee for 20 seconds.
5. The recorded question "Is this card the card you selected?" is played through the speaker.
6. The examinee always answers "no".
7. Steps 4–6 are repeated for the required number of cards.

Figure 7. Example of presentation card

Of the five prepared cards, the card picked by the examinee is defined as the "lie card" and the other four as "truth cards". The lie card and truth cards were chosen so that the numbers and suits all differ, because the examinees might be confused by cards sharing the same number or the same suit. At the beginning of the experiment, six cards entirely unrelated to the experiment, being neither lie cards nor truth cards, were presented on the display in succession. In this research, we refer to these unrelated cards as dummy cards, used to cool down the examinee. The reason for presenting
this card is that, for examinees unaccustomed to experiments, the physiological response is not stable because of the tension at the start of the experiment. Even in an actual polygraph examination, some preliminary questions are asked first to calm the examinee.

In this research, one lie card together with four truth cards is defined as "1 unit". Between units, one dummy card is inserted, to prevent the influence of the previous unit from carrying over to the next. One experiment containing all units is called "1 set" in this paper. The playing procedure of the cards is shown in Figure 8, where green, blue, and red cards mean dummy, truth, and lie cards, respectively.

Figure 8. Procedure for playing cards

Figure 9 shows the LieCount calculated for six examinees. Each panel shows the average value and variance of the LieCount values over all sets when lying and when not lying. Examinee No.1 is the same examinee as in the preliminary experiment of the previous section.
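The per-section LieCount of equations (7)–(9), which underlies the results in Figure 9, can be sketched as follows using the Table 2 weights; the array layout and names are our own assumptions:

```python
import numpy as np

# Table 2 weights for feature points No.24, No.39, No.74, No.103.
WEIGHTS = np.array([1.09, 0.79, 0.71, 0.83])

def lie_count(section: np.ndarray, unit: np.ndarray) -> float:
    """LieCount of one section (equations (7)-(9)).

    section: shape (4, 600) - red luminance of this section at the four
             feature points over the 600 frames
    unit:    shape (5, 4, 600) - the five sections of the unit
             (one lie + four truth), same layout
    """
    unit_ave = unit.mean(axis=0)           # equation (7), per feature point
    d = section < unit_ave                 # equation (8): shape (4, 600)
    return float(WEIGHTS @ d.sum(axis=1))  # equation (9)

# A section darker than the unit average on every frame scores
# 600 * (1.09 + 0.79 + 0.71 + 0.83) = 2052.
unit = np.full((5, 4, 600), 100.0)
dark = np.full((4, 600), 90.0)
score = lie_count(dark, unit)
```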
(a) LieCount for examinee No.1  (b) LieCount for examinee No.2  (c) LieCount for examinee No.3
(d) LieCount for examinee No.4  (e) LieCount for examinee No.5  (f) LieCount for examinee No.6

Figure 9. Results of LieCount values

A higher LieCount means a higher likelihood of lying. As expected, overall it was confirmed that the LieCount value when lying is larger. In particular, the difference between LieCount when lying and when not lying was large for examinee No.1, which we attribute to No.1 being the examinee of the preliminary experiment. For examinee No.3, the LieCount when not lying is larger than when lying, confirming that there are cases in which the red luminance of the eye socket, right cheek, left cheek, and jaw does not decrease even when lying. Finally, there was almost no difference in LieCount for examinee No.4, confirming that the red luminance around the eye socket, right cheek, left cheek, and jaw hardly changed for this examinee. For four of the six examinees the average LieCount when lying was higher, confirming the possibility of lie detection with LieCount.
In this experiment, the variation of the data was large; as shown in the figure, the standard deviation was relatively larger than the difference between Truth and Lie. This means the LieCount value when not lying can easily exceed the LieCount value when lying, so it is difficult to evaluate this lie detection system by the raw LieCount alone.

An evaluation by score was therefore carried out, to test whether the single lie among the five sections of a unit can be identified. Within the five trials of one unit, scores of 5, 4, 3, 2, and 1 point are assigned in descending order of LieCount, and the scores of the 30 units are accumulated per examinee. In other words, the higher the score, the higher the rank LieCount attained within each unit. The results are shown in Table 3; the Lie column is the total score of the lie card over the 30 units, and the Truth column is the corresponding total averaged over the four truth cards.

TABLE III. SCORES OVER ALL UNITS

Examinee   Lie    Truth
No.1       130    80
No.2       114    84
No.3       111    85
No.4       92     90
No.5       85     91
No.6       98     88

Under the premise that exactly one of the five answers is a lie, the lie score was higher for five of the examinees, including No.1, suggesting that accurate lie detection is possible. Only examinee No.5 showed the opposite result, which differs from Figure 9; this suggests that combining the two determinations could yield a correct lie judgment.

IV. SUMMARY

In this paper, using a distance camera combined with an RGB camera, a fundamental recognition technique for lie detection was examined, based on the RGB information at the skin-color positions specified by the distance camera.
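The score evaluation, assigning 5 down to 1 point to the five sections of a unit in descending order of LieCount, can be sketched as follows (the function name is our own):

```python
def unit_scores(lie_counts: list) -> list:
    """Assign 5, 4, 3, 2, 1 points to the five sections of a unit in
    descending order of their LieCount values."""
    order = sorted(range(5), key=lambda i: lie_counts[i], reverse=True)
    scores = [0] * 5
    for rank, idx in enumerate(order):
        scores[idx] = 5 - rank
    return scores

# The section with the largest LieCount receives 5 points.
s = unit_scores([2052.0, 1800.0, 1950.0, 1700.0, 1600.0])
print(s)  # [5, 3, 4, 2, 1]
```

Summing the score of the lie section over the 30 units of a set yields the Lie column of Table 3.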
In the future, we plan to consider lie detection when no lie is told, or when lies are told more than once. In addition, we aim to detect lies using various parameters that are said to be difficult for subjects to camouflage, such as flushing and pallor of the face and its pulsation. Finally, we will investigate a lie detection method using a multidimensional space of multiple parameters, and further consider a neuron-like judgment method using deep learning.

REFERENCES
[1] C. J. Patrick and W. G. Iacono, "Psychopathy, threat, and polygraph test accuracy", Journal of Applied Psychology, Vol. 74, No. 2, pp. 347-355 (1989). doi:10.1037/0021-9010.74.2.347
[2] Y. Ohmoto, K. Ueda, and T. Ohno, "Real-time system for measuring gaze direction and facial features: towards automatic discrimination of lies using diverse nonverbal information", AI & SOCIETY, Vol. 23, Issue 2, pp. 187-200 (2009). doi:10.1007/s00146-007-0138-x
[3] Charles V. Ford, "Lies! Lies!! Lies!!!: The Psychology of Deceit", American Psychiatric Publishing, ISBN-13: 978-0880487399 (1996)
[4] S. Hamaki, S. Nakano, and I. Nishihara, "A Study on Human Motion Detection Method with Range Camera", ITE Annual Convention 2011, 6-2 (2011) (in Japanese). doi:10.11485/iteac.2011.0_6_2
[5] H. Kato, H. Matsuda, and T. Nakata, "Method of estimation lies using distance camera to detect unconscious physiological phenomenon", International Workshop on Advanced Image Technology 2016 (IWAIT2016), 3C-6 (2016)

AUTHORS PROFILE

Hirotomo Kato received his B.E. and M.E. degrees from Toyama Prefectural University in 2015 and 2017, respectively. His interests include digital signal processing.

Isao Nishihara received his B.E., M.E., and Ph.D. degrees in Physical Information Engineering from Tokyo Institute of Technology in 1995, 1997, and 2000, respectively. He is now an assistant professor in the Faculty of Engineering at Toyama Prefectural University in Japan. His research interests include digital video image processing, human interfaces, and virtual 3D worlds.

Hironari Matsuda received his B.S., M.S., and Ph.D. degrees in physics from the University of Tokyo, Tokyo, Japan, in 1976, 1978, and 1982, respectively. Since joining Hitachi Ltd. in 1982, he has been engaged in research and development on photonic transmission subsystems. In 2003, he joined Toyama Prefectural University, where he is currently a professor with the Faculty of Engineering. His research interests include photonic transmission systems, photonic access networks, and photonic switching systems.

Takayuki Nakata received his B.E., M.E., and Ph.D. degrees from Kanazawa University in 1998, 2001, and 2004, respectively. From 2002 to 2004 he was a special research student at Yokohama National University. In 2004, he joined the Faculty of Engineering, Toyama Prefectural University, where he is currently an associate professor. His research interests include recognition of 3D objects, 3D displays, etc.