Personality and Individual Differences 51 (2011) 764–768
Emotional intelligence and social perception
Kendra P.A. DeBusk, Elizabeth J. Austin ⇑
Department of Psychology, School of Philosophy, Psychology, and Language Sciences, University of Edinburgh, 7 George Square, Edinburgh EH8 9JZ, UK
Article history:
Received 18 March 2011
Received in revised form 22 June 2011
Accepted 24 June 2011
Available online 23 July 2011

Keywords:
Emotional intelligence
Social perception
Cross-race
Cross-cultural

doi:10.1016/j.paid.2011.06.026
0191-8869/$ - see front matter © 2011 Elsevier Ltd. All rights reserved.

⇑ Corresponding author. E-mail address: [email protected] (E.J. Austin).
One of the key facets of emotional intelligence (EI) is the capacity of an individual to recognise emotions in others. However, this has not been tested cross-culturally, despite the body of research indicating that people are better at recognising facial affect in members of their own culture. Given the emotion recognition aspect of EI, it would seem that EI should be related to correctly identifying emotion in others regardless of race. In order to test this, a social perception inspection time task was carried out in which participants (41 Caucasian and 46 Far-East Asian) were required to identify the emotion on Caucasian and Far-East Asian faces that were happy, sad, or angry. Results from this study indicate that EI was not related to correctly identifying facial expressions. The results did confirm that participants are better able to recognise emotions on faces of their own ethnicity, though this was only applicable to negative emotions.
© 2011 Elsevier Ltd. All rights reserved.
1. Introduction
Emotion perception is an important capability which impacts the ability of individuals to negotiate their social environment. There is evidence that the ability to perceive others’ emotions is affected by whether the target person is a member of the same racial or cultural group as the perceiver. This phenomenon is conceptually linked to that of facial recognition as a function of target race/culture. In order to place the literature on cross-race and cross-culture facial emotion recognition in context, we first review the literature on cross-group face recognition.
A meta-analysis (Meissner & Brigham, 2001) indicated a robust own-race bias in memory for faces. The theoretical interpretation of this phenomenon has been based on the idea that greater exposure to an individual’s own racial group than to other groups allows them to develop greater expertise in recognising own-race faces. More detailed studies have linked this performance advantage to more efficient encoding and greater use of holistic processing when the target is an own-race face (e.g. Michel, Caldara, &
Rossion, 2006; Walker & Tanaka, 2003). A complex socially-determined process underlying the own-race recognition bias is indicated by studies showing that there is a more general in-group recognition advantage and greater holistic processing when the target stimuli are faces belonging to social groups unrelated to race, for example when same-race face stimuli are identified as being pictures of students either at the same or a different university to the participant’s, and even when using a minimal-group paradigm in which arbitrary social groups are defined and assigned to the participants and stimuli (Bernstein, Young, & Hugenberg, 2007). Evidence that there is also an effect of emotional state on face recognition biases comes from a study showing that the own-race bias is reduced by a positive mood induction, which is suggested to be due to positive mood enhancing one or both of holistic processing and more inclusive social categorisation (Johnson & Fredrickson, 2005).
Given the results on cross-race and cross-group biases in face recognition, it is reasonable to expect that similar effects might be found in the identification of facial expressions of emotion. Ekman’s (1968) pioneering research showed that facial expressions of emotion are similar across cultures and races, and a meta-analysis by Elfenbein and Ambady (2002) confirmed that cross-group emotion recognition occurs at better than chance levels. However, there is also an in-group advantage, i.e. higher emotion recognition accuracy is found when the perceiver and target both belong to the same national, ethnic or regional group. There is also an exposure effect: cross-group emotion recognition accuracy is higher for groups which have more contact with each other, and minority group members are better at judging the emotions of majority group members than vice versa. Explanations of this effect have mainly focused on the existence of cultural differences which moderate the appearance of facial expressions of emotion; such differences have been referred to as facial “dialects” (Elfenbein & Ambady, 2003). This phenomenon, together with the effects of the degree of familiarity with such expression variations (depending on amount of contact with the other group), would account for the pattern of results discussed above. Such an interpretation is supported by studies in which the in-group advantage for emotion recognition has been found when the group membership of the targets cannot be determined by the perceivers (e.g. US American
Caucasian perceivers judging a target set comprising pictures of Caucasians from several cultures, Elfenbein & Ambady, 2002). It is also possible that the processing differences for in- and out-group faces found in the face recognition studies discussed earlier play a role in the recognition of facial expressions.
Related research on the recognition and attribution of specific emotions has shown that in a neutral social context smiling is attributed more often to in-group than out-group members (Beaupré & Hess, 2003), whilst angry faces are more frequently categorised as belonging to an out-group member (Dunham, 2011). Response latencies for emotion recognition are also moderated by group membership, as shown in two studies by Hugenberg (2005), who found that European Americans identified happy facial expressions more rapidly than sad or angry expressions when the target face was White, but that this effect reversed if the target face was Black. A moderating effect of target group membership on emotional response has also been found, with emotional mimicry of fear and anger being more pronounced for in-group compared to out-group targets (van der Schalk et al., 2011).
The results of studies which show that emotions play a role in the recognition, attribution and response to facial expressions of in- and out-groups provide a motivation for examining the role of individual differences in these processes. Emotional intelligence (EI) is a candidate variable to study in this context, since high scorers on EI would be expected to show superior emotion recognition performance, and would also be expected to be better able to override biases which might lead to facial expressions being misread (for example being more capable of taking account of cultural variations in emotional expressions, or of counteracting target group-related biases in the perception of positive and negative emotions). There have been no studies of the effects of EI on cross-group emotion recognition, but a number of studies have linked EI with better performance on emotion and social perception tasks (e.g. Austin, 2005; Petrides & Furnham, 2003). Based on the theoretical and empirical linkages between EI and emotion perception, it is reasonable to assume that high-EI individuals should be more successful at perceiving the emotions of others regardless of race. To test this hypothesis, the present study was carried out to examine how EI is related to success in a cross-racial emotional inspection time task. Two types of EI measure were included: a trait (self-report) EI test and an ability EI test which assesses emotion-related problem-solving. For more detailed discussions of trait and ability EI see Petrides, Pita, and Kokkinaki (2007) and Mayer, Roberts, and Barsade (2008).
2. Pilot study
2.1. Pilot participants
The participants were twenty post-graduate students recruited by email. Sex, age, and race were not recorded for the pilot study.
2.2. Pilot measures

2.2.1. NimStim stimuli
The photographs used for this study were part of the NimStim face stimulus set (Tottenham et al., 2009). The pilot study was carried out to determine which photographs to retain for the main study from a selection of Caucasian and Asian faces from this set. The photographs utilised were of two Far-East Asian females, two Caucasian females, two Caucasian males, and one Far-East Asian male. While the original intention had been to provide participants with two photographs for each gender for both races, only one Asian male photograph was available from the stimulus set. Additional information regarding the specific origin of the Asian models was not available.
A total of 71 colour photographs were used, consisting of five female Asian models and four female Caucasian models, while for the males there was one Asian model and five Caucasian models. The photographs were shown in a non-timed PowerPoint presentation. The facial expressions in all of the photographs were shown with closed mouths, and none of the male models used had facial hair. Though the intent was to use only happy, sad, and angry expressions in the final experiment, the facial expressions shown on the photographs were angry, happy, sad, surprised, and disgusted in random order to provide more variety. In addition, a broader range of facial expression would help avoid social desirability in the responses of the participants, because they would not know which expressions were going to be used for the final study.
The participants were each given a questionnaire on which they were asked to identify the expression on each photograph. For each of the photographs, they were able to choose: happy, sad, neutral, angry, disappointed, disgusted, calm, excited, surprised, frightened, or other. The participant viewed each photograph, marked whichever of the given expressions s/he felt best described the facial expression shown, then moved on to the next photograph. They were told not to return and change any answers.
2.3. Pilot results
The results of the pilot study indicated that 100% of the participants agreed on the facial expressions of 11 of the photographs. Among these photographs, two Asian female and one Caucasian female models had 100% agreement on at least one photograph; for the males, three Caucasian models had full agreement on at least one photograph. Agreement rates for the remaining photographs of these models were then examined. Given that the full study would involve a forced choice between happy, sad, and angry, only these facial expressions were considered at this point.

The percentage of agreement for the photographs chosen for the final study can be seen in Table 1. All of these percentages were deemed acceptable levels of agreement. They also corresponded with the initial validity study of this stimulus set (Tottenham et al., 2009).
3. Main study methods
3.1. Participants
The participants were recruited via an advertisement posted by the student Careers Service which specified the need for participants to be of either British Caucasian or Far-East Asian descent. The category of Far-East Asian was further defined in the advertisement as people from China, Japan, Vietnam, or Taiwan. All participants were paid £5 for their participation in the study. The final sample comprised forty-one British Caucasians and forty-six Far-East Asians.

Participants were asked their age and race, then given instructions on how to complete the inspection time task. They were requested to fill in the EI measures upon completion of the inspection time task.
3.2. Facial affect perception inspection time task
The facial affect perception inspection time task involved a total of 105 trials in which participants had to identify faces as happy, sad, or angry. The task was comparable to the ones used by Austin (2005). Each person was shown with a happy, sad, and angry facial expression. The durations for which each picture was displayed were 25 ms, 75 ms, 100 ms, 150 ms, and 200 ms, with the order
Table 1
Percentage of agreement for facial expressions used in final study.

Sex     Nationality   % Agreement: happy   % Agreement: sad   % Agreement: angry
Female  Asian         100                  82                 94
Female  Asian         100                  94                 88
Female  Caucasian     100                  94                 100
Female  Caucasian     94                   94                 94
Male    Asian         94                   59                 94
Male    Caucasian     100                  100                71
Male    Caucasian     100                  94                 100
Table 2
Descriptive statistics for trait and ability EI, age, and inspection time task total percentage correct.

              N    Range   Mean     Std. deviation
Age           87   19.00   22.91    3.63
Ability EI    86   44.00   44.99    10.92
Trait EI      87   57.00   123.07   12.59
a_f_percenta  87   90%     54%      0.21
a_f_percenth  87   50%     96%      0.08
a_f_percents  87   40%     93%      0.09
a_m_percenta  87   100%    66%      0.26
a_m_percenth  87   80%     96%      0.11
a_m_percents  87   80%     78%      0.20
c_f_percenta  87   70%     79%      0.18
c_f_percenth  87   10%     98%      0.04
c_f_percents  87   80%     75%      0.20
c_m_percenta  87   80%     72%      0.19
c_m_percenth  87   30%     97%      0.07
c_m_percents  87   70%     90%      0.13
of presentation randomized over target, expression and duration. Participants were given a forced choice response of happy, sad, or angry for each of the faces, having to press 1 for happy, 2 for sad, and 3 for angry. These were the only keys for which a response could be recorded, in order to avoid an invalid response for any of the items. The numbers corresponding with each emotion’s response were shown after each photograph to remind the participant of the choices, and the screen with these options was shown until the participant input a response. The total time of the task was between 5 and 7 min depending on how quickly the participant responded to the pictures.
3.3. Trait EI
The Schutte et al. (1998) emotional intelligence scale is a 33-item self-report measure of trait emotional intelligence. This scale has been validated in several studies (e.g. Chapman & Hayslip, 2005; Saklofske, Austin, & Minski, 2003).
3.4. Ability EI
Ability EI was measured using the Test of Emotional Intelligence (TEMINT; Schmidt-Atzert & Bühner, 2002). The TEMINT is an ability EI test originally written in German and recently translated to English (Amelang & Steinmayr, 2006). It provides scenarios in which participants rate the emotions of a target person in each of 12 situations. It was specifically developed as a measure of ability EI, research indicates that its relationships to personality and cognitive intelligence are similar to those of the MSCEIT (Knapp-Rudolph, 2003; Schmidt-Atzert, 2002), and it has good construct and criterion validity (Blickle et al., 2009). Despite the TEMINT being a fairly new measure of ability EI, it was deemed appropriate for this particular study because of its format, which asks participants to rate the feelings of an individual in a described scenario. Given that this study required participants to identify facial affect in an inspection time task, an ability EI measure which did not call for participants to identify emotions in photographs seemed appropriate.
4. Results
4.1. Descriptive statistics and gender differences
Internal reliabilities for all of the scales were assessed using
Cronbach’s alpha. All of the scales showed acceptable alpha levels of above .70.

Table 3
Sex-specific means (SD) and t-test results for trait and ability EI.

            Female           Male            t       df   Sig.
Ability EI  44.23 (10.45)    46.73 (11.97)   −0.97   84   0.33
Trait EI    123.52 (12.34)   122 (13.25)     0.52    85   0.61

Descriptive statistics for age, trait and ability EI, and
and
the percentage correct score for each picture category are shown
in Table 2. The mean age for the sample (N = 87) was 22.91
(SD = 3.63), with 61 females and 26 males, of whom 41 were
Cau-
casian and 46 were Far-East Asian. Scores on the inspection
time
task are given as a percentage correct score for each emotion:
an-
gry (percenta), happy (percenth), and sad (percents). Both of the
races are indicated in combination with both sexes: Asian
female
(a_f), Asian male (a_m), Caucasian female (c_f) and Caucasian
male
(c_m). The total score is given in percentage correct due to the
dif-
ferent number of stimuli in each category and was compiled
from
all the durations. Interestingly, the range for the total Caucasian
fe-
15. male happy correct responses was only .10, with a mean percent
correct response 98.3%. In fact, all of the mean responses for
the
happy expression were over 95% correct, regardless of the race
or
sex of the stimulus face. In contrast, the mean correct
percentage
for Asian angry faces of both sexes was quite low: 53.9% for fe-
males and 65.8% for males.
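The category scores in Table 2 (e.g. a_f_percenta) pool trials across all five durations before computing percent correct. A minimal sketch of that compilation, assuming a hypothetical per-trial record format rather than the authors' actual data structures:

```python
from collections import defaultdict

def percent_correct(trials):
    """Compile percentage-correct scores per stimulus category, pooled over
    durations, mirroring the a_f_percenta-style variables in Table 2.
    trials: dicts with race ('a'/'c'), sex ('f'/'m'), emotion
    ('angry'/'happy'/'sad'), and correct (bool). Hypothetical format."""
    totals, hits = defaultdict(int), defaultdict(int)
    for t in trials:
        # e.g. race='a', sex='f', emotion='angry' -> 'a_f_percenta'
        key = f"{t['race']}_{t['sex']}_percent{t['emotion'][0]}"
        totals[key] += 1
        hits[key] += t["correct"]
    return {k: 100.0 * hits[k] / totals[k] for k in totals}
```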
An independent-samples t-test was carried out in order to determine if sex differences were present in the sample. Table 3 shows sex- and race-specific means and standard deviations. There were no sex differences in either trait or ability EI. Independent-samples t-tests were also carried out in order to determine if there were racial differences in trait and ability EI. The results indicate that Asian participants scored significantly higher on the TEMINT. However, given the reverse method of scoring on the TEMINT, this result indicates that the Caucasian participants had significantly higher ability EI. Trait EI did not show any significant racial differences.
4.2. Regression analysis
Multiple regression analysis was performed in three blocks in order to determine the significant predictors of the total percentage correct for each of the emotion IT tasks. In the first block, sex and age were entered as the predictor variables. In the second block, race was added as an independent variable. The third block saw the addition of trait and ability EI as independent variables.
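The three-block procedure amounts to testing the R² increment at each step. The sketch below shows how F-change values of the kind reported in Table 4 are obtained, using standard OLS formulas on synthetic data (a generic illustration, not the study data or the authors' software):

```python
import numpy as np

def r_squared(X, y):
    """OLS R^2, with an intercept column added to X."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    tss = ((y - y.mean()) ** 2).sum()
    return 1.0 - (resid @ resid) / tss

def f_change(r2_full, r2_reduced, n, k_full, k_added):
    """F statistic for the R^2 increment when k_added predictors enter
    a model that then has k_full predictors (n observations)."""
    return ((r2_full - r2_reduced) / k_added) / ((1.0 - r2_full) / (n - k_full - 1))

# Synthetic illustration: block 1 = sex, age; block 2 adds race.
rng = np.random.default_rng(0)
n = 87
sex, age, race = rng.normal(size=(3, n))
y = 0.5 * sex + 0.8 * race + rng.normal(size=n)
r2_b1 = r_squared(np.column_stack([sex, age]), y)
r2_b2 = r_squared(np.column_stack([sex, age, race]), y)
F = f_change(r2_b2, r2_b1, n, k_full=3, k_added=1)
```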
Table 3 (continued)
Race-specific means (SD) and t-test results for trait and ability EI.

            Caucasian        Asian           t       df   Sig.
Ability EI  40.73 (9.55)     48.87 (10.73)   −3.70   84   0.00
Trait EI    122.37 (12.82)   123.7 (12.5)    −0.49   85   0.63
Table 5
Analysis of covariance.

Source                             Type III sum of squares   df       Mean square   F      Sig.
face_exp                           0.15                      1.39     0.10          4.37   0.03
face_exp × Trait EI                0.03                      1.39     0.02          0.83   0.40
face_exp × Ability EI              0.00                      1.39     0.00          0.13   0.80
face_exp × race                    0.04                      1.39     0.03          1.10   0.32
Error(face_exp)                    2.74                      114.15   0.02
facerace                           0.05                      1.00     0.05          7.28   0.01
facerace × Trait EI                0.01                      1.00     0.01          1.41   0.24
facerace × Ability EI              0.00                      1.00     0.00          0.19   0.66
facerace × race                    0.03                      1.00     0.03          4.58   0.04
Error(facerace)                    0.51                      82.00    0.01
face_exp × facerace                0.02                      1.69     0.01          1.15   0.31
face_exp × facerace × Trait EI     0.01                      1.69     0.01          0.74   0.46
face_exp × facerace × Ability EI   0.03                      1.69     0.02          1.71   0.19
face_exp × facerace × race         0.03                      1.69     0.02          2.15   0.13
Error(face_exp × facerace)         1.28                      138.57   0.01
The only IT tasks to show any of the independent variables as significant predictors were the Caucasian female angry and sad faces. The Caucasian female angry total showed significant results for the first block, sex and age (p = .031), as well as the second block in which race was added as a predictor (p < .001). The Caucasian female sad total displayed significant results for the second block (p = .007). The full results can be seen in Table 4.

Overall, the results indicate that sex and race are the strongest predictors of correct responses on the emotional IT task. Further investigation of the standardised betas reveals that for the first block of the Caucasian female angry regression, sex showed a significant result (b = −.253, p = .019), but age did not. In the second block of the regression, sex maintained its significant beta, and race was a significant predictor as well (b = −.429, p < .001). For the second model, the Caucasian female sad total, an investigation of the betas showed that race was the only significant predictor (b = −.306, p < .001). However, none of the results demonstrate trait EI or ability EI as significant predictors.
Note (Table 5): face_exp = facial expression of the stimulus; facerace = race of the stimulus; race = race of the participants.
Table 6
Post hoc independent-sample t-tests examining differences by race and emotion of the stimuli.

                                 t       df      Sig.
Asian_female_percent_angry       0.31    85.00   0.76
Asian_female_percent_happy       0.86    85.00   0.39
Asian_female_percent_sad         1.52    85.00   0.13
Asian_male_percent_angry         0.53    85.00   0.60
Asian_male_percent_happy         1.62    80.79   0.11
Asian_male_percent_sad           0.27    85.00   0.79
Caucasian_female_percent_angry   4.51    80.58   0.00
Caucasian_female_percent_happy   −1.09   85.00   0.28
Caucasian_female_percent_sad     2.62    85.00   0.01
Caucasian_male_percent_angry     1.08    85.00   0.28
Caucasian_male_percent_happy     1.22    79.59   0.23
Caucasian_male_percent_sad       0.81    85.00   0.42
4.3. Analysis of covariance (ANCOVA)
A repeated-measures ANCOVA was carried out in order to determine if there was a significant difference between the total correct for each of the emotions and the race of the stimulus face, as well as to determine if there was a significant interaction between the race of the participant and the race of the stimuli. The within-subjects factors of the ANCOVA were the three emotions (happy, sad, and angry), the race of the face stimulus (Asian or Caucasian), and the sex of the face stimulus. The between-subjects factors were race and sex, with trait and ability EI as covariates.

The results of the ANCOVA revealed a significant main effect for the race of the face stimulus (F(1, 82) = 7.28, p < .01), indicating that participants responded differently to the races of the face stimulus. The results also show a significant interaction between the race of the face stimulus and the race of the participant (F(1, 82) = 4.58, p < .05), which shows that participants are better able to correctly identify faces of their own race. The main effect displayed for emotions indicates that the emotional facial expressions differed significantly from each other (F(1.39, 114.15) = 4.37, p < .05), displaying that some emotions were easier to correctly identify. However, there was not a significant interaction between the facial expression and the race of the participant, or between the facial expression and the race of the face stimuli. Interestingly, there was not a significant interaction between facial expression, race of the participants, and race of the facial stimulus. Neither trait nor ability EI showed significant effects as covariates. The full ANCOVA results can be seen in Table 5.
Post hoc independent-samples t-tests were carried out in order to further investigate the significant differences. The results of the t-tests indicate that Caucasians had significantly higher mean correct scores in identifying the Caucasian female angry and sad faces.
Table 4
Multiple regression analysis model summary for Caucasian female angry and sad.

            R²     Adj. R²   R² change   F change   df1   df2   Sig. F change
C_f_angry
1           0.08   0.06      0.08        3.63       2     83    0.03
2           0.25   0.22      0.17        18.05      1     82    0.00
3           0.29   0.24      0.04        2.23       2     80    0.12
C_f_sad
1           0.02   −0.01     0.02        0.69       2     83    0.50
2           0.10   0.07      0.08        7.70       1     82    0.01
3           0.12   0.06      0.02        0.71       2     80    0.50

Step 1: sex and age; Step 2: sex, age, and race; Step 3: sex, age, race, trait EI, ability EI.
However, as can be seen in Table 6, there were no significant differences between the races for any of the other stimuli.

Further post hoc analysis indicated that females were significantly better at identifying the Caucasian female angry faces (t(85) = 2.35, p < .05). None of the other stimuli showed significant sex differences. This result is in keeping with what was found in the regression analysis.

Overall, the results reveal race, both of the participant and of the stimulus, to be the biggest factor in correctly identifying the emotion of the target face. Surprisingly, neither trait nor ability EI were significant predictors of success on the inspection time task.
5. Discussion
Previous research has indicated that people are better able to recognise facial affect in another person of their own race, and more generally in in-group compared to out-group targets (Elfenbein & Ambady, 2002). More complex effects relating to differences in perception of and speed of response to individual emotions in in- and out-group members have also been identified (Beaupré & Hess, 2003; Dunham, 2011; Hugenberg, 2005). Since emotion perception is an important component of EI, it was hypothesised that high EI would be connected to better performance on an emotion recognition task for both own- and other-race targets, and that high EI would reduce or remove the in-group advantage in facial expression recognition.
The results of the present study indicate that there was a significant difference in the number of correct responses for the different races of the face stimuli, as well as an interaction between the race of the participant and target. This result was in accordance with previous cross-race effect research, which shows that people are better able to identify faces of their own race (Elfenbein & Ambady, 2002). The results also indicate that there was a significant difference in the proportion of correct responses for each of the emotions, as well as an interaction between the emotion and the race of the face stimulus. Post hoc analysis showed that there were significant racial differences in the percentage of correct responses for Caucasian female angry and sad faces, but percentage correct responses for the happy face stimuli were all above 96%, regardless of race. This is somewhat similar to the results of Hugenberg (2005), which showed that participants recognised happy faces faster than angry or sad faces, and is likely to be due to happiness being readily distinguished from sadness and anger.
Surprisingly, neither trait nor ability EI were significant predictors of the percentage of correct responses for any stimulus type, contradicting previous findings of an EI effect on emotional face inspection time performance (Austin, 2005), and there was no indication of a reduction of in-group bias (where found) related to EI. The results suggest that sex and ethnicity are the factors which determine how well an individual is able to identify facial affect in the inspection time task. Further examination of the effects of EI in cross-group emotion perception is indicated, given the general expectation that EI should be related to more effective processing of emotional information. This could involve examining different kinds of emotion perception task, including the use of vocal as well as picture stimuli. The use of stimuli with a greater variety and/or subtlety of emotions than in the present study, i.e. changes which would make the emotion identification task more challenging, could also be examined. Speed as well as accuracy in emotion identification could also be investigated by using a reaction time paradigm. This would be of interest since it is possible that the use of an inspection time paradigm with a small number of fixed durations of exposure to the stimuli may have obscured more complex effects in the time taken to process different stimuli.

Another area where the putative EI effects on cross-group emotion perception could be investigated would be using more ecologically valid tasks, for example emotional/social perception tasks employing video scenes involving in-group or out-group members. The use of such tasks would allow the examination of any effects of EI in situations similar to those encountered in real life, where information on emotional states from multiple channels (face, voice, gesture, etc.) has to be integrated rapidly.