1. Rewards, Detection and Dishonesty:
A Laboratory Experiment in India
Mehak Kaushik
University of Massachusetts, Amherst
Varsha Singh
IIT Delhi
Sujoy Chakravarty
Jawaharlal Nehru University
2. Theories of Crime
• Becker (1968): individuals cheat more if rewards are higher, the probability of detection is lower, and the penalty for cheating is low.
• SMORC (Simple Model of Rational Crime): a fully rational individual commits a crime as long as its expected marginal benefit exceeds its expected marginal cost.
• Moral balance or self-concept maintenance theories (Nisan, 1991; Mazar et al., 2008)
• Individuals cheat only up to the point at which cheating would lower their self-concept. This threshold is individual-specific, so each person has a potentially different “fudge” factor.
• These latter theories add internal, individual-specific moral curbs (such as shame and remorse), whereas the rational theories attempt to generalize the behaviour of representative agents purely using external material benchmarks.
3. Our study
• In this study [similarities with real-effort studies such as Mazar et al. (2008), Friesen
and Gangadharan (2010), Yaniv and Siniver (2016)] we focus on two fundamental
variables that are theorized to affect dishonest behaviour: financial reward and
likelihood of detection.
• Of these, most studies on dishonesty (including real-effort, die-roll-reporting and coin-flip designs) find that increasing the probability of detection lowers cheating.
• However, Mazar et al. (2008) find that increasing the piece-rate reward from $0.50 to $2.00 actually lowers the amount of cheating, though the effect is statistically insignificant.
• We compare effects of reward (piece-rate payment vs. no piece-rate payment) and
probability of detection (shred vs. no shred), and hypothesize a difference in their
joint effects on deception.
4. Experiment: Basic Procedure
• Participants fill in a questionnaire that records demographic information and other individual attributes.
• They are then given ten puzzle tasks that test their spatial
logic skills. Why these types of problems?
• They are given five minutes to solve these problems.
• When five minutes are over, members of one group
shred the puzzle cum answer sheets and self-report in
private how many puzzles they solved out of ten.
• They then submit their self-report forms to get paid by an
experimenter in private and exit the venue.
• Members of the other group also self-report in private the number they solved, and then submit their puzzle sheets (which are not scrutinized) along with their self-report forms to an experimenter.
• They then get paid in private before leaving the venue.
5. Experiment: Design of treatments
NP/NS: Subjects get the show-up fee. They do not earn for problems they self-
report to have solved, and their puzzle sheet is submitted to the experimenter
NP/S: Subjects get the show-up fee. They do not earn for problems they self-report
to have solved, and their puzzle sheet is shredded in public
P/NS: Subjects get the show-up fee. They earn Rs. 50 for every problem they self-
report to have solved, and their puzzle sheet is submitted to the experimenter
P/S: Subjects get the show-up fee. They earn Rs. 50 for every problem they self-
report to have solved, and their puzzle sheet is shredded in public
Treatment Condition            | No. of participants | Show-up (flat) fee | Piece-rate payment         | Shred answer sheet
No Piece-rate/No Shred (NP/NS) | 60                  | Rs. 250            | None                       | No
No Piece-rate/Shred (NP/S)     | 82                  | Rs. 250            | None                       | Yes
Piece-rate/No Shred (P/NS)     | 60                  | Rs. 250            | Rs. 50 per reported solved | No
Piece-rate/Shred (P/S)         | 82                  | Rs. 250            | Rs. 50 per reported solved | Yes
Table 1: Number of observations for four treatment groups
6. Subject pool
• The experiment is conducted using 284 undergraduate students from
ten institutions in New Delhi.
• The subjects in our sample are from humanities, social sciences and
commerce. The average age of participants is 19.5 years and 55
percent are women.
• In each college, the participants are randomly allocated between the
P and NP conditions, in two sessions of 15 participants each.
• For the P condition participants, four sessions are randomly assigned
to the no-shred (NS) condition.
• The same rule is followed for the NP sessions where four are
randomly designated to be NS, giving us a total of 120 participants
who have to turn in their puzzle sheets after they self-report their
score.
• Sessions at an institution are conducted simultaneously in spatially separated classrooms, to prevent participants in one session from informing those in the other about the specifics of the conditions.
• The cohort size was 30 at nine institutions. At the tenth, because some subjects did not show up, we ran the experiment with 14 participants (7 P and 7 NP).
7. Financial incentives
• Simultaneous sessions are conducted in each institution with participants
who receive (P)/don’t receive (NP) an additional piece rate payment.
• After the filled demographic questionnaires are collected, the P individuals
are informed of the task and an additional opportunity to earn Rs 50/- for
each puzzle they solve.
• In other words, their payoff function is 250+50*X, where X is the number of
puzzles that they solve. A participant can thus potentially earn Rs. 750/-
(maximum amount) if s/he reports to have solved all of the ten puzzles.
• The maximum payment of Rs. 750 is approximately equivalent to US$ 42 using the 2017 PPP exchange rate between India and the USA (US$ 1 = Rs. 17.81), as given in the World Bank International Comparison Program database, https://data.worldbank.org/indicator/pa.nus.ppp
• The stakes for fully rational cheating would be considered moderate to high in a developing country, given the age of the participants and the time of engagement.
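The piece-rate payoff rule above can be sketched as follows (a minimal illustration; the function and constant names are ours, not from the experiment's materials):

```python
# Payoff rule: flat show-up fee, plus Rs. 50 per puzzle reported solved
# for participants in the P condition; NP participants get only the fee.
SHOW_UP_FEE = 250   # Rs.
PIECE_RATE = 50     # Rs. per reported solved puzzle

def payoff(reported_solved: int, piece_rate_condition: bool) -> int:
    """Total payment in rupees for one participant."""
    if not 0 <= reported_solved <= 10:
        raise ValueError("reported_solved must lie between 0 and 10")
    bonus = PIECE_RATE * reported_solved if piece_rate_condition else 0
    return SHOW_UP_FEE + bonus

# A P participant reporting all ten puzzles earns the maximum of Rs. 750
assert payoff(10, piece_rate_condition=True) == 750
```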
8. Experiment: Detection of Dishonesty
• The puzzles are actually unsolvable, so an individual telling the truth should report zero solved, pick up the flat participation fee, and leave the experiment.
• Given that the time allowed is short and the tasks are complex, none of our subjects realize that the puzzles have no solution.
• A central objective of our research is to identify dishonesty at the individual level. It was therefore important not to disclose the unsolvable nature of the puzzles, since participants who discovered it would no longer reveal their preferences with respect to cheating.
• The only other studies that use unsolvable puzzles are Mazar et al. (2008) and Friesen and Gangadharan (2012).
9. Experiment: Other fun facts
• To our knowledge ours is the only study other than Charness et al. (2019) that
investigates if subjects are dishonest in the absence of incentives and when their
decision has no payoff externalities for other participants in the experiment.
• Friesen and Gangadharan’s (2012) experiment is very similar to the P/NS treatment
in our study.
• In our experiment, we can correlate dishonest behaviour with age, gender, a proxy
for ability, self-reported relative economic status and dissatisfaction with one’s
economic condition and use these individual attributes to more robustly test the
effects of our treatment variables.
• Our aspiration and wealth comparison variables are inspired by Ray (2006) and
Genicot and Ray (2017).
10. Experiment: Some precautions
• To minimize perceived experimenter demand effects that would skew
decision making towards social desirability (Zizzo, 2010), research
students (from the authors’ institutions) conduct the sessions and no
faculty member is involved in the data collection protocol.
• We wanted to create an environment where none of the participants
would feel that the decision to cheat would ever lead to any
reputational consequences for their academic careers or personal
lives.
• Moreover, these reputational factors are deemed to be important in
determining an agent’s preferences towards dishonesty as
documented by Hao and Houser (2017) and Yaniv and Siniver (2016).
• On the other hand we did not want in any way to encourage cheating,
which would end up framing dishonesty as completely acceptable and
giving us a manipulated measure of subject preferences.
11. Aggregate cheating
• Approximately 55 percent of the subject pool (155 out of 282 usable responses) lie about the number they solved, i.e., they report solving more than zero problems.
• The distribution of dishonesty in our study is skewed towards many individuals cheating in small
magnitudes rather than a few individuals cheating by large amounts. Of the dishonest responses,
approximately 76 percent of the participants report solving 1, 2 or 3 problems but only about 10
percent report solving five or more problems.
• Minor transgressions of mostly honest people?
• The average reported solved is 1.33 problems, which is different from zero at the 1 percent level (Wilcoxon test, two-tailed p-value = 0.0000).
• So on average over all our treatments, participants report a little over 1 problem more than they could
do (they should truthfully be reporting zero), which is in the range of what Mazar et al. (2008) find.
Figure 1: Distribution of cheating (frequency of each reported-solved value)
Reported solved: 0   1   2   3   4   5   6   7   8   9   10
Frequency:       127 59  36  23  21  9   5   1   0   1   0
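The aggregate Wilcoxon result above can be sketched by rebuilding the sample from the Figure 1 frequencies using scipy (our reconstruction, not the authors' code; a one-sample signed-rank test against the truthful report of zero):

```python
import numpy as np
from scipy.stats import wilcoxon

# Frequencies from Figure 1: participants reporting 0..10 puzzles solved
counts = [127, 59, 36, 23, 21, 9, 5, 1, 0, 1, 0]
reported = np.repeat(np.arange(11), counts)

print(len(reported))                     # 282 usable responses
print(round(float(reported.mean()), 2))  # average reported solved: 1.33

# H0: the median report equals zero (zero reports are dropped by default)
stat, p = wilcoxon(reported)
print(p < 0.01)                          # True: significant at 1 percent
```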
12. • Result 1: A little over half the participants
cheat by over-reporting the number of
problems they solve. About three quarters of
these individuals cheat by small amounts
while approximately 10 percent cheat by a
large amount.
13. Cheating by treatment
Figure 3: Percentage of individuals who cheated by treatment combination
NP/NS: 57.63% | NP/S: 62.96% | P/NS: 36.67% | P/S: 58.54%

Figure 4: Averages of reported solved by treatment combination
NP/NS: 1.58 | NP/S: 1.58 | P/NS: 0.67 | P/S: 1.39
14. CI Plots of Dishonesty Averages
Figure 4: Percentage of individuals who cheated by treatment combination
Figure 5: Averages of reported solved by treatment combination
16. Interesting asides
• Our findings are quite different from Charness et al. (2019) who obtain
that in the absence of piece-rate monetary incentives, individuals do not
display significant dishonesty.
• However, it is difficult to directly compare our results to theirs as the task
in Charness et al. (2019) is to self-report the roll of a ten-sided die.
• Though our study, which uses only one set of tasks, cannot verify this, we conjecture that individuals may be more willing to lie about intellectual or scholastic achievement or ability, as in our experiment (and those of Mazar et al., 2008; Friesen and Gangadharan, 2012; Yaniv and Siniver, 2016), even in the absence of incremental financial rewards.
• In a sense, they are indulging in mostly harmless resume padding that carries no financial implications. In contrast, when the rewards are purely material (lying to misreport a random die roll), individuals appear not to be intrinsically motivated to lie when no actual reward comes from it.
17. Interesting asides (2)
• We speculate that the small-magnitude cheating seen in all treatment conditions (including those with potential post-facto verification) may arise because lying to inflate one’s performance by a small amount is felt to be normatively acceptable for test taking in a country such as India, where academic dishonesty is widespread and reported by the media.
• Particularly interesting are the P/NS and NP/NS treatments, where 36% and 58% of individuals cheat; the former is especially striking because subjects lie for piece-rate incentives even though their (mis)deed can be ex-post verified.
• Following Banerjee (2016), who also conducts his experiments in India, experimental
subjects may feel that it is normatively acceptable to lie albeit by a small amount when
participating in a task that resembles an academic “test.”
• Recent work that links emotions to deception suggests that deception is contingent on
disinhibition and it enables pursuit of self-interest (Yip and Schweitzer, 2016).
• We conjecture that perhaps among the middle and upper classes of a developing nation
such as India, a positive self-view is not inconsistent with one’s ability to “outsmart”
others, i.e. - get away with cheating, especially if it has low financial consequences.
• For NP/NS treatments, a real world parallel could be low-financial stake, semi-verifiable
resume padding that individuals use to get social favours and employment, such as playing
up their association with famous individuals, over-hyping minor achievements or mildly
overstating their qualifications in resumes and public contexts.
18. Treatment effects
Contrast (2 group) | Average cheating  | No. of observations | Wilcoxon (z) stat | p-value
NP vs. P           | NP: 1.58, P: 1.08 | NP: 140, P: 142     | 2.419**           | 0.0156
NS vs. S           | NS: 1.12, S: 1.48 | NS: 119, S: 163     | -2.283**          | 0.0224
‘*’, ‘**’, ‘***’ significant at the 10, 5 and 1 percent levels respectively
Table 2: Group comparisons between high and low groups for detection and piece-rate payment
19. Treatment combinations
Contrast (4 group) | Proportion who cheated | Wald (F) stat | Wald p-value | Dunn (χ2) stat | Dunn p-value
(1) NP/NS vs. NP/S | 0.58 vs. 0.63          | 0.40          | 0.5263       | -0.63          | 0.2658
(2) NP/NS vs. P/NS | 0.58 vs. 0.37          | 5.41          | 0.0207**     | 2.29           | 0.0109**
(3) NP/NS vs. P/S  | 0.58 vs. 0.59          | 0.01          | 0.9137       | -0.11          | 0.4574
(4) NP/S vs. P/NS  | 0.63 vs. 0.37          | 9.87          | 0.0019***    | 3.10           | 0.0010***
(5) NP/S vs. P/S   | 0.63 vs. 0.59          | 0.33          | 0.5658       | 0.57           | 0.2854
(6) P/NS vs. P/S   | 0.37 vs. 0.59          | 6.86          | 0.0093***    | -2.58          | 0.0049***
‘*’, ‘**’, ‘***’ significant at the 10, 5 and 1 percent levels respectively
Table 3: Comparisons between our four treatment conditions for the proportion who cheated
Contrast (4 group)  | Average cheating | Wald (F) stat | Wald p-value | Dunn (χ2) stat | Dunn p-value
(1’) NP/NS vs. NP/S | 1.576 vs. 1.580  | 0             | 0.9887       | -0.36          | 0.3579
(2’) NP/NS vs. P/NS | 1.576 vs. 0.67   | 9.16          | 0.0027***    | 2.9            | 0.0019***
(3’) NP/NS vs. P/S  | 1.576 vs. 1.39   | 0.44          | 0.5068       | 0.27           | 0.3908
(4’) NP/S vs. P/NS  | 1.58 vs. 0.67    | 10.71         | 0.0012***    | 3.49           | 0.0002***
(5’) NP/S vs. P/S   | 1.58 vs. 1.39    | 0.55          | 0.46         | 0.7            | 0.242
(6’) P/NS vs. P/S   | 0.67 vs. 1.39    | 6.75          | 0.0099***    | -2.85          | 0.0022***
‘*’, ‘**’, ‘***’ significant at the 10, 5 and 1 percent levels respectively
Table 4: Comparisons between our four treatment conditions for average cheating
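A pairwise comparison like row (2) of Table 3 can be approximated with a chi-square test on the 2×2 cheat/no-cheat table. The counts below are reconstructed from the reported proportions and group sizes, so they are illustrative rather than the study's raw data:

```python
from scipy.stats import chi2_contingency

# Reconstructed cheater counts (assumption): ~58% of 59 usable NP/NS
# responses and ~37% of 60 P/NS responses reported solving > 0 puzzles.
npns = [34, 59 - 34]   # [cheated, honest] in NP/NS
pns = [22, 60 - 22]    # [cheated, honest] in P/NS

# Chi-square test on the 2x2 table plays the same role here as the
# Wald test reported in the slide
chi2, p, dof, expected = chi2_contingency([npns, pns], correction=False)
print(p < 0.05)   # True: the gap is significant, as in Table 3 row (2)
```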
20. Adding in individual controls
Variable | Description | Average | Min | Max
Dishonesty [D, dependent] | 1 if the subject reported solving > 0, 0 otherwise | 0.54 | 0 | 1
Magnitude of Dishonesty [MagD, dependent] | Number of puzzles the subject reported to have solved | 1.30 | 0 | 9
Age | Age in years | 19.50 | 17 | 25
Female | 1 if female, 0 otherwise | 0.55 | 0 | 1
Piece-Rate | 1 if the subject is paid piece rate, 0 otherwise | 0.50 | 0 | 1
Shred | 1 if the subject is allowed to shred the problem sheet, 0 otherwise | 0.73 | 0 | 1
Marks12 | Percentage marks secured in the XIIth grade school board examination | 86.84 | 57.40 | 97.25
Economic Satisfaction | Satisfaction with the family's current economic situation (scale: 1 = not at all, 2 = not entirely, 3 = yes) | 2.32 | 1 | 3
Peer Comparison | Own economic condition compared to peers (scale: 1 = worse, 2 = same, 3 = better) | 2.23 | 1 | 3
Table 5: Variables used in regression analysis
23. Regression approach
• We regress the magnitude of dishonesty on our treatment variables and other controls:
MagD = β′X + u   (i)
• We use a hurdle regression approach, as employed to analyze dictator games in Engel (2011) and Banerjee and Chakravarty (2014).
• It splits the decision to cheat into a binary process (cheat / don’t cheat), followed by a decision about the magnitude of cheating, conditional on deciding to cheat.
• First stage: binary logit. Second stage: conditional OLS.
p(D = 1) = e^(β′X) / (1 + e^(β′X))   (ii)
MagD = β′X + u, if MagD > 0   (iii)
25. • Result 2: Increasing piece-rate incentives lowers both the likelihood and the magnitude of cheating, whereas reducing the probability of detection increases the magnitude of cheating only for the group paid piece-rate incentives.
• Result 3: Feeling wealthier than one’s peers
increases one’s likelihood of cheating, whereas
feeling more satisfied with one’s current
economic state lowers the magnitude of
cheating.
27. College Random Effects
Independent variables | (1) OLS, Dep: MagD | (2) Binary Logit [coefficients], Dep: D = 0/1 | (3) Conditional OLS, Dep: MagD given D = 1
Age                    | 0.02 (0.097)    | -0.0003 (0.12)  | 0.09 (0.10)
Female                 | 0.20 (0.21)     | 0.46 (0.28)     | -0.13 (0.17)
Piece Rate             | -0.97*** (0.12) | -0.92*** (0.40) | -1.23*** (0.19)
Shred                  | 0.09 (0.26)     | 0.07 (0.48)     | 0.21 (0.33)
Piece-Rate*Shred       | 0.92*** (0.25)  | 0.83 (0.53)     | 1.03** (0.50)
Marks12                | -0.04*** (0.01) | 0.0005 (0.026)  | -0.07*** (0.009)
Eco. Satisfaction      | -0.37*** (0.13) | -0.15 (0.21)    | -0.51*** (0.185)
Peer Comparison        | 0.55*** (0.20)  | 0.52** (0.21)   | 0.31** (0.15)
Constant               | 4.27 (2.72)     | -0.67 (3.8)     | 7.52*** (2.48)
Number of observations | 264             | 266             | 144
Pseudo R2 / R2         |                 | 0.14            | 0.224
Notes: Standard errors in parentheses (robust s.e. adjusted in 9 clusters for OLS)
‘*’, ‘**’, ‘***’ significant at the 10, 5 and 1 percent levels respectively
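The clustered-standard-error note under the table can be sketched as follows; statsmodels supports cluster-robust covariance directly (simulated data and illustrative variable names, not the study's dataset):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 264
df = pd.DataFrame({
    "mag_d": np.clip(rng.poisson(1.3, n), 0, 10),
    "piece_rate": rng.integers(0, 2, n),
    "shred": rng.integers(0, 2, n),
    "college": rng.integers(0, 9, n),   # 9 institution clusters
})

# OLS with standard errors clustered at the college level, mirroring
# the "robust s.e. adjusted in 9 clusters" note under the table
res = smf.ols("mag_d ~ piece_rate * shred", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["college"]})
print(res.bse)   # cluster-robust standard errors
```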
28. Further study
• We observe that individuals lie significantly in our problem-solving
experiment when they are not paid piece-rate rewards but not in the die-roll
experiments of Charness et al. (2019).
• This indicates that the task context is important in predicting cheating
independent of rewards.
• Future research could explore this in a systematic manner with different
cheating contexts.
• Second, the fact that a few individuals in our sample lie about their performance to gain financial rewards even when this can be post-facto verified may indicate that, to some extent, social norms regarding what constitutes an acceptable level of dishonesty are at play.
• Future studies on dishonesty should conduct belief-elicitation exercises to study the relationship between dishonest action and prevailing social norms regarding morally appropriate behaviour.