How Did I Miss That Bug?
	Overcome Cognitive Bias in Testing
with Gerie Owen
#howdidimissthatbug
XBOSoft
Dedicated to Software Quality Improvement
Founded in 2006
We speed products to market with our expert:
•  Software QA consulting
•  Software testing
Global team with offices in San Francisco & Beijing
“Thorough, accurate and fast”
House Rules
§  Participants other than the speakers are muted
§  Ask questions in the GoToWebinar control on the
right side of your screen or through Twitter
@XBOSoft
§  Questions may be asked throughout the webinar -
we’ll try to answer them at the end
§  You’ll receive info on recording after the webinar
Webinar Hashtag: #howdidimissthatbug
Meet Our Speakers
Philip Lew
CEO and Founder, XBOSoft
• Relevant specialties and
passions
o  Software quality process,
evaluation, measurement
and improvement
o  Software quality in use / UX
design
o  Mobile User Experience and
usability
o  Cycling and travel
Gerie Owen
•  Test Architect
•  Test Lead and Tester, and as such an experienced bug finder and bug misser
•  Subject expert on testing for
TechTarget’s
SearchSoftwareQuality.com
•  International and Domestic
Conference Presenter
•  Marathon Runner & Running
Coach
•  Cat Mom
www.gerieowen.com
gerie.owen@gerieowen.com
Presentation Agenda
•  Why are we talking about missed bugs?
•  What is a missed bug?
•  How do we miss bugs?
•  How do we think?
–  System 1 and System 2 thinking
–  Cognitive Biases Affecting Testing
•  How does this apply to missing bugs?
–  Managing cognitive bias for:
•  Testers
•  Test Leads and Managers
•  Our Profession
Why are we talking about missing bugs?
•  Have you ever missed a bug?
•  Have you ever been asked how you missed a bug?
•  Have you ever wondered how you missed a bug?
Consequences of Missed Bugs
•  Possible consequences of missed bugs:
– Negative Publicity
– Lost Sales
– Lost Customers
– Even Loss of Life

MISSED BUGS CAUSE MAYHEM
My Journey
The "HOW" is more important than the "WHY"

And now, I invite you to join me on the journey of

How Did I Miss That Bug?
How Do We Miss Bugs?
•  Missed test cases
•  Misunderstanding of requirements
•  Misjudgment in risk-based testing
•  Inattention
•  Fatigue
•  Burnout
•  Multi-tasking
How Do We Test?
•  What is Software Testing?
– Software testing is making judgments about the quality of the software under test
– Involves:
•  Objective comparisons of code to specifications,
•  AND
•  Subjective assessments regarding usability, functionality, etc. (see the sketch below)
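As a minimal illustration of that split (a sketch in Python with pytest; the `calculate_discount` function and its discount rule are hypothetical, invented for this example, not from the webinar):

    # Hypothetical function under test, invented for illustration.
    def calculate_discount(total, code):
        return total * 0.9 if code == "SAVE10" else total

    # Objective judgment: the (assumed) spec gives one right answer,
    # so the check is a direct comparison of actual to expected.
    def test_discount_matches_spec():
        assert calculate_discount(100.0, "SAVE10") == 90.0

    # The subjective half -- "is the discount flow usable?" -- has no
    # assert; it is a judgment the tester records, and it is exactly
    # where cognitive bias creeps in.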
What IS a Missed Bug?

An Error in Judgment!

To determine how testers miss bugs, we need to understand how humans make judgments, especially in complex situations.
HOW DO WE THINK?
How do we make judgments?
•  Thinking, Fast and Slow – Daniel Kahneman

– System 1 thinking – fast, intuitive, and sometimes wrong
– System 2 thinking – slower, more deliberate, more accurate
System 1 vs. System 2 Thinking
•  System 1 thinking keeps us functioning
–  Fast decisions, usually right enough
–  Gullible and biased

•  System 2 makes deliberate, thoughtful decisions
–  It is in charge of doubting and unbelieving
–  But it is often lazy
–  Difficult to engage
How Do We Apply System 1 and System 2 Thinking?
•  System 1 thinking:
–  Is applied in our initial reactions to situations.
–  May employ heuristics, or rules of thumb.
•  System 2 thinking:
–  Is applied when we analyze a problem, for example when calculating the answer to a math problem.
•  System 1 and System 2 can be in conflict:
–  This conflict can lead to biases in decision-making.
How do Biases Impact Testing?
•  We maintain certain beliefs in testing practice
– Which may or may not be factually true
– Those biases can affect our testing results
– We may be predisposed to believe something that affects our work and our conclusions

•  How do bias and error work?
– We may test the wrong things
– Not find errors, or find false errors
The Representative Bias
•  Happens when we judge the likelihood of an occurrence in a particular situation by how closely the situation resembles similar situations.

•  Testers may be influenced by this bias when designing data matrices, perhaps not testing data in all states or not testing enough types of data (see the sketch below).
•  Case Study: the "ability to print more than once" bug
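A hedged sketch of what pushing past this bias can look like in a data matrix (Python with pytest; `normalize_name` is a hypothetical function invented for this example): the first rows are the representative data we naturally reach for, and the later rows are the states and types the bias tends to skip.

    import pytest

    def normalize_name(name):
        # Hypothetical function under test: trims and title-cases a name.
        return name.strip().title()

    @pytest.mark.parametrize("raw,expected", [
        ("alice", "Alice"),       # representative case we naturally pick
        ("  bob  ", "Bob"),       # surrounding whitespace
        ("", ""),                 # empty value: a state often skipped
        ("o'brien", "O'Brien"),   # punctuation inside the value
        ("éloïse", "Éloïse"),     # non-ASCII data type
    ])
    def test_covers_unrepresentative_states(raw, expected):
        assert normalize_name(raw) == expected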
The Curse of Knowledge
•  Happens when we are so knowledgeable about something that our ability to address it from a less informed, more neutral perspective is diminished.

•  Testers develop so much domain knowledge that they fail to test from the perspective of a new user; usability bugs are often missed due to this bias.
•  Case Study: the Date of Death bug
The Congruence Bias
•  The tendency of experimenters to plan and execute tests on just their own hypotheses without considering alternative hypotheses.

•  This bias is often the root cause of missed negative test cases. Testers write test cases to validate that the functionality works according to the specifications and neglect to validate that the functionality doesn't work in ways that it should not (see the sketch below).
•  Case Study: your negative test case or boundary miss
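One way to make the missed negative cases concrete (a sketch in Python with pytest; `withdraw` and its rules are hypothetical, invented for this example): the first test is the hypothesis we set out to confirm, and the second holds the alternative hypotheses the congruence bias skips.

    import pytest

    def withdraw(balance, amount):
        # Hypothetical function under test, invented for illustration.
        if amount <= 0:
            raise ValueError("amount must be positive")
        if amount > balance:
            raise ValueError("insufficient funds")
        return balance - amount

    def test_withdraw_works_as_specified():
        # Our own hypothesis: valid withdrawals succeed.
        assert withdraw(100, 40) == 60

    def test_withdraw_rejects_what_it_should():
        # The alternative hypotheses: negative and boundary cases.
        with pytest.raises(ValueError):
            withdraw(100, 150)   # over the balance
        with pytest.raises(ValueError):
            withdraw(100, 0)     # boundary: zero
        with pytest.raises(ValueError):
            withdraw(100, -5)    # negative amount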
The Confirmation Bias
•  The tendency to search for and interpret information in a way that confirms one's initial perceptions.

•  Testers' initial perceptions of the quality of the code, the quality of the requirements, and the capabilities of developers can impact the ways in which they test.
•  Case Study: Whose code do you test the most thoroughly?
The Anchoring Effect
•  The tendency to become locked on and rely too heavily on one piece of information, and therefore exclude other ideas or evidence that contradicts the initial information.

•  Software testers do this often when they validate code to specifications exclusively, without considering ambiguities or errors in the requirements.
Inattentional Blindness
•  Chabris and Simons conducted experiments on how focusing on one thing makes us blind to others
– The invisible gorilla on the basketball court
– Images on a lung X-ray
Inattentional Blindness
•  Psychological lack of attention

•  The tendency to miss obvious inconsistencies when focusing specifically on a particular task.

•  This happens in software testing when testers miss the blatantly obvious bugs.
Why Do We Develop Biases?
•  The Blind Spot Bias
–  We evaluate our own decision-making process differently than we evaluate how others make decisions.
•  West, Meserve and Stanovich
HOW DO WE FIND MORE BUGS?
How Does This Apply to Missing Bugs?
•  We must manage the way we think throughout the test process.

– As individual testers
– As test managers
– As a professional community
How Can Testers Manage Their Thought Processes?
•  Use more System 1 thinking?
OR
•  Use more System 2 thinking?
Test Methodology and System 2 Thinking
•  Test methodology is the analytical framework of testing; it invokes our System 2 thinking and places the tester under cognitive load.

•  The determination of whether the actual results match the expected results becomes an objective assessment.
How Do We Find Bugs?
Focus on System 1 thinking, intuition and emotion
Focus On System 1 Thinking
•  Heuristics used with oracles
•  Recognize our emotions as indicators of potential bugs
•  Exploratory Testing
The Power of Exploratory Testing
•  Exploratory testing is simultaneous learning, test design, and test execution
•  Exploratory testers often use tools
– to record the exploratory session
– to generate situations of interest
The Characteristics of Exploratory Testing
•  Planned
•  Learning experience
•  Discovery process
•  Different for each application
How Should We Use Exploratory Testing?
•  Unstructured
– Before beginning test case execution
•  Minimizes preconceived notions about the application under test
– Oracle-based
•  Users' perspectives
•  Data flow
How Should We Use Exploratory Testing?
•  Structured
– Use to create additional test cases
•  May be done earlier, possibly as modules are developed
– Session-Based
•  Time-boxed charters (see the sketch below)
•  Multiple testers
•  Post-test review session
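One lightweight way to make a time-boxed session concrete (a Python sketch; every name and field here is illustrative, not a tool or format the talk prescribes):

    from dataclasses import dataclass, field

    @dataclass
    class SessionCharter:
        # A time-boxed exploratory session record (illustrative fields only).
        charter: str           # the mission: explore what, looking for what
        time_box_minutes: int  # e.g. a 60-120 minute session
        tester: str
        notes: list = field(default_factory=list)  # feeds the post-test review
        bugs: list = field(default_factory=list)

    session = SessionCharter(
        charter="Explore invoice printing with repeated print requests, "
                "looking for duplicate-output bugs",
        time_box_minutes=90,
        tester="tester-1",
    )
    session.notes.append("Print button stays enabled after first print")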
Planning
•  What are situations of interest?
–  Usual tasks performed by the user
–  Things that a user might do
–  Things covered by the charter
•  Document the plan
–  It's not ad hoc testing
•  Deviate from the plan as necessary
Learning Experience
•  What can I learn about the software?
–  What all the buttons and forms do
–  How it wants the user to work
–  Strengths and weaknesses
Discovery
•  What are situations of interest?
–  Usual tasks performed by the user
–  Things a user might do
•  Does it behave as expected?
–  If not, let's explore
–  And document
What Can Test Managers Do?
•  Foster an environment in which testers feel comfortable and empowered to use System 1 thinking.

–  Plan for exploratory testing in the test schedule
–  Encourage testers to take risks
–  Reward the quality of bugs found rather than the quantity of test cases executed
What Can the QA Profession Do?
A Paradigm Shift
–  Shift our focus from requirements-coverage-based test execution to a more intuitive approach
–  Make exploratory testing and business process flow testing the norm rather than the exception
–  Develop new testing frameworks in which risk-based testing is executed through targeted exploratory testing and balanced with scripted testing
–  Our purpose should be providing information rather than just finding bugs
Question Test Results
•  Is there any reason to suspect we are evaluating our test results based on self-interest, overconfidence, or attachment to past experiences?
•  Have we fallen in love with our test results?
•  Were there any differences of opinion among the team reviewing the test results?
How Do We Find Bugs?
•  Focus less
•  Use intuition
•  Believe what we can't believe
References
•  Ariely, D. (2009). Predictably Irrational: The Hidden Forces That Shape Our Decisions. New York: HarperCollins.
•  Chabris, C., & Simons, D. (2010). The Invisible Gorilla: How Our Intuitions Deceive Us. New York: Crown Publishers.
•  Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
•  Kahneman, D. (2003). Maps of Bounded Rationality: Psychology for Behavioral Economics. The American Economic Review, 93(5), 1449-1475.
•  Gottlieb, D. A., Weiss, T., & Chapman, G. B. (2007). The Format in Which Uncertainty Information Is Presented Affects Decision Biases. Psychological Science, 18(3), 240-246.
•  Levav, J., & Fitzsimons, G. J. (2006). When Questions Change Behavior: The Role of Ease of Representation. Psychological Science, 17(3), 207-213.
•  Ariely, D., & Norton, M. I. (2008). How actions create—not just reveal—preferences. Trends in Cognitive Sciences, 12(1), 13-16.
•  Johansson, P., Hall, L., Sikström, S., & Olsson, A. (2005). Failure to Detect Mismatches Between Intention and Outcome in a Simple Decision Task. Science, 310(5745), 116-119.
•  Lehrer, J. (2012, June 12). Why Smart People Are Stupid. The New Yorker.
•  West, R. F., Meserve, R. J., & Stanovich, K. E. (2012). Cognitive sophistication does not attenuate the bias blind spot. Journal of Personality and Social Psychology, 103(3), 506-519.
•  Milkman, K. L., Chugh, D., & Bazerman, M. H. (2008). How Can Decision Making Be Improved? Harvard Business School.
Post your webinar questions on Twitter @XBOSoft
Registrants will receive an email with information on where to view the recording
and slides from today’s webinar.
Join us to keep updated on all our webinars, reports and white papers:
facebook.com/xbosoft
+xbosoft
linkedin.com/company/xbosoft
Check out our blog: http://xbosoft.com/software-quality-blog/
Download our free white papers: http://xbosoft.com/resources/
Email us with ideas for future webinars or questions regarding our services!
services@xbosoft.com
Thank you!
Q+A
