User Experience of Machine Learning
Leslie	A.	McFarlin	
Sr.	UX	Architect,	EDP	Value	Stream
Introduction
•  The	EDP	UX	Team	conducted	a	literature	review	of	Machine	
Learning	(ML)	research.	
•  Literature	reviews	provide	a	history	of	a	topic,	including	struggles	and	
opportunities	for	use.		
•  Goal:	Be	prepared	to	design	and	deliver	an	outstanding	
user	experience	around	applications	incorporating	machine	
learning.	
•  Leverage	existing	guidelines.	
•  Create	our	own	guidelines	where	appropriate.	
•  Enable	CA	to	become	leaders	in	crafting	quality	ML	user	experiences.
Introduction
•  Two	topics	emerged	from	the	literature	review:	
USER	ISSUES	
Watch	list	for	users	interacting	with	Machine	Learning	applications.	
DESIGN	ISSUES	
Considerations	when	designing	Machine	Learning	applications.
Introduction
•  Three	opportunities	for	user	research	emerged.	
END	USER	AWARENESS	OF	MACHINE	LEARNING	
What	users	know	about	Machine	Learning,	including	how	it	is	used	and	what	
expectations	they	might	have	for	it.	
APPROPRIATE	LEVELS	OF	TRANSPARENCY	
What	information	to	surface	about	a	Machine	Learning	application	and	when.	
ASSESSING	TRUST	IN	ML	PRODUCTS	
Developing	or	incorporating	metrics	to	monitor	trust	in	our	Machine	Learning	
products.
Introduction to Machine Learning
Understanding	What	Is	Machine	Learning
What is Machine Learning?
•  An algorithm enabling computers to mimic tasks typically performed by humans.
	
•  ‘Learning’	in	machine	learning	refers	to	the	
algorithm’s	ability	to	improve	performance	in	its	
designated	task	over	time.	
•  As	algorithms	improve	performance,	people	can	
step	away	to	focus	on	other	issues.
About Algorithms
•  Rules	describing	a	sequence	of	steps	meant	to	resolve	
a	class	of	problems.	
	
•  Perform	calculation,	data	processing,	and	automated	
reasoning.	
•  Machine Learning algorithms fall into one of three categories:
•  Supervised	learning	
•  Unsupervised	learning	
•  Reinforcement	learning
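The idea of an algorithm as a reusable sequence of steps can be made concrete with a small non-ML sketch; the function and data below are invented for illustration only:

```python
# A non-ML example of an algorithm: a fixed sequence of steps that
# resolves a whole class of problems (here, "find the largest number
# in any list"), not just one specific instance.

def largest(numbers):
    best = numbers[0]
    for n in numbers[1:]:      # step through the sequence once
        if n > best:           # keep the biggest value seen so far
            best = n
    return best

print(largest([3, 41, 7, 19]))  # → 41
```

The same steps work for any list of numbers, which is what makes it an algorithm rather than a one-off calculation.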
Supervised Learning
•  Using	a	set	of	predictor	variables,	the	algorithm	
produces	a	prediction.	
•  Supervised	learning	algorithms	receive	training	
until	they	reach	an	acceptable	level	of	accuracy.	
•  Example: Identifying whether objects in pictures are human.
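The train-then-predict loop described above can be sketched in a few lines of plain Python. This is a minimal 1-nearest-neighbor classifier with invented toy data, not an implementation drawn from the literature reviewed:

```python
# Minimal supervised learning sketch: 1-nearest-neighbor classification.
# Training data pairs each input (predictor variables) with a known label.

def train(examples):
    """'Training' for nearest-neighbor is simply storing labeled examples."""
    return list(examples)

def predict(model, point):
    """Predict the label of the closest stored training example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(model, key=lambda ex: dist(ex[0], point))
    return nearest[1]

# Invented toy data: two features per input, labels "human" / "not human".
training = [((0.9, 0.8), "human"), ((0.85, 0.9), "human"),
            ((0.1, 0.2), "not human"), ((0.2, 0.1), "not human")]

model = train(training)
print(predict(model, (0.95, 0.85)))  # → human
```

Accuracy of such a model would be checked against held-out labeled examples, which is the "training until an acceptable level of accuracy" step above.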
Unsupervised Learning
•  Inputs are grouped into categories based upon their characteristics.
•  Accuracy	is	not	evaluated	as	in	supervised	learning.	
•  Occasionally	used	in	conjunction	with	supervised	
learning	algorithms.	
•  Example:	Detecting	anomalous	behavior	in	a	
system.
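As an illustrative sketch of grouping without labels, the snippet below flags anomalies purely from the data's own statistics; the threshold, data, and function names are invented for illustration:

```python
# Minimal unsupervised learning sketch: flag anomalies with no labeled data.
# Points are grouped as "normal" or "anomalous" from their own statistics;
# no ground-truth labels or accuracy measure are involved.

def mean(values):
    return sum(values) / len(values)

def group_by_deviation(values, threshold=2.0):
    """Group each value by its distance from the mean, in std deviations."""
    mu = mean(values)
    std = mean([(v - mu) ** 2 for v in values]) ** 0.5
    return ["anomalous" if abs(v - mu) > threshold * std else "normal"
            for v in values]

# Invented toy data: response times with one obvious outlier.
times = [101, 99, 102, 98, 100, 250]
print(group_by_deviation(times))
```

Note that nothing told the code which point was anomalous; the grouping emerged from the inputs' characteristics alone.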
Reinforcement Learning
•  Features	in	an	environment	are	used	to	make	
decisions.	
•  Past	experience	and	partial	feedback	help	the	
algorithm	continually	improve	the	quality	of	its	
decision	making.	
•  Example:	Automated	control	over	a	system	
(videogame	AI,	for	example).
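A minimal sketch of improving decisions from reward feedback, assuming a deterministic two-armed bandit; the environment, reward values, and exploration schedule are invented for illustration:

```python
# Minimal reinforcement learning sketch: an agent improves its action choice
# from reward feedback alone (a two-armed bandit with fixed rewards).

def run_bandit(rewards, episodes=100, learning_rate=0.1):
    """Estimate each action's value from experience; act greedily."""
    values = [0.0 for _ in rewards]
    for step in range(episodes):
        # Explore every action early, then exploit the best estimate.
        action = step % len(rewards) if step < 10 else values.index(max(values))
        reward = rewards[action]  # partial feedback: only this action's reward
        values[action] += learning_rate * (reward - values[action])
    return values

# Action 1 pays more, so its estimated value should end up higher.
estimates = run_bandit(rewards=[1.0, 5.0])
print(estimates)
```

The agent's value estimates improve continually from past experience, which is the core loop the bullets above describe.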
Algorithms and User Experience
•  Some	algorithms	execute	faster	than	others	based	
upon	their	structure	and	complexity.	
•  Factors	to	consider	from	a	UX	perspective:	
•  Amount	of	input.	
•  Type	of	input,	the	interactions	needed	to	provide	it,	and	
when	the	input	is	given.	
•  Detail	of	output.
User Issues in Machine
Learning
Emerging	Topics
General ML User Personas
•  Consumers,	users	who	need	to	interpret	ML	outputs.	
•  Susceptible	to	cognitive	biases	and	peripheral	processing	of	ML	outputs.	
•  Makers,	users	who	integrate	ML	into	problem-solving	strategies.	
•  Receive	opinions	from	consumers	and	experts,	and	must	filter	what	is	
a	genuine	concern	and	what	is	not.	
•  Work	to	find	new	opportunities.	
•  ML	Experts,	users	with	expertise	allowing	them	to	innovate	in	
ML.	
•  Design	choices	impact	which	cognitive	biases	manifest	among	consumers.	
Common User Issues
•  Interacting	with	ML	applications	becomes	complicated	due	
to	an	interplay	of	technology	literacy,	cognitive	biases,	and	
openness	to	persuasion.	
•  Technology	literacy	refers	to	knowledge	about	a	specific	
technology.	
•  End user awareness: an early user research opportunity.
•  Data literacy: understanding the input into the ML application, and its output.
	
•  Cognitive	biases	are	misperceptions	related	to	heuristic	
processing	(faster,	less	in	depth	thinking).	
Common User Issues
•  Openness to persuasion relates to how willingly users accept output from an ML application.
•  Depends	on	how	people	process	information.	
•  Potential	for	user	research	relating	to	ML	application	transparency.	
•  Central	processing	is	very	thorough,	focused	on	the	content	of	
the	output.	
•  Peripheral	processing	is	not	thorough,	and	relies	on	things	like	
perceptions	of	who	or	what	is	delivering	output.	
•  Perception	of	the	ML	application	as	expert,	or	infallible,	can	cause	issues.	
•  Users may then rely on heuristic processing.
Common User Issues
•  Mental	model	stability	impacts	the	acceptance	of	ML	
applications.	
•  Stable	mental	models	mean	better	user	acceptance	of	ML	applications.	
•  Stability	of	mental	models	relies	on	a	mix	of	factors:	
•  Quality	of	ML	application	transparency.	
•  Fit	between	ML	application	performance	and	user	expectations.	
•  Technology	literacy.	
Design Issues in Machine
Learning
Emerging	Topics
Experience Segments within ML
•  Onboarding	and	familiarizing.	
•  What	is	the	first	time	user	experience?	
•  What	is	the	risk	of	cold	start	experiences	with	CA	products?	
	
•  Evolution.	
•  User	transition	from	novice	to	expert.	
•  Changing	of	interactions	as	the	ML	application	becomes	smarter.	
•  Error	recovery.	
	
•  Offboarding.
•  Transition	within	the	company,	as	when	an	ML	application	trained	
by	one	person	or	group	now	needs	to	interact	with	new	people.
Common Design Issues
•  Design	for	ML	applications	extends	beyond	interaction	and	
visual	design.	
•  Communication	design	becomes	key.	
•  Interaction design rolls up into one overarching principle:
Allow the user to focus on using the system output, not training the system.
•  Communication design focuses on explanation fidelity, a combination of explanation accuracy and depth.
Interaction and Visual Design
•  Help	users	discover	unknowns.	
•  Show	users	what	they	have,	and	help	them	explore	and	understand	it.	
	
•  Assist	with	decision-making	processes.	
•  Allow	users	to	reflect	on	the	ML	application	output	they	receive.	
•  Clarity	of	language	and	visual	depictions.	
	
•  Support	users	when	uncertainty	arises.	
•  Understand	limitations	and	failures	so	they	cease	to	be	hindrances.	
•  Uncover	through	feedback,	predict	through	research	and	internal	
understanding	of	ML	applications.
Communication Design
•  Explanation fidelity varies along two factors:
•  Soundness: the faithfulness of an explanation.
•  Completeness: the breadth and depth of an explanation.
•  Oversimplification	may	impact	both	soundness	and	
completeness,	leading	to	incorrect	or	unstable	mental	model	
creation.	
•  Overcomplication	impacts	the	ability	to	process	and	understand	
information,	impeding	mental	model	creation.	
•  Both	lead	to	mistrust	of	the	ML	application.	
End User Awareness
of Machine Learning
Research	Opportunities
End User Awareness
•  What	is	end	user	awareness?	
•  Knowledge of the basics of a technology, which may extend to include knowledge of its pervasiveness and expectations for its use.
	
•  Why	does	it	matter?	
•  Contributes	to	user	expectations.	
•  Impacts	the	quality	of	interactions	with	a	system.	
	
•  How	can	we	impact	it?	
•  Education	about	the	role	of	ML	in	a	system.	
•  Explanations	of	ML	algorithms.	
User Education
•  Helps users understand the strengths, opportunities, and weaknesses of a technology.
•  Reduces	the	impact	of	cognitive	biases.	
•  Contributes	to	the	creation	of	realistic	user	expectations.	
•  Helps users understand why they receive the outputs they do, as well as how to train the ML application.
•  Links	directly	to	transparency.	
Research Opportunity
•  Assess	familiarity	with	ML	applications.	
•  Gauge	how	customers	think	ML	functions.	
•  Determine	whether	customers	see	a	fit	for	ML	in	their	daily	
job	functions.	
•  Explore	where	customers	think	CA	could	offer	ML	to	
improve	our	fit	in	their	work	ecosystem.	
Research Outcome
•  Gauge	the	level	of	user	education	necessary	to	ensure	ML	
applications	from	CA	are	successful	within	the	marketplace.	
•  Correct	misperceptions	or	incorrect	assumptions.	
•  Understand	how	users	would	expect	to	interact	with	CA	ML	
applications,	and	work	toward	meeting	and	exceeding	
expectations.	
•  Gather	insight	into	the	expected	tone	of	interactions.	
•  Find	new	opportunities	to	insert	ML	into	our	products	and	
services,	or	to	create	new	ML-based	products	or	services.	
Appropriate Levels of
Transparency
Research	Opportunities
Transparency
•  Provides	insight	into	how	a	system	functions,	and	why.	
	
•  Manifests in three ways:
•  Instructional	text.	
•  Feedback	from	ML	application.	
•  Help	documentation.	
	
•  Appropriate	levels	of	transparency	offer	multiple	benefits:	
•  Contribute	to	mental	model	stability.	
•  Build	trust.	
Research Opportunity
•  Main Focus: Assessing and maintaining an appropriate level of ML application transparency.
	
•  Transparency	ties	directly	to	user	expectations	for	a	system.	
•  When	expectations	are	met,	transparency	improves	trust	in	the	ML	
applications	and	improves	the	user’s	perception	of	their	role	and	confidence	in	
interacting	with	a	system.	
•  When	expectations	are	not	met,	transparency	does	little	to	change	the	
situation.	
•  Be	mindful	of	complexity,	and	measure	perceived	complexity	regularly.	
•  Too	much	complexity	impedes	understanding	and	building	of	trust.	
•  Too	little	(oversimplification)	can	also	impede	understanding.	
Research Outcome
•  Develop	and	refine	a	set	of	standards	for	communication	
between	our	ML	applications	and	users.	
•  Improve	learnability	and	usability	of	our	ML	applications.	
•  ML	is	meant	to	take	the	burden	off	of	the	users,	and	with	
appropriate	levels	of	transparency	users	can	focus	more	on	
protecting	data	rather	than	finding	it	and	verifying	that	the	findings	
are	accurate.	
•  Open	up	opportunities	for	user	collaboration.	
•  ML	applications	benefit	from	better	collaboration	between	the	
experts,	strategists,	and	users	involved	with	them.
Measuring Trust in Machine
Learning Applications
Research	Opportunities
Trust
•  What	is	trust?	
•  Believing	an	entity	will	act	according	to	an	established	pattern,	
usually	with	the	assumption	that	the	acts	will	have	a	beneficial	
outcome.	
	
•  What	role	does	trust	play	in	human-computer	interaction?	
•  Persistent	product	use	(exclusive	reliance	on	product	for	tasks).	
•  Advocating	product	use	among	peers.	
	
•  Building	trust	in	human-computer	interactions:	
•  Consistent	screen	interactions.	
•  Well-defined	visual	cues.	
•  Clear	and	relevant	messaging.	
Trust for ML Applications
•  ML	applications	must	build	trust	on	two	fronts:	
•  In the model itself.
•  In the predictions it makes.
	
•  What does trusting an ML model mean?
•  Believing that an ML application will behave in a way that end users find acceptable.
	
•  Instilling	trust	in	the	ML	model:	
•  Educate	users	on	the	strengths	and	limitations	of	the	ML	model.	
•  Follow user-centered design principles to reduce uncertainty in behavioral expectations.
Trust for ML Applications [Continued]
•  What	does	trusting	ML	predictions	mean?	
•  Users	acting	on	predictions	in	a	recommended	way.	
	
•  Instilling	trust	in	ML	predictions:	
•  Transparent	communications	around	the	prediction	made.	
•  Create	opportunities	for	users	to	verify	prediction	accuracy,	and	offer	
feedback	when	possible.	
•  ML	models	precede	ML	predictions,	so	if	users	do	not	trust	the	
model,	trust	in	the	predictions	may	suffer.	
Research Opportunity
•  Main	Focus:	Assess	a	baseline	of	trust,	and	measure	
changes	over	time.	
	
•  Trust	in	ML	applications	builds	through:	
•  Reliable	functionality.	
•  Providing	clear	value	to	users.	
•  Enabling	users	to	interact	with	the	ML	application	around	outputs.	
•  Trust	changes	as	the	human-machine	relationship	evolves.	
•  How	much	trust	exists	at	the	start	of	ML	application	use,	and	what	is	the	
magnitude	and	direction	of	the	change	over	time?	
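One hypothetical way to baseline trust and track its magnitude and direction over time is a behavioral proxy such as a reliance rate: how often users accept the ML application's recommendations. The event log shape and field names below are invented for illustration:

```python
# Hypothetical trust proxy: the fraction of ML recommendations users accept,
# tracked per period so changes in trust can be monitored over time.
# The event records and field names are invented for illustration.

def reliance_rate(events):
    """Share of recommendations the user accepted (0.0 to 1.0)."""
    accepted = sum(1 for e in events if e["accepted"])
    return accepted / len(events) if events else 0.0

def trend(periods):
    """Reliance rate per period (e.g. per week), to watch the trajectory."""
    return {name: reliance_rate(events) for name, events in periods.items()}

weeks = {
    "week_1": [{"accepted": True}, {"accepted": False},
               {"accepted": False}, {"accepted": False}],
    "week_2": [{"accepted": True}, {"accepted": True},
               {"accepted": False}, {"accepted": True}],
}
print(trend(weeks))
```

A rising rate may indicate growing trust, but a rate near 1.0 could also signal the overreliance problem raised later, so such a metric would need interpretation alongside qualitative research.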
Research Outcomes
•  Monitor	changes	in	trust	as	our	ML	applications	change.	
•  Understand	how	user	behaviors	and	expectations	change	
with	trust	levels.	
•  Low	trust	makes	users	hesitant	to	rely	on	ML	application	output,	
but	high	trust	can	encourage	overreliance	on	the	ML	application.	
•  At	what	point	do	users	start	second	guessing	their	abilities	and	
deferring	to	the	ML	application?	
•  Gain	insight	into	how	trust	levels	interact	with	other	user	
characteristics,	such	as	the	biases	one	holds.	
Tying It All Together
Linking	Emerging	Issues	and	Research	Opportunities
Structure of Concepts
[Diagram linking the following concepts:]
•  Trust in the Machine Learning Model
•  Trust in Machine Learning Predictions
•  Transparency
•  Communication Design, Interaction Design, and Visual Design
•  User Awareness
•  User Expectations
•  User Education
Establishing a Research-Based
Strategy
•  Explore	what	users	know	about	the	technology	to	understand	
the	extent	of	their	experiences,	acceptance	of	it,	and	what	they	
expect	from	it.	
•  Design	informative	communications	from	this	knowledge	to	educate	users	
within	and	outside	of	the	product.		
•  Monitor	communication	quality	via	user	research.	
•  Like	other	design	areas,	communication	design	is	an	iterative	process,	so	
messaging	should	evolve	based	on	feedback.	
•  Follow established user-centered design principles.
•  Test	designs	before	creating	a	final	version.	
Establishing a Research-Based
Strategy
•  Research measures of trust in human-computer interaction, and evaluate whether it is best to use an existing measure or create a custom one.
•  Monitor	trust	over	time.	
•  Determine	if	trust	can	be	correlated	to	metrics	already	in	use	by	CA.	
•  Future	research	goal:	Plan	to	explore	how	trust	relates	to	user	
biases.	
•  Do	more	conservative	criteria	for	ML-based	judgments	lead	to	greater	
trust	levels	for	certain	types	of	data?	
•  Are	these	biases	a	user	characteristic,	or	an	industry	characteristic?	
	
Thank you!
For any questions or comments related to this presentation, please contact Leslie A. McFarlin, Senior UX Architect, EDP Value Stream
	
leslie.mcfarlin@ca.com
