AI for Information Architects
User Experience, Content Strategy and
Design
1
Marianne Sweeny
IA Conference 2019 Workshop
2
During our day
together we will
examine
What constitutes intelligence
Consciousness as part of the human
experience
Differences between Machine Learning and
Artificial Intelligence
Where did AI come from?
AI issues: safety, ethics and privacy
The intersections of AI, IA, user experience
and content strategy
How to use this knowledge to design for
humans and machines
Link to Workshop PDF
Tiny URL
Instructions for Living a Life
Pay attention
Be astonished
Tell about it
4
Mary Oliver
Foresight and Hindsight
All technological change is a trade-off
The advantages and disadvantages of new
technologies are never distributed evenly among the
population
Embedded in every technology there is a powerful
idea, sometimes two or three ideas
Technological change is not additive; it is ecological
Media tend to become mythic (Computationalism)
5
Why Is This Important?
Because they are developing a GUI for AI
that you will be able to use to build artificially
intelligent interactions
6
Why is This Important? (2)
Because, unlike the Manhattan Project,
there is no governance over who is
doing what.
7
Why is this Important? (3)
Because AI is not infallible yet the
consequences are forever
8
Why is This Important (4)
Because it is our job as information architects,
user experience professionals, content
strategists, and human factors professionals
9
Why is This Important (5)
Quantum Around the Corner
Google said it had already devised machine-learning
algorithms that work inside the quantum computer,
which is made by D-Wave Systems of Burnaby,
British Columbia. …The most effective methods for
using quantum computation, Google said, involved
combining the advanced machines with its clouds
of traditional computers.
INTELLIGENCE
11
Mai: Quality of Information
Information (Intelligence) is part of a spectrum
Data >> Information >> Knowledge
Information quality depends on individual
characteristics
– Contextual
– Situational
– Environmental
– Emotional
Machines use captured personal data
Reasoning
Deductive
Theory
Hypothesis
Observation
Confirmation
Inductive
Theory
Hypothesis
Pattern
Observation
Biological System Levels of Reasoning
Computable outcome (goal)
Steps/instructions to realize outcome (algorithm)
Implementation of program (realization of goal)
14
Artificial Meaning
“Context has always been part of expression because
expression become meaningless if context becomes
arbitrary…meaning is only ever meaning(ful) in
context.
…
Any gadget, even a big one like Singularity, gets
boring after awhile. But a deepening of meaning is
the most intense potential kind of adventure available
to us.”
15
Artificial Context
Context becomes what the system can measure
• Environmental features
• Interactions
• Ubiquitous computing
• Internet of things (IoT)
Non-methodical approach that brings in containment
(social through local) interactions
• Adaptive/reactive interaction in situ
• Context as perceived and used by actor
16
Information Cascade
A group of agents behaving rationally can fall prey to
infinite misinformation
• US Vaccination controversy
Information Cascade: when rational theory is based
on filter bubbles, hive mind
Cascade is caused by a misinterpretation of what
others think based on external observation of their
actions
More concerned with judgement fitting existing
consensus than the visible facts
17
Intelligence Explosion
Human-level AI will lead to super human AI
• Uncontrolled intelligence explosion without
human-level intentionality that is the result of
consciousness
• Program self-improves to state that exceeds ability
for outside control
Intelligence here measured by ability to attain
goal in most efficient manner
18
Information Explosion Components
Components
• Increased computational resources
• Duplicability
• Editability
• Goal Coordination
Accelerators
• Hardware capacity
• Better algorithms
• Massive datasets
• Psychology and neuroscience applications
• Accelerated science (quantum computing)
• Economic incentives (labor $ reduction)
19
Computationalism
World can be understood by computational processes
with humans as sub processes
1st Flavor: logical positivism
2nd Flavor: computer program with features related
to self representation and circular references similar to
that of a person
3rd Flavor: information structure that can be
perceived by some real human to also be a person
(Turing Test)
20
Solutionism
Silicon Valley assumption of a quantifiable self that is the
truer self
There’s an app for everything
False notion that Internet is a coherent and stable
influence in our lives
Grasping easy digital solutions often ignores complex
causes behind
Sometimes right algorithms can lead to wrong answers
21
Subjectivism
John Searle: AI not possible in any way because
consciousness is a physical property of the brain that
produces a subjective experience
Thomas Nagel: computers do not have subjectivity
(a private landscape of personal experience) and cannot
create one
Subjective Reality
• Intangible way to intelligence
• Philosophical concept focused on sense of self and
components (experience, perspective, belief, emotion,
consciousness)
• Composed of understanding and intentionality
Introspection is key
22
23
Consciousness
Gelernter: Tides of Mind
Humans have a knowledge of core concepts related to
the physical world = consciousness
Consciousness allows for building more robust mental
models that enable inference and prediction
Key question going unanswered: What is the human
mind without the human being?
The mind is consciousness (objects & events) plus
memory (occurred outside of the mind)
Thinking has intuitive meaning tied to consciousness
• Perception
• Recollection
• Idea
24
Gelernter on Consciousness
“Conscious experiences range from vivid color
sensations to experience of the faintest
background aromas; from hard-edged pains to
the elusive experience of thoughts on the tip of
one’s tongue. . . . All these have a distinct
experienced quality. . . . To put it another way,
we can say that a mental state is conscious if it
has a qualitative feel—an associated quality of
experience…”
25
Consciousness Spectrums
Up-spectrum
• Live in the present
• Outer consciousness: external world (bodies)
• Feeds memory up
• Thinking is focused, disciplined systematic
Down-spectrum
• Recall, revisit, reoccupy the past
• Dreams are re-experiencing memories in the form of
thought
• Recollections, ideas
26
Magical Thinking
Magical thinking = things only imagined become real
Cannot “learn” to be creative
Creativity is repurposing in a way that software cannot
because it involves:
• Ignoring limits
• Curiosity
Inspire but not force creative insight
“Where the confines of the waking world blend with
those of dreams.” Edgar Allan Poe
27
Dreams
“Dreams tell us truths that we know but are not brave
enough to acknowledge.”
Remembering out of control
Dreams = emotions + hallucinations
28
29
For too long, emotion has been cognitive researchers’
third rail. In research on humans, emotions were
deemed irrelevant, impossible to study or beneath
scientific notice…But nothing could be more essential to
understanding how people and animals behave.
Sy Montgomery, NYT Mar 3 2019
Emotion
Primary emotions
• Interest
• Pleasure
• Distress
Secondary emotions
• Anger
• Fear
• Disgust
• Happiness
• Sadness
• Surprise
Emotional Resonance: ability to feel/echo someone
else’s feeling
• Empathy
• Sympathy
• Essential to the human experience
30
Emotions (2)
Play key role in decision-making, creativity and
intelligence (EQ)
Sentic Modulation
• Facial Expression
• Voice: utterance, timing, pitch
Learning is the quintessential emotional experience
31
Affective (emotional) Computing
Assumption of small set of emotions to make
programming easier
Assumes binary nature of emotions (cannot be angry and
pleased)
Conversational signals
• Syntactic displays
• Speaker Displays
• Listener Response displays
Emotionally-based computers
• Same emotional ability as a dog, neither personal nor friendly
• Computer voice with intonation and natural expression
• Computer perceives emotional state and responds appropriately
• Maximized sentic communication between human and computer,
personal and “user-friendly”
32
MACHINE LEARNING
Capacity for system to improve
performance through experience, not
explicit instructions
33
Machine Learning: Programs that act like humans
34
Machine Learning
A programming approach to problem-solving: a composite
of algorithms, not a single one
Model of real world using mathematical structure with
decision-making rules
Derives rules from a data set
Objective function = desired outcome
Training set with adjusted parameters until goal achieved
Test set used to validate accuracy and effectiveness
Machine completes an objective without specific instructions
35
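The training loop above can be sketched in a few lines. This is a minimal, assumed illustration (the data and one-parameter linear model are invented, not from the deck): parameters are adjusted on a training set until the objective (squared error) shrinks, then a held-out test set validates accuracy.

```python
# Minimal sketch of the ML loop described above: adjust a parameter on a
# training set until the objective function is minimized, then check the
# fitted model against a held-out test set. Data and model are toy values.

train = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # (input, desired outcome)
test = [(4.0, 8.0), (5.0, 10.0)]

w = 0.0                        # model: predict y = w * x
lr = 0.05                      # learning rate

for _ in range(200):           # training: nudge w to reduce squared error
    for x, y in train:
        error = w * x - y
        w -= lr * error * x    # gradient step on (w*x - y)^2

test_error = sum((w * x - y) ** 2 for x, y in test) / len(test)
print(round(w, 3), round(test_error, 4))
```

The machine never receives the rule y = 2x as an instruction; it derives it from the data set, which is the distinction the slide draws.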
36
Unsupervised Learning
Unlabeled data
Clustering
Segmentation
Association
Algorithms
• Neural networks
• Independent component analysis
37
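A hedged toy example of clustering unlabeled data — k-means on invented 1-D points (no real algorithm from the deck is implied beyond the clustering idea itself):

```python
# Toy k-means clustering: unsupervised, so no labels are given — the
# algorithm finds the two groups hiding in the data on its own.

data = [1.0, 1.2, 0.8, 9.0, 9.3, 8.7]
centers = [0.0, 5.0]                     # arbitrary starting guesses

for _ in range(10):
    # assignment step: attach each point to its nearest center
    clusters = {0: [], 1: []}
    for x in data:
        nearest = min((0, 1), key=lambda c: abs(x - centers[c]))
        clusters[nearest].append(x)
    # update step: move each center to the mean of its cluster
    for c in (0, 1):
        if clusters[c]:
            centers[c] = sum(clusters[c]) / len(clusters[c])

print(sorted(round(c, 2) for c in centers))
```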
Supervised Learning
Uses document-class pairs to indicate proper classes for
given documents
Used human specialists for classification of “training set”
used to “teach” system
• Assigns classes to documents
• Reviews machine classification performance
6 Algorithm types
• Decision Trees
• Nearest neighbor
• Relevance Feedback
• Naïve Bayes
• Support Vector Machine
• Ensemble
38
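The document-class pairs idea can be sketched with the simplest of the six algorithm types listed, nearest neighbor. The tiny labeled "training set" below is invented for illustration:

```python
# Sketch of supervised classification: human-labeled document-class pairs
# "teach" a nearest-neighbour classifier, which assigns a class to a new
# document by picking the training document with the most shared words.

training_set = [
    ("cats purr and chase mice", "animals"),
    ("dogs bark at the mailman", "animals"),
    ("stocks fell on weak earnings", "finance"),
    ("the bank raised interest rates", "finance"),
]

def classify(document):
    words = set(document.split())
    # nearest neighbour = training document sharing the most words
    nearest = max(training_set,
                  key=lambda pair: len(words & set(pair[0].split())))
    return nearest[1]

print(classify("my cats chase the dogs"))
```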
Probabilistic Machine Learning
Probabilistic framework can represent and manipulate
uncertainty
Requires high capacity for flexibility to allow data to
“speak for itself”
Universal inference engine using Monte Carlo
39
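The Monte Carlo idea — answering a question by random sampling instead of closed-form math — is easiest to see in the classic pi estimate (a standard textbook illustration, not from the deck):

```python
# Monte Carlo in miniature: estimate pi by sampling random points in the
# unit square and counting how many land inside the quarter circle.

import random

random.seed(42)
n = 100_000
inside = sum(
    1 for _ in range(n)
    if random.random() ** 2 + random.random() ** 2 <= 1.0
)
pi_estimate = 4 * inside / n
print(round(pi_estimate, 2))
```

The same sample-and-count mechanism, scaled up, is what lets a probabilistic framework let the data "speak for itself".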
Reinforcement Learning
Program learns reward from human feedback then
optimizes reward function
• Rewards
o Sampled
o Evaluative
o Sequential
• Optimized reward function
• Reward must be explicit to avoid being “gamed”
Issues with tasks and goals
• Too complex
• Hard to specify
• Poorly defined
40
Transfer Learning (CS)
Reason relationally
Requires conceptual representation produced by
abstract structural knowledge (that is where we humans
come in)
Generalizations are transferred to environments that
share structures, e.g. mental models
41
ARTIFICIAL INTELLIGENCE
Solve problems that the mind can solve using
derived intelligence instead of a prearranged set of
rules
42
Generalized Intelligence
Spearman coefficient to measure intelligence,
correlation measure, if/then
G Factor: general level of intelligence possessed by an
individual
Quantified intelligence represented by a number
Used to rank people by IQ
43
44
Neuroscience of Algorithms
Deep Learning
• Distributed interactions
• Tuned by learning procedures
• Stochastic (random) parallel
information processing
Convolutional neural
networks
• Convergent and divergent
information flows
• Non linear transduction
• Maximum-based pooling of
inputs
45
Semantic Computing
Segment and match instructions
Associations to understand human behavior and
predict actions
Requires semantic matching
• Control layer (input)
• Semantic mapping layer (ontology)
• Device Layer
Requires user and behavior models (persona)
Semantic reasoning module confirms user intentions
46
Deep Learning
47
Deep Learning Components
Collection of trainable math units which collaborate to
compute complicated functions
HUGE raw data training set
Results get better with more data, new/better algorithms
based on observation and insight
Requirements
• Scalable
• Portable
• Reproducible
• Extensible
• Powerful processing hardware
48
AI Types
Logical Reasoning
Knowledge Representation
Planning and Navigation
Natural Language Processing
Perception
49
Embodied Agents
Internet of Things
Goal driven planning
Reactive agents
Search
50
What AI Best Suited To
Search
Learning Systems
Pattern Recognition
Planning
Induction
51
Search
Requires additional structure
Near to | Close to expansion
Solve for one, Solve for many
Personalization
52
Learning Systems
Use past behavior to predict future action using
human planned heuristic methods
A reinforced learning model that leads to a secondary
reinforcement model that is more autonomous
• Reinforcement is reward
• Extinction is unlearning
Grade on curve of computer’s acquired capability
53
Pattern Recognition
Ability for computer to act intelligently based on input
data with a lot of variability
• Decision Trees
• Nearest neighbor classification
• Neural Networks
Classification
Ideal replaced by practical
54
Planning & Problem-Solving
Large assembly of interrelated sub-problems
Given a start state and desired outcome state
Choose appropriate sub-problems for solving selected
problem
Success is most efficient set of actions to achieve
desired outcome
55
Induction
Learning by example
Derive the rule from set of observed instances
Classification key component
• A learning system has to be capable of evolving its own
class descriptions
• The task of constructing class definitions is called
induction or concept learning
56
AI Models
Base Models
• Learning
• Prediction: create actions to respond to learning
Sub modules
• Data analysis
• User Identification
• Behavior recognition
• Service construction and provisioning
57
Thought Vectors
Geoffrey Hinton – Google Research Fellow
Encode thoughts as sequences of numbers (vector)
Software learns to recognize patterns in these digital
representations
“If you take the vector for Paris and subtract
the vector for France and add Italy, you get
Rome,” he said. “It’s quite remarkable.”
Geoffrey Hinton
58
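Hinton's Paris − France + Italy = Rome arithmetic can be mimicked with tiny hand-built vectors. Note the caveat: a real system learns thousands of dimensions from text; the two "features" here are invented purely to show the mechanics.

```python
# Toy illustration of thought-vector arithmetic. The vectors are made up
# by hand (hypothetical features: is-capital, country-identity); a real
# model would learn them from millions of sentences.

vectors = {
    "Paris":  (1.0, 0.2),
    "France": (0.0, 0.2),
    "Italy":  (0.0, 0.9),
    "Rome":   (1.0, 0.9),
    "Berlin": (1.0, 0.5),
}

def add(a, b):  return tuple(x + y for x, y in zip(a, b))
def sub(a, b):  return tuple(x - y for x, y in zip(a, b))
def dist(a, b): return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Paris - France + Italy lands nearest to...?
query = add(sub(vectors["Paris"], vectors["France"]), vectors["Italy"])
answer = min(vectors, key=lambda w: dist(vectors[w], query))
print(answer)
```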
Google BERT
AI to carry on a decent conversation (Turing test)
Learn general vagaries of language and apply to
specific task
Analyzed millions of sentences
• Self-published literature
• Entire Wikipedia
Goal to predict next word and understand the
fundamental relationships between words
59
BERT: Bidirectional Encoder Representations from Transformers
Singularity
State when humans and machine merge
Concept introduced by Ray Kurzweil, Google fellow
6 Epochs of Evolution
• Physics & Chemistry
• Biology
• Brains
• Technology
• Merger of Technology & Human
Intelligence
• Universe Integration
60
Super Intelligence
Traits
• Capacity to learn
• Capacity to deal with uncertainty
• Ability to extract concepts from data and internal state
• Ability to leverage acquired concepts for combinatorial
representations for logical & Intuitive reasoning
• Capacity for unrestrained self-improvement (overwrite its own
code)
Types
• Speed (faster than human mind)
• Quality (faster and much smarter than human)
• Collective (aggregates performance of lesser intelligences)
External governance: None
61
And We Go Boldly Into the Whirling Knives*
AI might achieve a strategic advantage
Orthogonality Thesis: cannot assume that AI would be
able to share our biological values
• Culture
• Kindness
• Spiritual enlightenment
Instrumental Convergence Thesis: cannot assume that
Super AI would be satisfied with a supportive or
subservient role
Super AI could develop a final goal that is not
anthropomorphic
62
Existential Risks
Whirling Knives (2)
Perverse Instantiation: satisfy goal in a way that
violates programmed intent
Infrastructure profusion: over consumes resources to
achieve more reward
Mind crime: AI creates processes with moral states
(sentient simulations)
63
Practical Risks
64
Break
https://affinelayer.com/pixsrv/
ORIGINS OF AI
66
Turing (1948)
Imitation Game
Reward Signal increases probability of repetition events
leading to it
Suitable imperative: one that regulates the order in
which the rules of the logical system are applied
Program the learning
Objections to machine intelligence
• Theological
• Head in Sand
• Mathematical/philosophical
• Consciousness
• Lady Lovelace
67
Dartmouth Summer Research Project (1956)
Top AI scientists proposed
concentrated effort for AI
Focused on
• Computer use of language
• Neuron Nets
• Self improvement
• Abstraction
• Randomness and creativity
Follow up conference held in
2006
68
Human Factors professionals NOT INVITED
Perceptron (1957)
Developed by Frank Rosenblatt, psychologist at Cornell
Artificial neural networks (ANN) = many interconnected
processing units for parallel processing
Trained not programmed
1. Input addition
2. Comparison with threshold value
3. If threshold met or surpassed, output activation
Modifiable connections adjusted according to “learning”
algorithm
Perceptrons are not without limitations
69
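Rosenblatt's three steps map directly onto a few lines of code. A minimal sketch, trained here on the AND function (an assumed example task, not one from the deck):

```python
# Sketch of the perceptron's three steps: (1) sum weighted inputs,
# (2) compare with a threshold, (3) fire if the threshold is met.
# Connections are adjusted by a learning rule — trained, not programmed.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights = [0.0, 0.0]
bias = 0.0          # acts as a movable threshold
lr = 0.1

def output(x):
    total = sum(w * xi for w, xi in zip(weights, x)) + bias  # step 1
    return 1 if total > 0 else 0            # steps 2-3: threshold check

for _ in range(20):
    for x, target in examples:
        error = target - output(x)
        for i in range(2):                  # adjust modifiable connections
            weights[i] += lr * error * x[i]
        bias += lr * error

print([output(x) for x, _ in examples])     # AND truth table: [0, 0, 0, 1]
```

The final slide bullet still applies: a single perceptron like this one can only learn linearly separable functions, which is exactly the limitation Minsky and Papert later seized on.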
Pandemonium (1959)
Decision making entity involving 4 “demons”
Top Layer decides what information has been presented
to the system (discernment)
Feed-forward and feedback connections between layers
Requirements:
• Well defined problem
• Unbiased decision making
• Single tamper-proof labeling of behavior
70
Pandemonium Feed Forward Layers
Bottom layer: store data
3rd Layer: select, weigh, filter and pass along data
2nd Layer: “cognitive demons” decide which
information from 3rd layer to process
Decision layer: single decision demon on what
information is presented to the system for
processing
71
Marvin Minsky (1960)
Cognitive computer scientist
Co-founder MIT AI Laboratory
Symbolic AI
With Seymour Papert brought forth a 20 year “AI
winter” with criticism of early AI Artificial Neural
Network (ANN) approach
72
Minsky on What AI Best Suited To
Search
Learning Systems
Pattern Recognition
Planning
Induction
73
Minsky on Creativity
“There’s no such thing as “creativity” in the first place. I
don’t believe there’s any substantial difference
between ordinary thought and creative thought…I’ll
argue that this is really not a matter of what’s in the
mind of the artist – but what’s in the mind of the
critic…”
74
ELIZA (1963)
1st Instance of human mediated chatbot
Early computer/human conversation (NLP)
Heuristic programming
• Keyword identified (input)
• Sentence transformed according to rule associated with
identified keyword
• Choose appropriate transformation – if none available,
choose most likely/earlier transformation
• Generate responses
Keyword dictionary contains composition, assembly
and decomposition rules
75
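The keyword-and-transformation loop above can be sketched in miniature. The rules below are invented stand-ins, far cruder than Weizenbaum's actual scripts:

```python
# Minimal ELIZA-style loop: find a keyword in the input, apply the
# transformation rule associated with it, fall back to a stock response
# when no keyword matches. The rule dictionary is a toy, not the original.

rules = {
    "mother": "Tell me more about your family.",
    "always": "Can you think of a specific example?",
    "i feel": "Why do you feel {}?",
}

def respond(utterance):
    text = utterance.lower().rstrip(".!?")
    for keyword, template in rules.items():
        if keyword in text:
            # transformation rule: reuse the words after the keyword
            rest = text.split(keyword, 1)[1].strip()
            return template.format(rest) if "{}" in template else template
    return "Please go on."          # no keyword: choose a likely fallback

print(respond("I feel anxious."))
```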
Eliza and Global Context
Global Context is key to understanding
Sub-contexts emerge as conversation continues for
consequential richness
Individual participants bring their own belief
structure
ELIZA scripts (previous learnings) establish a global
context for future “understandings”
Broad context framework only
76
Dreyfus (1964)
Create computer systems with the intelligence and
reasoning of an human adult
Rationalist assumption of “ordered reality” is flawed
Knowable reality itself lacks rational structure
Inter-relatedness between humans and the world
Human world filled with experience structures, neither
subjective nor objective
AI discovers meaningful structures to apply to
meaningful behaviors, independent of fixed rules
77
Norvig & Russell (2004)
Systems that act like humans (Turing)
System that think rationally (logic solvers)
Systems that act rationally (perception, NLP,
Planning, Navigation)
Systems that think like humans (neural)
78
Sloman Minsky Model (2000)
79
St Thomas Symposium (2004)
Need synthesis of methodologies
Move from reactive to deliberative thinking
Include affective concepts like emotions
• Primary
• Secondary
• Tertiary
Incorporate “common sense” thinking
Source of human resourcefulness and robustness
80
Human Factors professionals NOT INVITED
Asilomar Conference 2009
81
Human Factors professionals NOT INVITED
Asilomar AI Research Needs: HCI
82
Human Factors professionals NOT INVITED
Stanford 100 Year AI Study (2016)
Long-term recurring study of AI influence on people
and society
Modeled after the Association for the Advancement of
Artificial Intelligence (AAAI) consortium 2008
4 intended audiences
• General public
• Industry
• Government
• AI Researchers
83
Human Factors professionals NOT INVITED
5 Dominant Tribes of AI (2017)
Symbolists: logical reasoning
Connectionists: structures inspired by human brain
Evolutionaries: methods inspired by Darwin theory of
evolution
Bayesians: probabilistic inferences (google and others)
Analogizers: extrapolate from previously seen
examples
84
Two Schools of AI
Symbol Processing
Neural nets
85
Symbolic AI
Intelligence = symbol manipulation
Fixed and formal rules
Assume: all intelligent processes are forms of
information processing
Computer processes symbolic representations (1s/0s)
according to formal rules (program)
Plato’s rationalism
GOFAI
86
Artificial Neural Networks
Connectionism
Neural networks made up of input layer, interstitial
layers and output layer
Good at: pattern recognition, categorization, and
behavior coordination
Knowledge comes from the connections not symbol
interpretation
Past experience used to form intelligence in current
state
Heideggerian AI
87
88
Learning Systems
Use past behavior to predict future action using
human planned heuristic methods
Reinforced learning model that leads to a secondary
reinforcement model that is more autonomous
• Reinforcement is reward
• Extinction is unlearning
Grade on curve of computer’s acquired capability
89
Pattern Recognition
Ability for computer to act intelligently based on input
data with a lot of variability
• Decision Trees
• Nearest neighbor classification
• Neural Networks
Classification
Ideal replaced by practical
Constant decision what problem to work on
• Value based
Pandemonium
90
Planning & Problem-Solving
Large assembly of interrelated sub-problems
Choose appropriate sub-problems for solving selected
problem
Logic Theory: prove theorem using heuristics:
• Similarity test
• Simplicity test
• Strong non-provability test
Heuristic programming
91
Heuristic Programming
Early training for AI
Self-learning
• Substitutes machine learning for logic algorithms
• Ranks alternatives in a branching decision tree
Achieves an approximate of the exact solution
ELIZA
92
Optimal Stopping
Computer science problem
Stop too early and you miss a good candidate
Stop too late and you pass over good candidates waiting
for a perfection that doesn’t exist
Threshold rule: establish an optimal stopping point and
take the first candidate above that percentile
Establish a “period of no decision” – predetermined
amount of time for looking then a leap phase of
commit
93
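The "period of no decision" strategy is the classic secretary problem, and its famous ~37% look phase can be checked by simulation (candidate counts and scores below are arbitrary choices for the sketch):

```python
# Simulation of the threshold rule above (the "37% rule"): look at the
# first ~37% of candidates without committing, then take the first one
# better than everything seen in the look phase.

import random

random.seed(1)

def secretary(candidates, look_fraction=0.37):
    cutoff = int(len(candidates) * look_fraction)
    best_seen = max(candidates[:cutoff], default=float("-inf"))
    for score in candidates[cutoff:]:      # leap phase: commit
        if score > best_seen:
            return score
    return candidates[-1]                  # forced to take the last one

trials = 10_000
wins = 0
for _ in range(trials):
    candidates = random.sample(range(100), 20)   # 20 distinct scores
    if secretary(candidates) == max(candidates):
        wins += 1

print(wins / trials)     # success rate settles near the famous ~37%
```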
Explore | Exploit Tradeoff
Explore: gathering information
Exploit using the information gathered to produce a
good result
Value of explore declines over time
Value of exploit increases over time
Exploration has inherent value of finding the best
candidate
“To live in a restless world requires a certain restlessness in
oneself…you must never fully cease exploring.” p.54
94
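The declining value of exploration can be made concrete with an epsilon-greedy bandit — a standard illustration, with invented payout probabilities and an optimistic-start trick added so every arm gets tried early:

```python
# Explore/exploit sketch as an epsilon-greedy bandit. Arm payouts are
# made up. Optimistic initial estimates force early exploration; as the
# estimates settle, pulls shift to exploiting the best-known arm.

import random

random.seed(0)
true_payout = [0.3, 0.7, 0.5]      # hidden reward probabilities
counts = [0, 0, 0]
values = [1.0, 1.0, 1.0]           # optimistic start: every arm looks great

for t in range(2000):
    if random.random() < 0.1:      # explore: gather information
        arm = random.randrange(3)
    else:                          # exploit: use the information gathered
        arm = values.index(max(values))
    reward = 1 if random.random() < true_payout[arm] else 0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]

best = values.index(max(values))
print(best, [round(v, 2) for v in values])
```

Early on, exploration dominates because the estimates are worthless; later, each exploratory pull costs more than it teaches — the trade-off the slide describes.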
The Principle of Beneficence
Philosophic concept tied to ethics
Condition of “do no harm” in medicine
Possible harm to few in order to benefit many
Philippa Foot and moral dilemmas (the trolley
problem)
Who decides the winners and losers of AI?
LUNCH BREAK
96
AI ISSUES
97
The most exciting phrase to hear in
science, the one that heralds new
discoveries, is not ‘Eureka’ but “that’s
funny…” Isaac Asimov
Learning is one Thing…Thinking
Another
“In designing software and microprocessors, I
have never had the feeling that I was designing
an intelligent machine. The software and
hardware is so fragile and the capabilities of the
machine to “think” so clearly absent that even
as a possibility, this has always seemed very far
in the future…My personal experience suggests
we tend to overestimate our design abilities.”
98
Sometimes They Learn the Wrong
Things
99
Sometimes They Get Things Wrong
100
Sometimes They Do the Wrong Thing
101
Sometimes They Build the Wrong Things?
Built as a proof of concept
for AI gone wrong with
biased data
MIT AI Lab
Dataset was a sub-reddit
dedicated to document the
“disturbing reality of death.”
102
103
104
Privacy
User Metrics Training Data
Frequency of access
Click-through (selection from results set)
Time on site
Pages per session
Bounce Rate
Conversion (fulfilled information need)
Profile data
105
Implicit Collection
Implicit (max precision 58%)
• Software agents
• Logins
• Enhanced proxy servers
• Cookies
• Session IDs
Gathered without user awareness from behavior
• Query context inferred
• Profile inferred
• Less accurate
• Requires a lot of data
106
Explicit Collection
Explicit (max precision 63%)
• HTML forms
• Explicit user feedback interaction (early Google
personalization with More Like This)
Provided by user with knowledge
More accurate as user shares more about query intent
and interests
107
What Constitutes a User Profile
Information types
• Demographic
• Interests (short & long-term)
• Preferences
Profiles are dynamic and iterate over time
Represented as
• Set of weighted keyword
• Weighted concepts
• Semantic network
108
Google on Privacy (2007)
“There was a small trade off on privacy but they’re
going to get dramatically better search results. That
was something that made sense to us over time.”
Marissa Mayer
VP User Experience
Google
109
Google on Privacy Now (2019)
https://twitter.com/jason_kint/status/1105484010183188480
110
What Google Collects
Implicit
Use information
Device information
Log information
Unique application information
Local storage
Cookie data
Explicit
Location information
Profile information
111
Methods
Client Side: gather data from user profile
Server Side: gather data from system usage (logs)
Group-ization: Recommender system with vested
interest
Member data used to rank the individual results
• Relevance weight enhanced as more members of the
group “like” a resource
• Sum of personalization scores of each group member
113
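The group-ranking rule — a resource's weight as the sum of members' personalization scores — is a one-function sketch (the member names and scores below are invented):

```python
# Sketch of group-ization ranking: a resource's relevance weight is the
# sum of personalization scores across group members who engaged with
# it, so resources "liked" by more members rise. Toy data throughout.

group_scores = {
    "alice": {"doc1": 0.9, "doc2": 0.2},
    "bob":   {"doc1": 0.4, "doc3": 0.8},
    "carol": {"doc2": 0.7, "doc1": 0.3},
}

def group_rank(scores):
    totals = {}
    for member in scores.values():
        for doc, s in member.items():
            totals[doc] = totals.get(doc, 0.0) + s
    # rank resources by combined weight, highest first
    return sorted(totals, key=totals.get, reverse=True)

print(group_rank(group_scores))
```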
Google Personalization
Tracks
• What is selected
• Level of interaction
• What is not done
(bounce rate)
Signals
• Location
• Search history
Less specific queries
benefit the most as they
require the additional
context provided by
personalization
114
Facebook Security Lapses
2009: User information made public without
permission
2014: manipulated news feed to see if the
system could assess mood
2018: Revealed Cambridge Analytica sold FB
data
115
Prediction Drawbacks
AI algorithms rely on past behavior to predict future
behavior
Programming and test set must define “normal” for the
system to detect “abnormal”
Cannot predict what has not already occurred
• Taleb’s black swans
• Flash Crash of 2010
Past behavior prediction ignores present environment and
emotional influences
Must define normal to program for abnormal detection
116
Privacy Paradox
Privacy risk is weighed against value of object,
interaction, end result
• Research assumes user calculates an internalized value
• Basis for choice to reveal personal identification
information (PII)
Value is determined by the smoothness of the
interaction (Groupon, Amazon Local)
• Value proposition overrides security/privacy concerns
Higher level of user control over PII reduces the
perception of risk
117
Tim Cook on Privacy
Called on US to pass comprehensive data security act
along the lines of GDPR
4 guiding Principles
• Right to have personal data minimized
• Right to know what is being collected and why
• Right to data security
• Right to access
118
If you’re not paying for it, YOU are the product
119
AI Ethics
120
AI Ethics = TL2
121
Popular search engine returns 215,000,000
results for AI ethics
“Algorithms are opinions
embedded in code.”
Cathy O’Neil
Weapons of Math Destruction
(2016)
122
Algorithmic Bias
Technology inherits ideas and values of the group that
develops it
Algorithm development rests on emotional capitalism
• Emotional capitalism: feeling can be managed rationally and
governed by logic
• Emotional socialism: suffering is unavoidable and should be
tolerated
Accept decisions from an automated system as agnostic
3 types
• Implicit (absorbed automatically)
• Accidental (introduced by ignorance)
• Deliberate
123
Governance Issues
Explanation (transparency)
• Core components
• Local Explanation: explain for specific decision, not system as
a whole
• Counterfactual Faithfulness: expect the explanation to be
causal and can be provided without providing contents of
the system
• Provide in situations where a person would be required to
do so
Regulation
• Regulators don’t understand what they are regulating
• Risk of stifling innovation
Applications (consistency)
• Impact beyond decision-maker
• Know if AI behaving erroneously
125
Bias Remedies
Design thinking
HCI heuristics as well as performance benchmarks
HCI professionals testing prior to live site deployment
Diversity/bias audits
Accountability
127
128
Explainable AI (xAI)
xAI = field of research addressing interpretability and
explainability in ML and AI
• Compliance with relevant legislation
• Broader range of debugging
• Those working on the system learn from it
• Enhanced trust in system decision-making (including scenarios where it
can break down)
AI is a black box for those outside of computer science
AI development must shift from ad-hoc models toward
decision-making that is more trustworthy
• Contrastive (present alternative data points)
• Counter-factual (changes in features that would lead to a different
outcome)
129
UK Parliament AI CoC
130
Application of a cross-sector code for the development
of AI applications
• Artificial intelligence should be developed for the common
good and benefit of humanity.
• Artificial intelligence should operate on principles of
intelligibility and fairness.
• Artificial intelligence should not be used to diminish the data
rights or privacy of individuals, families or communities.
• All citizens should have the right to be educated to enable
them to flourish mentally, emotionally and economically
alongside artificial intelligence.
• The autonomous power to hurt, destroy or deceive human
beings should never be vested in artificial intelligence.
AI NOW Initiative (2018)
Kate Crawford (Microsoft) and
Meredith Whittaker (Google)
Founded to deal with issues of AI
diversity and inclusion
Conduct empirical studies
focused on
• Bias and inclusions
• Labor and automation
• Infrastructure and Safety
• Basic rights and liberties
131
AI Safety
132
Not Good AI
Used for negative
outcomes
• Autonomous weapons
• Biased facial recognition
Used for malicious
purposes
• Fake news
• Denial-of-service attacks
133
Generative Adversarial Networks
Dueling neural networks
• 1 to generate an image from a data set
• 1 to determine if the image came from the data set
AI cop and counterfeiter game of cat and mouse
134
AI Risks
Mis-specified Objectives
Negative Side Effects that extend to wider application
Hacking: rewards, devices
Bad extrapolation of the real world
Poor training data
Privacy
Fairness
Abuse
Transparency
135
AI Risk Mitigations
Define impact regulator
• Future state
• Substitutes lower impact null actions
Train impact regulator
• Over many tasks
• Separate training parameters for task side effects
Penalize influence
• Use information-theoretic measures to capture agent’s
potential for information
• Penalize empowerment
Provide scalable oversight with multi-agent approach
136
AI Risk Mitigations 2
Use Objective functions to capture designer informal
intent
• No partially observed goals
• Concrete, not abstract rewards
• Deep correlation between tasks and functions
Feedback loops
• Model look ahead
• Reward capping
• Counterexample resistance – combination of rewards
137
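Reward capping from the feedback-loop list can be as simple as bounding the per-step reward, so a hacked or mis-specified reward channel cannot dominate learning (the cap value is arbitrary):

```python
def capped_reward(raw_reward, cap=10.0):
    # Bound the per-step reward so no single (possibly hacked)
    # reward signal can dominate the learned behavior.
    return max(-cap, min(cap, raw_reward))
```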
AI Risk Mitigations 3
Safe exploration
• Risk sensitive performance criteria
• Use demonstration
• Simulated exploration
Well defined models
• Train on multiple distributions
• Program for out-of-distribution situations
138
Break
https://affinelayer.com/pixsrv/
DESIGN FOR MACHINES & HUMANS
The real meets the artificial
140
Human-centered design has expanded from the
design of objects (industrial design) to the design of
experiences (encompassing interaction design, visual
design and the design of spaces). The next step will be
the design of system behavior; the design of
algorithms that determine the behavior of automated
intelligent systems
Harry West
CEO, Frog Design
141
Machine Users Are Different
Logic: exacting, context independent, conditional logic
Development: uses explicit rules to define possible
behaviors
• Heuristics
• Intuition derived from huge data sets
142
143
Information
Architecture
Information Architecture and AI
Problem definition and structure
Connections
Proto-typicality (mental models)
Visual complexity (rely on text more than images)
144
Form IA and AI Strategies
Customer Empathy Framework
• Define the problem
• Formulate the solution
• Map the environment (customer journey)
Tools
• Personas (use cases)
• Problem statements
• Environment description (include systems and
processes)
• Benchmark success (quantitative, qualitative)
145
Create Meaningful Structures
Site Structure
• Machine readable text
• Related content model
• Schema markup
Internal linking to reinforce context relationships
and discovery
146
Structured Data
Name the
components on the
page for the
machine user
147
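As a sketch of what "naming components for the machine user" looks like in practice, the snippet below builds schema.org Article markup as JSON-LD (all field values are placeholders); in a page, the resulting string would sit inside a `<script type="application/ld+json">` element.

```python
import json

# Placeholder schema.org Article markup for the machine user.

def article_jsonld(headline, author, date_published):
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        "datePublished": date_published,
    })
```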
Navigation for AI
148
Name objects for cross-system compatibility
Move toward the center of the project
Create deliverables that bridge the logical world of
IAs and the physical world of implementers
Converse with other disciplines in language they
understand and employ
149
User
Experience
How Do UX Professionals Define UX?
A consequence of a user’s internal state
(predispositions, expectations, needs, motivations, mood, etc.),
the characteristics of the designed system
(complexity, purpose, usability, functionality, etc.) and the
context (or the environment) within which the
interaction occurs (organization/social setting,
meaningfulness of activity, voluntariness of use, etc.)
150
Key UX Data Points
Conversions
Unique Visitors
Bounce rate
Social Actions
Number of pages per visit
Average time on page (exclude bounces)
Exit rate
151
Panda Algorithm Negative Signals
High % of deep content
Low amount of original content
High amount of ads or gratuitous images
Large quantity of boiler-plate text
Over-optimized (too many links)
High bounce rate
Low visit duration
Low CTR from Google search results
No/Low quality in-links
No/Low social mentions
152
Google Optimal Page Layout
153
Observed Self better than Quantified Self
154
Use a Different Pattern Library
Visitor search patterns: Use online tools to
uncover customer intent
Visitor behavior patterns: website analytics
Visitor conversion patterns: content to address
all stages of conversion funnel
Tools
• Search suggest scrapers
• SEO|Content Marketing software
• Webmaster and website analytics accounts
155
156
Content
Content & Context Algorithms
Hypertext Induced Topic Search (HITS)
Hilltop
Topic Sensitive PageRank
Orion (2008)
Hummingbird
157
HITS (1997)
Hypertext Induced Topic Search
HITS is a related algorithm for Authority determination
HITS = PageRank + Topic Distillation
Unlike PageRank, HITS is query dependent
Somewhat recursive
158
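The mutual-reinforcement update behind HITS fits in a few lines (the link graph format is a hypothetical `{page: [outlinks]}` dict): pages score as authorities in proportion to the hubs that link to them, and as hubs in proportion to the authorities they link to.

```python
def hits(links, iterations=50):
    """Iterative HITS over a link graph {page: [pages it links to]};
    returns (authority, hub) score dicts."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    auth = {p: 1.0 for p in pages}
    hub = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # Authority: sum of hub scores of pages linking to you.
        auth = {p: sum(hub[q] for q in links if p in links[q]) for p in pages}
        norm = sum(v * v for v in auth.values()) ** 0.5 or 1.0
        auth = {p: v / norm for p, v in auth.items()}
        # Hub: sum of authority scores of pages you link to.
        hub = {p: sum(auth[t] for t in links.get(p, [])) for p in pages}
        norm = sum(v * v for v in hub.values()) ** 0.5 or 1.0
        hub = {p: v / norm for p, v in hub.items()}
    return auth, hub
```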
Authorities & Hubs
159
Hilltop Algorithm (2001)
Topic segmentation algorithm = query dependent
Introduces concept of non-affiliated “expert
documents” to HITS
Quality of links more important than quantity of links
Segmentation of corpus into broad topics
Selection of authority sources within these topic areas
160
Latent Semantic Indexing
Using a ~<search term>
will initiate Google’s LSI
and produce a list of
results that contains
your original term as
well as documents that
the search engine
determines are relevant
to your query.
161
Topic-Sensitive PageRank (2002)
Context sensitive relevance ranking based on a set of
“vectors” and not just incoming links
Pre-query calculation of factors based on subset of
corpus
Context of term use in document
Context of term use in history of queries
Context of term use by user submitting query
Based on 16 top-level Open Directory categories
162
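The "vectors" above are teleport vectors: a minimal sketch of topic-sensitive PageRank (graph format assumed, dangling pages ignored) concentrates the random-surfer restart on pages in the chosen topic instead of spreading it uniformly.

```python
def topic_pagerank(links, topic_pages, damping=0.85, iterations=50):
    """Sketch of topic-sensitive PageRank over {page: [outlinks]}:
    the teleport mass restarts only on pages in topic_pages."""
    pages = set(links) | {p for out in links.values() for p in out}
    # Topic-biased teleport vector instead of the uniform 1/N vector.
    v = {p: (1.0 / len(topic_pages) if p in topic_pages else 0.0)
         for p in pages}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        rank = {p: (1 - damping) * v[p]
                   + damping * sum(rank[q] / len(links[q])
                                   for q in links if links[q] and p in links[q])
                for p in pages}
    return rank
```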
Orion Algorithm (2008)
Purchased by Google in
April 2006 for A LOT
of money
Results include expanded
text extracts from the
websites
Integrates results from
related concepts into
query results
163
Hummingbird: Entity detection
Comparison of search query to general
population search behavior around query
terms
Revises query and submits both to search index
• Confidence score
• Relationship threshold
• Adjacent context
• Floating context
• Results a consolidation of both queries
164
AI Content Components
Traditional IR (tf*idf)
Link analysis for Authority
Location on page
Query type
Content Qualities
• Uniqueness
• Authoritative
• Freshness
• Well Written
165
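The traditional tf*idf component can be sketched directly (documents as token lists; the smoothing below is one common variant, not the only one): a term weighs more when it is frequent in the document but rare across the corpus.

```python
import math

def tf_idf(term, doc, corpus):
    # Term frequency within the document...
    tf = doc.count(term) / len(doc)
    # ...times inverse document frequency across the corpus (smoothed).
    df = sum(1 for d in corpus if term in d)
    idf = math.log(len(corpus) / (1 + df)) + 1
    return tf * idf
```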
Transform Keywords Into Intelligence
Keywords are user queries
Queries represent user information needs and
satisfaction threshold
Keywords become intelligence
• Competitive: who is doing better
• Visibility: how do the search engines see my content
• Customer: how do targeted customers look for my
products and services
Tools
• Search suggest scrapers
• Google Trends
• SEO Software (BrightEdge, SEMrush)
166
Establish Context
Context becomes what the system can measure
• Environmental features
• Interactions
• Ubiquitous computing
• Internet of things (IoT)
• Digital Assistants
Non-methodical approach that brings in
containment (social through local) interactions
• Adaptive/reactive interaction in situ
• Context as perceived and used by actor
167
Create and Curate Content
Entities Rule
Newspaper model
Opening paragraphs most important for
subject determination
Relational content model
168
User interest mapped to customer
journey and content type
169
Keywords
Customer Journey Phase     Avg Category Rank   Monthly Searches
Consideration                     32               73,200
  Used                            30               47,690
  Sale                            35               21,120
  Product Information             29                2,990
  Competitor                      68                1,150
  Quick Answer                    32                  110
  Rental                           6                   50
  Parts                           56                   40
  Reviews                        n/a                   40
  Competitor Rental               55                   10
Purchase                          39                2,980
  Sale                            42                2,690
  Used                            38                  260
  Product Information             14                   30
Awareness                         12                2,820
  Used                            17                1,640
  Competitor                     n/a                  880
  Product Information              9                  250
  Quick Answer                   n/a                   30
  Sale                             7                   20
Post Purchase                     30                   30
  Product Information             30                   20
  Used                           n/a                   10
Grand Total                       32               79,030
Map Semantic Connections
Semantic technology requires everything to be
associated to understand user activity
• Control layer
• Mapping (semantic) layer
• Device layer
Semantic analysis model
• Semantic layering
• Semantic mapping (Boiko IAS 2018)
• Semantic machine heterogeneity
Association between user behavior patterns (customer
journey map)
170
Give Users What They Want
171
# of pages in directory
# of page views for
each directory
Exercise: Develop AI Application for Crisis Help
Line
1. Choose a model
• Base models
– Learning
– Prediction: create actions to respond to learning
• Sub-models
– Data analysis
– User identification
– Behavior recognition
– Service construction, service provisioning
2. Define objective function
3. Train system by adjusting parameters (rewards) to
maximize objective function
4. Test to evaluate accuracy and effectiveness of the
model
172
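Steps 2 through 4 of the exercise can be sketched end to end with a toy one-parameter model (the data, threshold, and update rule are all invented): define the objective (classification accuracy), adjust the parameter against training examples, then test.

```python
def train(examples, steps=200, lr=0.1):
    # Perceptron-style adjustment of a single parameter w so that
    # w * x > 0.5 predicts the positive class (objective: accuracy).
    w = 0.0
    for _ in range(steps):
        for x, label in examples:
            pred = 1 if w * x > 0.5 else 0
            w += lr * (label - pred) * x   # adjust toward the objective
    return w

def accuracy(w, examples):
    # Step 4: evaluate the trained model's effectiveness.
    return sum((1 if w * x > 0.5 else 0) == y
               for x, y in examples) / len(examples)
```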
173
Design
Data Driven Design
Without a person at (or near) the helm who
thoroughly understands the principles and elements
of Design, a company eventually runs out of reasons
for design decisions...
When a company is filled with engineers, it turns to
engineering to solve problems. Reduce each
decision to a simple logic problem. Remove all
subjectivity and just look at the data. Data in your
favor?...
And that data eventually becomes a crutch for every
decision, paralyzing the company and preventing it
from making any daring design decisions.
174
Generative Design
AKA Mutative Design, Parametric Design
Designer defines rules for algorithm
Algorithm generates variations using the predefined
rules
Algorithm filters the results based on design quality and
requirements
Designer chooses the best variants and polishes as
needed
System runs A|B tests for variant(s)
Test results are used to choose most effective design
175
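The generate-filter-select loop above might look like this in miniature (the rules, constraints, and quality score are all invented placeholders): the designer's rules define the space, the algorithm generates variants and filters them against hard requirements, and the survivors are ranked for the designer to polish.

```python
import random

def generative_design(n_variants=100, seed=1):
    rng = random.Random(seed)
    # Designer-defined rules: the ranges variants may draw from.
    variants = [{"columns": rng.choice([2, 3, 4]),
                 "font_px": rng.randint(10, 24),
                 "contrast": rng.uniform(1.0, 10.0)}
                for _ in range(n_variants)]
    # Filter on hard constraints (minimum legibility, here invented).
    survivors = [v for v in variants
                 if v["font_px"] >= 14 and v["contrast"] >= 4.5]
    # Rank by a quality score; the designer polishes the top variants
    # and sends them on to A|B testing.
    return sorted(survivors, key=lambda v: v["contrast"], reverse=True)
```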
Privacy By Design
Opt in / Opt out
User control over sharing – notifications, time limits
Command user attention for privacy decisions
176
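Opt-in by default can be expressed as a settings merge in which everything left unspecified stays private (the setting names are hypothetical):

```python
DEFAULT_PRIVACY = {
    "share_usage_data": False,   # opt in, never opt out
    "share_location": False,
    "notify_on_access": True,    # command attention when data is shared
    "sharing_expiry_days": 30,   # time-limited consent
}

def effective_settings(user_choices):
    # Explicit user choices override defaults; anything the user has
    # not opted into stays private.
    return {**DEFAULT_PRIVACY, **user_choices}
```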
Not Privacy by Design
177
Visual Complexity & Prototypicality
178
Google Page Layout
179
AI Design According to Computer Science
Components
• Variables
• Domains (environment)
• Constraints (limits)
Goal of AI Design = satisfy constraints
Admissible heuristic: if it costs too much to reach
solution state then revise or reject
180
Design Thinking for Data Science 1
Reach out to the development staff
Embrace design thinking
Transform “my idea” into “our idea” with early stage
collaboration
181
Design Thinking for Data Science 2
Customer Empathy Stage
• Understand the problem to be solved
• Define the solution
• Map the environment (customer journey)
• Define the characteristics of a good solution (heuristics)
Outputs
• Personas (use cases)
• Problem statements
• Environment description (include systems and
processes)
• Benchmark success (quantitative, qualitative)
182
Design Thinking for Data Science 3
Go Broad, Go Deep Stage
Brainstorm solution ideas across silos
Diversify contributors
Post all artifacts and review as a group
Organize ideas into themes
Include “leap of faith” assumptions
Take the best and formulate a solution hypothesis
183
Design Thinking for Data Science 4
Rapid experimentation with Customers
Paper prototyping, sketches, storyboard
Build stable testing methodology into plan
Start small (project | testing) to achieve collective wins
184
Smart Tools & Platforms
Semantic image segmentation
Font recognition
Intelligent audience segmentation
185
Data Protection by Design
Design strategy for accountability
• Enforceable policy
• Demonstrated Compliance
Detect and address bias
Components
• High-level design goals
• Privacy enhancing technology (user controls)
• Sanitation of data
Principle of Accountability
Discrimination Aware Data Mining (DADM)
186
Privacy Design Recommendations
Different UI for different tasks
Opt in, not opt out
Build in alerts if system deviates from the norm
Clear explanation of system decision making methods
and reasoning workflow (xAI)
Government enforced standards of data collection
and control
187
188
Algorithm-Based Design 1
Designer as art director, algorithm as apprentice
Determine “well designed” site for learning model
Create mood board for algorithm to deconstruct
Use algorithm for simple tasks
• Color match up
• Image assembly (movie poster app)
• Styling videos
• Extract usage patterns from data sets
189
Algorithm-Based Design 2
Designer and Developer define the logic to consider
content, context and user data
AEM (behavior targeted UI)
BrightEdge DataMind
Vox Media Homepage Generator
190
Machine Learning Design Process
Define learning problem
• Inputs
• Outputs
• Types of training data needed
Generate good data
• Completeness
• Accurate
• Consistent
• Timely
Sketch out user and data flow (decision trees)
Test assumptions against prototype
Start with simple mechanism and move to complex
191
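The "generate good data" checklist can be enforced mechanically. A sketch (field names and limits are hypothetical) that flags training records failing completeness, accuracy, consistency, or timeliness:

```python
REQUIRED_FIELDS = {"user_id", "timestamp", "label"}

def validate_record(record, max_age_days=365, now=1_700_000_000):
    problems = []
    if not REQUIRED_FIELDS <= record.keys():
        problems.append("incomplete")            # completeness
    if record.get("label") not in (0, 1):
        problems.append("bad label")             # accuracy
    if record.get("timestamp", 0) > now:
        problems.append("future timestamp")      # consistency
    elif now - record.get("timestamp", 0) > max_age_days * 86400:
        problems.append("stale")                 # timeliness
    return problems
```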
Thank You
Marianne Sweeny
Principal
Daedalus Information Systems
sweeny48@uw.edu
@msweeny
192
Embrace, engage, define, direct
APPENDIX
A friendly drop to help you after…
193
Suggested Reading
• Algorithms to Live By; Brian Christian, Tom Griffiths
• Superintelligence: Paths, Dangers, Strategies; Nick
Bostrom
• The Tides of Mind: Uncovering the Spectrum of
Consciousness; David Gelernter
• The Undoing Project; Michael Lewis
194
Twitter Resources
195
Rob Wortham @RobWortham
Frank Pasquale @FrankPasquale
Luke Robert Mason @LukeRobertMason
Garry Kasparov @Kasparov63
John C. Havens @johnchavens
Joanna Bryson @j2breve, @j2blather
Carol Smith @carologic
Sentiment/Emotion/AI @SentimentSymp
Elizabeth Churchill @xeeliz
Adam Coates @adampaulcoates
Richard Socher @RichardSocher
Yann LeCun @ylecun
Kirk Borne @KirkDBorne
Right Relevance @rightrelevance
Machine Learning @ML_toparticles
Andrew Ng @AndrewYNg
Atsushi HASEGAWA @ahaseg
Eric Horvitz @erichorvitz
Sander Dieleman @sedielem
AI Now Institute @AINowInstitute
Oren Etzioni @etzioni
Jeff Dalton @JeffD
Peter Trainor @petetrainor
Rob McCargow @robmccargow
Kevin Slavin @slavin_fpo
Giles Colborne @gilescolborne
Lev Manovich @manovich
Jana Eggers @jeggers
Dawn Anderson @dawnieando
Colin Eagan @ColinEags
Data Science Central @DataScienceCtrl
Brenda Laurel @blaurel
Ian Soboroff @ian_soboroff
Phillip Hunter @designoutloud
Paul Dourish @dourish
Jason Alderman @justsomeguy
Dorian Taylor @doriantaylor
Tim Caynes @timcaynes
196

More Related Content

More from Marianne Sweeny

Birds Bears and Bs:Optimal SEO for Today's Search Engines
Birds Bears and Bs:Optimal SEO for Today's Search EnginesBirds Bears and Bs:Optimal SEO for Today's Search Engines
Birds Bears and Bs:Optimal SEO for Today's Search EnginesMarianne Sweeny
 
Search Solutions 2011: Successful Enterprise Search By Design
Search Solutions 2011: Successful Enterprise Search By DesignSearch Solutions 2011: Successful Enterprise Search By Design
Search Solutions 2011: Successful Enterprise Search By DesignMarianne Sweeny
 
Bearish SEO: Defining the User Experience for Google’s Panda Search Landscape
Bearish SEO: Defining the User Experience for Google’s Panda Search LandscapeBearish SEO: Defining the User Experience for Google’s Panda Search Landscape
Bearish SEO: Defining the User Experience for Google’s Panda Search LandscapeMarianne Sweeny
 
Configuring share point 2010 just do it
Configuring share point 2010   just do itConfiguring share point 2010   just do it
Configuring share point 2010 just do itMarianne Sweeny
 
Defining the Search Experience
Defining the Search ExperienceDefining the Search Experience
Defining the Search ExperienceMarianne Sweeny
 
Widj social media-is-not-search-v1-1
Widj social media-is-not-search-v1-1Widj social media-is-not-search-v1-1
Widj social media-is-not-search-v1-1Marianne Sweeny
 
Uw Digital Communications Social Media Is Not Search
Uw Digital Communications Social Media Is Not SearchUw Digital Communications Social Media Is Not Search
Uw Digital Communications Social Media Is Not SearchMarianne Sweeny
 
Sweeny Seo30 Web20 Finalversion
Sweeny Seo30 Web20 FinalversionSweeny Seo30 Web20 Finalversion
Sweeny Seo30 Web20 FinalversionMarianne Sweeny
 
Enterprise Search Share Point2009 Best Practices Final
Enterprise Search Share Point2009 Best Practices FinalEnterprise Search Share Point2009 Best Practices Final
Enterprise Search Share Point2009 Best Practices FinalMarianne Sweeny
 
Share Point2007 Best Practices Final
Share Point2007 Best Practices FinalShare Point2007 Best Practices Final
Share Point2007 Best Practices FinalMarianne Sweeny
 
Univ Washington Social Media Marketing
Univ Washington Social Media MarketingUniv Washington Social Media Marketing
Univ Washington Social Media MarketingMarianne Sweeny
 
Sweeny Seo30 Web20 Final
Sweeny Seo30 Web20 FinalSweeny Seo30 Web20 Final
Sweeny Seo30 Web20 FinalMarianne Sweeny
 
Incentive Architecture 1224362486736986 8
Incentive Architecture 1224362486736986 8Incentive Architecture 1224362486736986 8
Incentive Architecture 1224362486736986 8Marianne Sweeny
 
SEO and IA: The Beginning of a Beautiful Friendship
SEO and IA: The Beginning of a Beautiful FriendshipSEO and IA: The Beginning of a Beautiful Friendship
SEO and IA: The Beginning of a Beautiful FriendshipMarianne Sweeny
 

More from Marianne Sweeny (16)

Birds Bears and Bs:Optimal SEO for Today's Search Engines
Birds Bears and Bs:Optimal SEO for Today's Search EnginesBirds Bears and Bs:Optimal SEO for Today's Search Engines
Birds Bears and Bs:Optimal SEO for Today's Search Engines
 
Search Solutions 2011: Successful Enterprise Search By Design
Search Solutions 2011: Successful Enterprise Search By DesignSearch Solutions 2011: Successful Enterprise Search By Design
Search Solutions 2011: Successful Enterprise Search By Design
 
Bearish SEO: Defining the User Experience for Google’s Panda Search Landscape
Bearish SEO: Defining the User Experience for Google’s Panda Search LandscapeBearish SEO: Defining the User Experience for Google’s Panda Search Landscape
Bearish SEO: Defining the User Experience for Google’s Panda Search Landscape
 
Configuring share point 2010 just do it
Configuring share point 2010   just do itConfiguring share point 2010   just do it
Configuring share point 2010 just do it
 
Defining the Search Experience
Defining the Search ExperienceDefining the Search Experience
Defining the Search Experience
 
Not Your Mom's SEO
Not Your Mom's SEONot Your Mom's SEO
Not Your Mom's SEO
 
Widj social media-is-not-search-v1-1
Widj social media-is-not-search-v1-1Widj social media-is-not-search-v1-1
Widj social media-is-not-search-v1-1
 
Uw Digital Communications Social Media Is Not Search
Uw Digital Communications Social Media Is Not SearchUw Digital Communications Social Media Is Not Search
Uw Digital Communications Social Media Is Not Search
 
Sweeny Seo30 Web20 Finalversion
Sweeny Seo30 Web20 FinalversionSweeny Seo30 Web20 Finalversion
Sweeny Seo30 Web20 Finalversion
 
Search V Next Final
Search V Next FinalSearch V Next Final
Search V Next Final
 
Enterprise Search Share Point2009 Best Practices Final
Enterprise Search Share Point2009 Best Practices FinalEnterprise Search Share Point2009 Best Practices Final
Enterprise Search Share Point2009 Best Practices Final
 
Share Point2007 Best Practices Final
Share Point2007 Best Practices FinalShare Point2007 Best Practices Final
Share Point2007 Best Practices Final
 
Univ Washington Social Media Marketing
Univ Washington Social Media MarketingUniv Washington Social Media Marketing
Univ Washington Social Media Marketing
 
Sweeny Seo30 Web20 Final
Sweeny Seo30 Web20 FinalSweeny Seo30 Web20 Final
Sweeny Seo30 Web20 Final
 
Incentive Architecture 1224362486736986 8
Incentive Architecture 1224362486736986 8Incentive Architecture 1224362486736986 8
Incentive Architecture 1224362486736986 8
 
SEO and IA: The Beginning of a Beautiful Friendship
SEO and IA: The Beginning of a Beautiful FriendshipSEO and IA: The Beginning of a Beautiful Friendship
SEO and IA: The Beginning of a Beautiful Friendship
 

Recently uploaded

TRENDS Enabling and inhibiting dimensions.pptx
TRENDS Enabling and inhibiting dimensions.pptxTRENDS Enabling and inhibiting dimensions.pptx
TRENDS Enabling and inhibiting dimensions.pptxAndrieCagasanAkio
 
Company Snapshot Theme for Business by Slidesgo.pptx
Company Snapshot Theme for Business by Slidesgo.pptxCompany Snapshot Theme for Business by Slidesgo.pptx
Company Snapshot Theme for Business by Slidesgo.pptxMario
 
IP addressing and IPv6, presented by Paul Wilson at IETF 119
IP addressing and IPv6, presented by Paul Wilson at IETF 119IP addressing and IPv6, presented by Paul Wilson at IETF 119
IP addressing and IPv6, presented by Paul Wilson at IETF 119APNIC
 
『澳洲文凭』买詹姆士库克大学毕业证书成绩单办理澳洲JCU文凭学位证书
『澳洲文凭』买詹姆士库克大学毕业证书成绩单办理澳洲JCU文凭学位证书『澳洲文凭』买詹姆士库克大学毕业证书成绩单办理澳洲JCU文凭学位证书
『澳洲文凭』买詹姆士库克大学毕业证书成绩单办理澳洲JCU文凭学位证书rnrncn29
 
办理多伦多大学毕业证成绩单|购买加拿大UTSG文凭证书
办理多伦多大学毕业证成绩单|购买加拿大UTSG文凭证书办理多伦多大学毕业证成绩单|购买加拿大UTSG文凭证书
办理多伦多大学毕业证成绩单|购买加拿大UTSG文凭证书zdzoqco
 
PHP-based rendering of TYPO3 Documentation
PHP-based rendering of TYPO3 DocumentationPHP-based rendering of TYPO3 Documentation
PHP-based rendering of TYPO3 DocumentationLinaWolf1
 
『澳洲文凭』买拉筹伯大学毕业证书成绩单办理澳洲LTU文凭学位证书
『澳洲文凭』买拉筹伯大学毕业证书成绩单办理澳洲LTU文凭学位证书『澳洲文凭』买拉筹伯大学毕业证书成绩单办理澳洲LTU文凭学位证书
『澳洲文凭』买拉筹伯大学毕业证书成绩单办理澳洲LTU文凭学位证书rnrncn29
 
Unidad 4 – Redes de ordenadores (en inglés).pptx
Unidad 4 – Redes de ordenadores (en inglés).pptxUnidad 4 – Redes de ordenadores (en inglés).pptx
Unidad 4 – Redes de ordenadores (en inglés).pptxmibuzondetrabajo
 
Film cover research (1).pptxsdasdasdasdasdasa
Film cover research (1).pptxsdasdasdasdasdasaFilm cover research (1).pptxsdasdasdasdasdasa
Film cover research (1).pptxsdasdasdasdasdasa494f574xmv
 
Top 10 Interactive Website Design Trends in 2024.pptx
Top 10 Interactive Website Design Trends in 2024.pptxTop 10 Interactive Website Design Trends in 2024.pptx
Top 10 Interactive Website Design Trends in 2024.pptxDyna Gilbert
 
SCM Symposium PPT Format Customer loyalty is predi
SCM Symposium PPT Format Customer loyalty is prediSCM Symposium PPT Format Customer loyalty is predi
SCM Symposium PPT Format Customer loyalty is predieusebiomeyer
 

Recently uploaded (11)

TRENDS Enabling and inhibiting dimensions.pptx
TRENDS Enabling and inhibiting dimensions.pptxTRENDS Enabling and inhibiting dimensions.pptx
TRENDS Enabling and inhibiting dimensions.pptx
 
Company Snapshot Theme for Business by Slidesgo.pptx
Company Snapshot Theme for Business by Slidesgo.pptxCompany Snapshot Theme for Business by Slidesgo.pptx
Company Snapshot Theme for Business by Slidesgo.pptx
 
IP addressing and IPv6, presented by Paul Wilson at IETF 119
IP addressing and IPv6, presented by Paul Wilson at IETF 119IP addressing and IPv6, presented by Paul Wilson at IETF 119
IP addressing and IPv6, presented by Paul Wilson at IETF 119
 
『澳洲文凭』买詹姆士库克大学毕业证书成绩单办理澳洲JCU文凭学位证书
『澳洲文凭』买詹姆士库克大学毕业证书成绩单办理澳洲JCU文凭学位证书『澳洲文凭』买詹姆士库克大学毕业证书成绩单办理澳洲JCU文凭学位证书
『澳洲文凭』买詹姆士库克大学毕业证书成绩单办理澳洲JCU文凭学位证书
 
办理多伦多大学毕业证成绩单|购买加拿大UTSG文凭证书
办理多伦多大学毕业证成绩单|购买加拿大UTSG文凭证书办理多伦多大学毕业证成绩单|购买加拿大UTSG文凭证书
办理多伦多大学毕业证成绩单|购买加拿大UTSG文凭证书
 
PHP-based rendering of TYPO3 Documentation
PHP-based rendering of TYPO3 DocumentationPHP-based rendering of TYPO3 Documentation
PHP-based rendering of TYPO3 Documentation
 
『澳洲文凭』买拉筹伯大学毕业证书成绩单办理澳洲LTU文凭学位证书
『澳洲文凭』买拉筹伯大学毕业证书成绩单办理澳洲LTU文凭学位证书『澳洲文凭』买拉筹伯大学毕业证书成绩单办理澳洲LTU文凭学位证书
『澳洲文凭』买拉筹伯大学毕业证书成绩单办理澳洲LTU文凭学位证书
 
Unidad 4 – Redes de ordenadores (en inglés).pptx
Unidad 4 – Redes de ordenadores (en inglés).pptxUnidad 4 – Redes de ordenadores (en inglés).pptx
Unidad 4 – Redes de ordenadores (en inglés).pptx
 
Film cover research (1).pptxsdasdasdasdasdasa
Film cover research (1).pptxsdasdasdasdasdasaFilm cover research (1).pptxsdasdasdasdasdasa
Film cover research (1).pptxsdasdasdasdasdasa
 
Top 10 Interactive Website Design Trends in 2024.pptx
Top 10 Interactive Website Design Trends in 2024.pptxTop 10 Interactive Website Design Trends in 2024.pptx
Top 10 Interactive Website Design Trends in 2024.pptx
 
SCM Symposium PPT Format Customer loyalty is predi
SCM Symposium PPT Format Customer loyalty is prediSCM Symposium PPT Format Customer loyalty is predi
SCM Symposium PPT Format Customer loyalty is predi
 

AI for IAs

  • 1. AI for Information Architects User Experience, Content Strategy and Design 1 Marianne Sweeny IA Conference 2019 Workshop
  • 2. 2 During our day together we will examine What constitutes intelligence Consciousness as part of the human experience Differences between Machine Learning and Artificial Intelligence Where did AI come from? AI issues: safety, ethics and privacy The intersections of AI, IA, user experience and content strategy How to use this knowledge to design for humans and machines
  • 3. Link to Workshop PDF Tiny URl
  • 4. Instructions for Living a Life Pay attention Be astonished Tell about it 4 Mary Oliver
  • 5. Foresight and Hindsight All technological change is a trade-off The advantages and disadvantages of new technologies are never distributed evenly among the population Embedded in every technology there is a powerful idea, sometime two or three ideas Technological change is not additive; it is ecological Media tend to become mythic (Computationalism) 5
  • 6. Why Is This Important? Because they are developing an GUI for AI that you will be able to use to build artificially intelligent interactions 6
  • 7. Why is This Important? (2) Because, unlike the Manhattan Project, there is no governance over who is doing what. 7
  • 8. Why is this Important? (3) Because AI is not infallible yet the consequences are forever 8
  • 9. Why is This Important (4) Because it is our job as information architects, user experience professionals, content strategists, human factors professional 9
  • 10. Why is This Important (5) Quantum Around the Corner Google said it had already devised machine-learning algorithms that work inside the quantum computer, which is made by D-Wave Systems of Burnaby, British Columbia. …The most effective methods for using quantum computation, Google said, involved combining the advanced machines with its clouds of traditional computers.
  • 12. Mai Quality of Information Information (Intelligence) is part of a spectrum Data >> Information >> Knowledge Information quality depends on individual characteristics – Contextual – Situational – Environmental – Emotional Machines use captured personal data
  • 14. Biological System Levels of Reasoning Computable outcome (goal) Steps/instructions to realize outcome (algorithm) Implementation of program (realization of goal) 14
  • 15. Artificial Meaning “Context has always been part of expression because expression become meaningless if context becomes arbitrary…meaning is only ever meaning(ful) in context. … Any gadget, even a big one like Singularity, gets boring after awhile. But a deepening of meaning is the most intense potential kind of adventure available to us.” 15
  • 16. Artificial Context Context becomes what the system can measure • Environmental features • Interactions • Ubiquitous computing • Internet of things (IoT) Non-methodical approach that brings in containment (social through local) interactions • Adaptive/reactive interaction in situ • Context as perceived and used by actor 16
  • 17. Information Cascade A group of agents behaving rationally can fall prey to infinite misinformation • US Vaccination controversy Information Cascade: when rational theory is based on filter bubbles, hive mind Cascade is caused by a misinterpretation of what others think based on external observation of their actions More concerned with judgement fitting existing consensus than the visible facts 17
  • 18. Intelligence Explosion Human-level AI will lead to super human AI • Uncontrolled intelligence explosion without human-level intentionality that is the result of consciousness • Program self-improves to state that exceeds ability for outside control Intelligence here measured by ability to attain goal in most efficient manner 18
  • 19. Information Explosion Components Components • Increased computational resources • Duplicability • Editability • Goal Coordination Accelerators • Hardware capacity • Better algorithms • Massive datasets • Psychology and neuroscience applications • Accelerated science (quantum computing) • Economic incentives (labor $ reduction) 19
  • 20. Computationalism World can be understood by computational processes with humans as sub processes 1st Flavor: logical positivism 2nd Flavor: computer program with features related to self representation and circular references similar to that of a person 3rd Flavor: information structure that can be perceived by some real human to also be a person (Turing Test) 20
  • 21. Solutionism Silicon Valley assumption of a quantifiable self that is the truer self There’s an app for everything False notion that Internet is a coherent and stable influence in our lives Grasping easy digital solutions often ignores complex causes behind Sometimes right algorithms can lead to wrong answers 21
  • 22. Subjectivism John Searle: AI not possible in any way because consciousness is a physical property of the brain that produces a subjective experience Thomas Nagal: computers do not have subjectivity (private landscape with personal experience). Cannot create Subjective Reality • Intangible way to intelligence • Philosophical concept focused on sense of self and components (experience, perspective, belief, emotion, consciousness) • Composed of understanding and intentionality Introspection is key 22
  • 24. Gelernter: Tides of Mind Humans have a knowledge of core concepts related to the physical world = consciousness Consciousness allows for building more robust mental models that enable inference and prediction Key question going unanswered: What is the human mind without the human being? The mind is consciousness (objects & events) plus memory (occurred outside of the mind) Thinking has intuitive meaning tied to consciousness • Perception • Recollection • Idea 24
  • 25. Gelernter on Consciousness “Conscious experiences range from vivid color sensations to experience of the faintest background aromas; from hard-edged pains to the elusive experience of thoughts on the tip of one’s tongue. . . . All these have a distinct experienced quality. . . . To put it another way, we can say that a mental state is conscious if it has a qualitative feel—an associated quality of experience…” 25
  • 26. Consciousness Spectrums Up-spectrum • Live in the present • Outer consciousness: external world (bodies) • Feeds memory up • Thinking is focused, disciplined systematic Down-spectrum • Recall, revisit, reoccupy the past • Dreams are re-experiencing memories in the form of thought • Recollections, ideas 26
  • 27. Magical Thinking Magical thinking = things only imagined become real Cannot “learn” to be creative Creativity is repurposing in a way that software cannot because it involves: • Ignoring limits • Curiosity Inspire but not force creative insight “Where the confines of the waking world blend with those of dreams.” Edgar Allen Poe 27
  • 28. Dreams “Dreams tell us truths that we know but are not brave enough to acknowledge.” Remembering out of control Dreams = emotions + hallucinations 28
  • 29. 29 For too long, emotionhas been cognitive researchers’ third rail. In research on humans, emotions were deemed irrelevant,impossible to study or beneath scientific notice…But nothing could be more essential to understanding how people and animals behave. Sy Montgomery, NYT Mar 3 2019
  • 30. Emotion Primary emotions • Interest • Pleasure • Distress Secondary emotions • Anger • Fear • Disgust • Happiness • Sadness • Surprise Emotional Resonance: ability to feel/echo someone else’s feeling • Empathy • Sympathy • Essential to the human experience 30
  • 31. Emotions (2) Play key role in decision-making, creativity and intelligence (EQ) Sentic Modulation • Facial Expression • Voice: utterance, timing, pitch Learning is the quintessential learning experience 31
  • 32. Affective (emotional) Computing Assumption of small set of emotions to make programming easier Assumes binary nature of emotions (cannot be angry and pleased) Conversational signals • Syntactic displays • Speaker Displays • Listener Response displays Emotionally-basedcomputers • Same emotional ability similar to a dog, neither personal or friendly • Computer voice with intonation and natural expression • Computer perceives emotional state and responds appropriately • Maximized sentic communication between human and computer, personal and “user-friendly” 32
  • 33. MACHINE LEARNING Capacity for system to improve performance through experience, not explicit instructions 33
  • 34. Machine Learning: Programs that act like humans 34
  • 35. Machine Learning A programming approach to problem-solving– composite of not a single algorithm Model of real world using mathematic structure with decision-makingrules Derives rules from a data set Objective function = desiredoutcome Training set with adjusted parameters until goal achieved Test set used to validate accuracy and effectiveness Machine completes an objective without specific instructions 35
  • 36. 36
  • 38. Supervised Learning Uses document-class pairs to indicate proper classes for given documents Used human specialists for classification of “training set” used to “teach” system • Assigns classes to documents • Reviews machine classification performance 6 Algorithm types • Decision Trees • Nearest neighbor • Relevance Feedback • Naïve Bayes • Support Vector Machine • Ensemble 38
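A toy version of supervised classification from document-class pairs, using the nearest-neighbor idea from the algorithm list above (the training documents and labels are invented):

```python
# Toy nearest-neighbor classifier over document-class pairs, as described above.
training_set = [
    ("cheap flights hotel booking", "travel"),
    ("flight deals airline tickets", "travel"),
    ("python code compiler bug", "programming"),
    ("debug software function loop", "programming"),
]

def overlap(a, b):
    """Similarity = number of shared words (a crude stand-in for real weighting)."""
    return len(set(a.split()) & set(b.split()))

def classify(doc):
    # Assign the class of the most similar "taught" document.
    return max(training_set, key=lambda pair: overlap(doc, pair[0]))[1]

print(classify("airline flight booking"))   # travel
print(classify("python loop bug"))          # programming
```

A human specialist built the training set; the machine only generalizes from it, which is exactly the division of labor the slide describes.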
  • 39. Probabilistic Machine Learning Probabilistic framework can represent and manipulate uncertainty Requires high capacity for flexibility to allow data to “speak for itself” Universal inference engine using Monte Carlo 39
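One way to "let the data speak for itself" under uncertainty is Monte Carlo sampling. A hedged sketch, with invented data: estimate the posterior mean of a coin's bias after 8 heads in 10 flips, using a uniform prior and importance sampling.

```python
import random
random.seed(0)

# Represent uncertainty about a coin's bias p with Monte Carlo samples:
# draw p from a uniform proposal, weight each sample by the likelihood
# of the observed data (8 heads, 2 tails), then take a weighted mean.
samples = [random.random() for _ in range(100_000)]
weights = [p ** 8 * (1 - p) ** 2 for p in samples]

posterior_mean = sum(p * w for p, w in zip(samples, weights)) / sum(weights)
print(round(posterior_mean, 2))   # ≈ 0.75, the Beta(9, 3) posterior mean
```

The output is a full distribution summarized by its mean, not a single if/then answer — the "flexibility" the slide attributes to probabilistic frameworks.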
  • 40. Reinforcement Learning Program learns reward from human feedback then optimizes reward function • Rewards o Sampled o Evaluative o Sequential • Optimized reward function • Reward must be explicit to avoid being “gamed” Issues with tasks and goals • Too complex • Hard to specify • Poorly defined 40
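A hedged sketch of reward-driven learning: tabular Q-learning on an invented five-state corridor, where the reward is sampled, evaluative and sequential as the slide describes (the task and all settings are toy assumptions, not from the deck):

```python
import random
random.seed(0)

# Toy Q-learning: states 0..4, reward only in state 4; action 0 = left, 1 = right.
Q = [[1.0, 1.0] for _ in range(5)]   # optimistic start encourages exploration
alpha, gamma = 0.5, 0.9

for _ in range(500):                 # episodes
    s = 0
    for _ in range(100):             # step cap per episode
        a = random.randrange(2) if random.random() < 0.1 else (0 if Q[s][0] > Q[s][1] else 1)
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == 4 else 0.0  # sampled, evaluative, sequential reward
        target = r if s2 == 4 else r + gamma * max(Q[s2])
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2
        if s == 4:
            break

policy = ["left" if Q[s][0] > Q[s][1] else "right" for s in range(4)]
print(policy)   # the optimized policy should move right, toward the reward
```

Note that the reward here is explicit and simple; the slide's warning about "gamed" rewards applies exactly when this reward function fails to capture the real goal.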
  • 41. Transfer Learning (CS) Reason relationally Requires conceptual representation produced by abstract structural knowledge (that is where we humans come in) Generalizations are transferred to environments that share structures, e.g. mental models 41
  • 42. ARTIFICIAL INTELLIGENCE Solve problems that the mind can solve using derived intelligence instead of a prearranged set of rules 42
  • 43. Generalized Intelligence Spearman coefficient to measure intelligence, correlation measure, if/then G Factor: general level of intelligence possessed by an individual Quantified intelligence represented by a number Used to rank people by IQ 43
  • 44. 44
  • 45. Neuroscience of Algorithms Deep Learning • Distributed interactions • Tuned by learning procedures • Stochastic (random) parallel information processing Convolutional neural networks • Convergent and divergent information flows • Nonlinear transduction • Maximum-based pooling of inputs 45
  • 46. Semantic Computing Segment and match instructions Associations to understand human behavior and predict actions Requires semantic matching • Control layer (input) • Semantic mapping layer (ontology) • Device layer Requires user and behavior models (persona) Semantic reasoning module confirms user intentions 46
  • 48. Deep Learning Components Collection of trainable math units which collaborate to compute complicated functions HUGE raw data training set Results get better with more data, new/better algorithms based on observation and insight Requirements • Scalable • Portable • Reproducible • Extensible • Powerful processing hardware 48
  • 49. AI Types Logical Reasoning Knowledge Representation Planning and Navigation Natural Language Processing Perception 49
  • 50. Embodied Agents Internet of Things Goal driven planning Reactive agents Search 50
  • 51. What AI Best Suited To Search Learning Systems Pattern Recognition Planning Induction 51
  • 52. Search Requires additional structure Near to | Close to expansion Solve for one, Solve for many Personalization 52
  • 53. Learning Systems Use past behavior to predict future action using human planned heuristic methods A reinforced learning model that leads to a secondary reinforcement model that is more autonomous • Reinforcement is reward • Extinction is unlearning Grade on curve of computer’s acquired capability 53
  • 54. Pattern Recognition Ability for computer to act intelligently based on input data with a lot of variability • Decision Trees • Nearest neighbor classification • Neural Networks Classification Ideal replaced by practical 54
  • 55. Planning & Problem-Solving Large assembly of interrelated sub-problems Given a start state and desired outcome state Choose appropriate sub-problems for solving selected problem Success is most efficient set of actions to achieve desired outcome 55
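The planning loop above — a start state, a desired outcome state, and a search for the most efficient set of actions through interrelated sub-problems — can be sketched as a shortest-path search over a toy state graph (the graph is invented):

```python
from collections import deque

# Breadth-first search: given a start state and a desired outcome state,
# return the most efficient (shortest) sequence of sub-states between them.
graph = {
    "start": ["a", "b"], "a": ["goal"], "b": ["c"], "c": ["goal"], "goal": [],
}

def shortest_plan(start, goal):
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path              # first path found is the shortest
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_plan("start", "goal"))   # ['start', 'a', 'goal']
```

Each intermediate node stands in for a chosen sub-problem; success is the cheapest chain of them that reaches the desired outcome.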
  • 56. Induction Learning by example Derive the rule from set of observed instances Classification key component • A learning system has to be capable of evolving its own class descriptions • The task of constructing class definitions is called induction or concept learning 56
  • 57. AI Models Base Models • Learning • Prediction: create actions to respond to learning Sub modules • Data analysis • User Identification • Behavior recognition • Service construction and provisioning 57
  • 58. Thought Vectors Geoffrey Hinton – Google Research Fellow Encode thoughts as sequences of numbers (vector) Software learns to recognize patterns in these digital representations “If you take the vector for Paris and subtract the vector for France and add Italy, you get Rome,” he said. “It’s quite remarkable.” Geoffrey Hinton 58
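Hinton's Paris − France + Italy ≈ Rome example can be demonstrated with toy 2-D vectors. Real embeddings have hundreds of learned dimensions; the values below are invented so the arithmetic works out exactly:

```python
# Toy "thought vectors" chosen by hand so that paris - france + italy = rome.
vectors = {
    "france": (1.0, 0.0), "paris": (1.0, 1.0),
    "italy":  (2.0, 0.0), "rome":  (2.0, 1.0),
}

def add(a, b):  return (a[0] + b[0], a[1] + b[1])
def sub(a, b):  return (a[0] - b[0], a[1] - b[1])
def dist(a, b): return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

query = add(sub(vectors["paris"], vectors["france"]), vectors["italy"])
nearest = min(vectors, key=lambda w: dist(vectors[w], query))
print(nearest)   # rome
```

The "capital-of" relationship lives in the (0, 1) offset shared by both country/capital pairs; learned embeddings encode such offsets without anyone programming them in.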
  • 59. Google BERT AI to carry on decent conversation (Turing test) Learn general vagaries of language and apply to specific task Analyzed millions of sentences • Self-published literature • Entire Wikipedia Goal to predict next word and understand the fundamental relationships between words 59 Bidirectional Encoder Representations from Transformers
  • 60. Singularity State when humans and machine merge Concept popularized by Ray Kurzweil, Google fellow 6 Epochs of Evolution • Physics & Chemistry • Biology • Brains • Technology • Merger of Technology & Human Intelligence • Universe Integration 60
  • 61. Super Intelligence Traits • Capacity to learn • Capacity to deal with uncertainty • Ability to extract concepts from data and internal state • Ability to leverage acquired concepts for combinatorial representations for logical & Intuitive reasoning • Capacity for unrestrained self-improvement (overwrite its own code) Types • Speed (faster than human mind) • Quality (faster and much smarter than human) • Collective (aggregates performance of lesser intelligences) External governance: None 61
  • 62. And We Go Boldly Into the Whirling Knives* AI might achieve a strategic advantage Orthogonality Thesis: cannot assume that AI would be able to share our biological values • Culture • Kindness • Spiritual enlightenment Instrumental Convergence Thesis: cannot assume that Super AI would be satisfied with a supportive or subservient role Super AI could develop a final goal that is not anthropomorphic 62 Existential Risks
  • 63. Whirling Knives (2) Perverse Instantiation: satisfy goal in a way that violates programmed intent Infrastructure profusion: over consumes resources to achieve more reward Mind crime: AI creates processes with moral states (sentient simulations) 63 Practical Risks
  • 64. 64
  • 67. Turing (1948) Imitation Game Reward Signal increases probability of repetition events leading to it Suitable imperative: one that regulates the order in which the rules of the logical system are applied Program the learning Objections to machine intelligence • Theological • Head in Sand • Mathematical/philosophical • Consciousness • Lady Lovelace 67
  • 68. Dartmouth Summer Research Project (1956) Top AI scientists proposed concentrated effort for AI Focused on • Computer use of language • Neuron Nets • Self improvement • Abstraction • Randomness and creativity Follow up conference held in 2006 68 Human Factors professionals NOT INVITED
  • 69. Perceptron (1957) Developed by Frank Rosenblatt, psychologist at Cornell Artificial neural networks (ANN) = many interconnected processing units for parallel processing Trained not programmed 1. Input addition 2. Comparison with threshold value 3. If threshold met or surpassed, output activation Modifiable connections adjusted according to “learning” algorithm Perceptrons are not without limitations 69
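Rosenblatt's unit can be sketched in a few lines — input addition, comparison with a threshold value, activation, and a "learning" adjustment of the modifiable connections. Here it is trained, not programmed, on logical AND (the task and learning rate are illustrative choices):

```python
# Toy perceptron: weighted input sum, threshold comparison, error-driven update.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]   # logical AND

w = [0.0, 0.0]
bias = 0.0
lr = 0.1
for _ in range(20):                   # "trained not programmed"
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + bias >= 0.5 else 0  # threshold unit
        err = target - out
        w[0] += lr * err * x1         # adjust the modifiable connections
        w[1] += lr * err * x2
        bias += lr * err

print([1 if w[0] * a + w[1] * b + bias >= 0.5 else 0 for (a, b), _ in data])
# [0, 0, 0, 1]
```

The closing caveat on the slide is real: a single perceptron can only learn linearly separable functions, which is why it solves AND but famously cannot solve XOR.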
  • 70. Pandemonium (1959) Decision making entity involving 4 “demons” Top Layer decides what information has been presented to the system (discernment) Feed-forward and feedback connections between layers Requirements: • Well defined problem • Unbiased decision making • Single tamper-proof labeling of behavior 70
  • 71. Pandemonium Feed Forward Layers Bottom layer: store data 3rd Layer: select, weigh, filter and pass along data 2nd Layer: “cognitive demons” decide which information from 3rd layer to process Decision layer: single decision demon on what information is presented to the system for processing 71
  • 72. Marvin Minsky (1960) Cognitive computer scientist Co-founder MIT AI Laboratory Symbolic AI With Seymour Papert brought forth a 20 year “AI winter” with criticism of early AI Artificial Neural Network (ANN) approach 72
  • 73. Minsky on What AI Best Suited To Search Learning Systems Pattern Recognition Planning Induction 73
  • 74. Minsky on Creativity “There’s no such thing as “creativity” in the first place. I don’t believe there’s any substantial difference between ordinary thought and creative thought…I’ll argue that this is really not a matter of what’s in the mind of the artist – but what’s in the mind of the critic…” 74
  • 75. ELIZA (1963) 1st Instance of human mediated chatbot Early computer/human conversation (NLP) Heuristic programming • Keyword identified (input) • Sentence transformed according to rule associated with identified keyword • Choose appropriate transformation – if none available, choose most likely/earlier transformation • Generate responses Keyword dictionary contains composition, assembly and decomposition rules 75
  • 76. Eliza and Global Context Global Context is key to understanding Sub-contexts emerge as conversation continues for consequential richness Individual participants bring their own belief structure ELIZA scripts (previous learnings) establish a global context for future “understandings” Broad context framework only 76
  • 77. Dreyfus (1964) Create computer systems with the intelligence and reasoning of a human adult Rationalist assumption of “ordered reality” is flawed Knowledgeable reality itself lacks rational structure Inter-relatedness between humans and the world Human world filled with experience structures – neither subjective nor objective AI discovers meaningful structures to apply to meaningful behaviors, independent of fixed rules 77
  • 78. Norvig & Russell (2004) Systems that act like humans (Turing) System that think rationally (logic solvers) Systems that act rationally (perception, NLP, Planning, Navigation) Systems that think like humans (neural) 78
  • 79. Sloman Minsky Model (2000) 79
  • 80. St Thomas Symposium (2004) Need synthesis of methodologies Move from reactive to deliberative thinking Include affective concepts like emotions • Primary • Secondary • Tertiary Incorporate “common sense” thinking Source of human resourcefulness and robustness 80 Human Factors professionals NOT INVITED
  • 81. Asilomar Conference 2009 81 Human Factors professionals NOT INVITED
  • 82. Asilomar AI Research Needs: HCI 82 Human Factors professionals NOT INVITED
  • 83. Stanford 100 Year AI Study (2016) Long-term recurring study of AI influence on people and society Modeled after Association for the Advancement of Artificial Intelligence (AAAI) consortium 2008 4 intended audiences • General public • Industry • Government • AI Researchers 83 Human Factors professionals NOT INVITED
  • 84. 5 Dominant Tribes of AI (2017) Symbolists: logical reasoning Connectionists: structures inspired by human brain Evolutionaries: methods inspired by Darwin theory of evolution Bayesians: probabilistic inferences (google and others) Analogizers: extrapolate from previously seen examples 84
  • 85. Two Schools of AI Symbol Processing Neural nets 85
  • 86. Symbolic AI Intelligence = symbol manipulation Fixed and formal rules Assume: all intelligent processes are forms of information processing Computer processes symbolic representations (1s/0s) according to formal rules (program) Plato’s rationalism GOFAI 86
  • 87. Artificial Neural Networks Connectionism Neural networks made up of input layer, interstitial layers and output layer Good at: pattern recognition, categorization, and behavior coordination Knowledge comes from the connections not symbol interpretation Past experience used to form intelligence in current state Heideggerian AI 87
  • 88. 88
  • 89. Learning Systems Use past behavior to predict future action using human planned heuristic methods Reinforced learning model that leads to a secondary reinforcement model that is more autonomous • Reinforcement is reward • Extinction is unlearning Grade on curve of computer’s acquired capability 89
  • 90. Pattern Recognition Ability for computer to act intelligently based on input data with a lot of variability • Decision Trees • Nearest neighbor classification • Neural Networks Classification Ideal replaced by practical Constant decision what problem to work on • Value based Pandemonium 90
  • 91. Planning & Problem-Solving Large assembly of interrelated sub-problems Choose appropriate sub-problems for solving selected problem Logic Theory: prove theorem using heuristics: • Similarity test • Simplicity test • Strong non-provability test Heuristic programming 91
  • 92. Heuristic Programming Early training for AI Self-learning • Substitutes machine learning for logic algorithms • Ranks alternatives in a branching decision tree Achieves an approximation of the exact solution ELIZA 92
  • 93. Optimal Stopping Computer science problem Stop too early and you may miss the best candidate Stop too late and you hold out for a perfection that doesn’t exist Threshold rule: establish an optimal stopping point and take the first candidate above that percentile Establish a “period of no decision” – predetermined amount of time for looking then a leap phase of commit 93
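The look-then-leap rule can be simulated directly. A sketch using the classic secretary-problem setup (a standard illustration, not from the deck): skip the first n/e candidates as the "period of no decision", then commit to the first candidate better than everything seen so far.

```python
import random, math
random.seed(1)

def secretary_trial(n):
    ranks = list(range(n))          # 0 = the best candidate
    random.shuffle(ranks)
    cutoff = int(n / math.e)        # the "period of no decision"
    best_seen = min(ranks[:cutoff])
    for r in ranks[cutoff:]:
        if r < best_seen:           # leap: first candidate above the threshold
            return r == 0           # did we stop on the best candidate?
    return ranks[-1] == 0           # forced to take the last one

trials = 10_000
wins = sum(secretary_trial(50) for _ in range(trials))
print(wins / trials)                # ≈ 0.37, the classic optimal-stopping rate
```

Stopping earlier or later than the ~37% cutoff lowers that success rate, which is exactly the trade-off the slide names.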
  • 94. Explore | Exploit Tradeoff Explore: gathering information Exploit: using the information gathered to produce a good result Value of explore declines over time Value of exploit increases over time Exploration has inherent value of finding the best candidate “To live in a restless world requires a certain restlessness in oneself…you must never fully cease exploring.” p.54 94
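An epsilon-greedy bandit is the textbook sketch of this trade-off, with the value of exploration declining over time exactly as the slide says (the payout rates and decay schedule below are invented):

```python
import random
random.seed(0)

# Explore/exploit on a 3-armed bandit with hidden payout rates.
true_rates = [0.3, 0.5, 0.8]
counts = [0] * 3
totals = [0.0] * 3

for t in range(5000):
    eps = 1.0 / (1 + t * 0.01)      # value of exploration declines over time
    if random.random() < eps:
        arm = random.randrange(3)   # explore: gather information
    else:                           # exploit: best observed average payout
        arm = max(range(3), key=lambda a: totals[a] / counts[a] if counts[a] else 1.0)
    reward = 1.0 if random.random() < true_rates[arm] else 0.0
    counts[arm] += 1
    totals[arm] += reward

best = max(range(3), key=lambda a: counts[a])
print(best)   # the most-pulled arm should be the best one (index 2)
```

Early on, nearly every pull is exploration; late in the run, nearly every pull exploits the arm the gathered information says is best.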
  • 95. The Principle of Beneficence Philosophic concept tied to ethics Condition of “do no harm” in medicine Possible harm to few in order to benefit many Philippa Foot and moral dilemmas (train switch scenario) Who decides the winners and losers of AI?
  • 97. AI ISSUES 97 The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka’ but “that’s funny…” Isaac Asimov
  • 98. Learning is one Thing…Thinking Another “In designing software and microprocessors, I have never had the feeling that I was designing an intelligent machine. The software and hardware is so fragile and the capabilities of the machine to “think” so clearly absent that even as a possibility, this has always seemed very far in the future…My personal experience suggest we tend to over estimate our design abilities.” 98
  • 99. Sometimes They Learn the Wrong Things 99
  • 100. Sometimes They Get Things Wrong 100
  • 101. Sometimes They Do the Wrong Thing 101
  • 102. Sometimes They Build the Wrong Things? Built as a proof of concept for AI gone wrong with biased data MIT AI Lab Dataset was a sub-reddit dedicated to document the “disturbing reality of death.” 102
  • 103. 103
  • 105. User Metrics Training Data Frequency of access Click-through (selection from results set) Time on site Pages per session Bounce Rate Conversion (fulfilled information need) Profile data 105
  • 106. Implicit Collection Implicit (max precision 58%) • Software agents • Logins • Enhanced proxy servers • Cookies • Session IDs Gathered without user awareness from behavior • Query context inferred • Profile inferred • Less accurate • Requires a lot of data 106
  • 107. Explicit Collection Explicit (max precision 63%) • HTML forms • Explicit user feedback interaction (early Google personalization with More Like This) Provided by user with knowledge More accurate as user shares more about query intent and interests 107
  • 108. What Constitutes a User Profile Information types • Demographic • Interests (short & long-term) • Preferences Profiles are dynamic and iterate over time Represented as • Set of weighted keyword • Weighted concepts • Semantic network 108
  • 109. Google on Privacy (2007) “There was a small trade off on privacy but they’re going to get dramatically better search results. That was something that made sense to us over time.” Marissa Mayer VP User Experience Google 109
  • 110. Google on Privacy Now (2019) https://twitter.com/jason_kint/status/11054840 10183188480 110
  • 111. What Google Collects Implicit Use information Device information Log information Unique application information Local storage Cookie data Explicit Location information Profile information 111
  • 113. Methods Client Side: gather data from user profile Server Side: gather data from system usage (logs) Group-ization: Recommender system with vested interest Member data used to rank the individual results • Relevance weight enhanced as more members of the group “like” a resource • Sum of personalization scores of each group member 113
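The "sum of personalization scores of each group member" rule can be sketched directly (the member names, pages and scores are invented):

```python
# Group-ization sketch: a resource's rank is the sum of each group
# member's personalization score for it.
group_scores = {
    "alice": {"page1": 0.9, "page2": 0.1},
    "bob":   {"page1": 0.6, "page2": 0.8},
    "carol": {"page1": 0.7, "page2": 0.2},
}

def group_rank(resource):
    return sum(member.get(resource, 0.0) for member in group_scores.values())

ranked = sorted(["page1", "page2"], key=group_rank, reverse=True)
print(ranked)   # page1 (score 2.2) outranks page2 (score 1.1)
```

Each additional member who "likes" a resource raises its score for every member of the group, which is the relevance-weight enhancement the slide describes.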
  • 114. Google Personalization Tracks • What is selected • Level of interaction • What is not-done (bounce rate) Signals • Location • Search history Less specific queries benefit the most as they require the additional context provided by personalization 114
  • 115. Facebook Security Lapses 2009: User information made public without permission 2014: manipulated news feed to see if the system could assess mood 2018: Revealed Cambridge Analytica sold FB data 115
  • 116. Prediction Drawbacks AI algorithms rely on past behavior to predict future behavior Programming and test set must define “normal” for the system to detect “abnormal” Cannot predict what has not already occurred • Taleb’s black swans • Flash Crash of 2010 Past behavior prediction ignores present environment and emotional influences 116
  • 117. Privacy Paradox Privacy risk is weighed against value of object, interaction, end result • Research assumes user calculates an internalized value • Basis for choice to reveal personal identification information (PII) Value is determined by the smoothness of the interaction (Groupon, Amazon Local) • Value proposition overrides security/privacy concerns Higher level of user control over PII reduces the perception of risk 117
  • 118. Tim Cook on Privacy Called on US to pass comprehensive data security act along the lines of GDPR 4 guiding Principles • Right to have personal data minimized • Right to know what is being collected and why • Right to data security • Right to access 118
  • 119. If you’re not paying for it, YOU are the product 119
  • 121. AI Ethics = TL2 121 Popular search engine returns 215,000,000 results for AI ethics
  • 122. “Algorithms are opinions embedded in code.” Cathy O’Neill Weapons of Math Destruction (2016) 122
  • 123. Algorithmic Bias Technology inherits ideas and values of the group that develops it Algorithm development rests on emotional capitalism • Emotional capitalism: feelings can be managed rationally and governed by logic • Emotional socialism: suffering is unavoidable and should be tolerated Accept decisions from an automated system as agnostic 3 types • Implicit (absorbed automatically) • Accidental (introduced by ignorance) • Deliberate 123
  • 125. Governance Issues Explanation (transparency) • Core components • Local Explanation: explain for specific decision, not system as a whole • Counterfactual Faithfulness: expect the explanation to be causal and can be provided without providing contents of the system • Provide in situations where a person would be required to do so Regulation • Regulators don’t understand what they are regulating • Risk of stifling innovation Applications (consistency) • Impact beyond decision-maker • Know if AI behaving erroneously 125
  • 126. Accountability Under Law Explanation • Core components • Local Explanation: explain for specific decision, not system as a whole • Counterfactual Faithfulness: expect the explanation to be causal and can be provided without providing contents of the system • Provide in situations where a person would be required to do so Regulation • Regulators don’t understand what they are regulating • Risk of stifling innovation Consistency of Application • Impact beyond decision-maker • Know if AI behaving erroneously 126
  • 127. Bias Remedies Design thinking HCI heuristics as well as performance benchmarks HCI professionals testing prior to live site deployment Diversity/bias audits Accountability 127
  • 128. 128
  • 129. Explainable AI (xAI) xAI = field of research addressing interpretability and explainability in ML and AI • Compliance with relevant legislation • Broader range of debugging • Those working on the system learn from it • Enhanced trust in system decision-making (including scenarios where it can break down) AI is a black box for those outside of computer science AI development must shift from ad-hoc models toward decision-making that is more trustworthy • Contrastive (present alternative data points) • Counter-factual (changes in features that would lead to a different outcome) 129
  • 130. UK Parliament AI CoC 130 Application of a cross-sector code for the development of AI applications • Artificial intelligence should be developed for the common good and benefit of humanity. • Artificial intelligence should operate on principles of intelligibility and fairness. • Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities. • All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence. • The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.
  • 131. AI NOW Initiative (2018) Kate Crawford (Microsoft) and Meredith Whittaker (Google) Founded to deal with issues of AI diversity and inclusion Conduct empirical studies focused on • Bias and inclusion • Labor and automation • Infrastructure and safety • Basic rights and liberties 131
  • 133. Not Good AI Used for negative outcomes • Autonomous weapons • Biased facial recognition Used for malicious purposes • Fake news • Denial-of-service attacks 133
  • 134. Generative Adversarial Networks Dueling neural networks • 1 to generate an image from a data set • 1 to determine if the image came from the data set AI cop and counterfeiter game of cat and mouse 134
  • 135. AI Risks Mis-specified Objectives Negative Side Effects that extend to wider application Hacking: rewards, devices Bad extrapolation of the real world Poor training data Privacy Fairness Abuse Transparency 135
  • 136. AI Risk Mitigations Define impact regulator • Future state • Substitutes lower impact null actions Train impact regulator • Over many tasks • Separate training parameters for task side effects Penalize influence • Use information-theoretic measures to capture agent’s potential for information • Penalize empowerment Provide scalable oversight with multi-agent approach 136
  • 137. AI Risk Mitigations 2 Use Objective functions to capture designer informal intent • No partially observed goals • Concrete, not abstract rewards • Deep correlation between tasks and functions Feedback loops • Model look ahead • Reward capping • Counter example resistance – combination of rewards 137
  • 138. AI Risk Mitigations 3 Safe exploration • Risk sensitive performance criteria • Use demonstration • Simulated exploration Well defined models • Train on multiple distributions • Program for out-of-distribution situations 138
  • 140. DESIGN FOR MACHINES & HUMANS The real meets the artificial 140
  • 141. Human-centered design has expanded from the design of objects (industrial design) to the design of experiences (encompassing interaction design, visual design and the design of spaces). The next step will be the design of system behavior; the design of algorithms that determine the behavior of automated intelligent systems Harry West CEO, Frog Design 141
  • 142. Machine Users Are Different Logic: exacting, context independent, conditional logic Development: uses explicit rules to define possible behaviors • Heuristics • Intuition derived from huge data sets 142
  • 144. Information Architecture and AI Problem definition and structure Connections Proto-typicality (mental models) Visual complexity (rely on text more than images) 144
  • 145. Form IA and AI Strategies Customer Empathy Framework • Define the problem • Formulate the solution • Map the environment (customer journey) Tools • Personas (use cases) • Problem statements • Environment description (include systems and processes) • Success benchmark success (quantitative, qualitative) 145
  • 146. Create Meaningful Structures Site Structure • Machine readable text • Related content model • Schema markup Internal linking to reinforce context relationships and discovery 146
  • 147. Structured Data Name the components on the page for the machine user 147
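One common way to "name the components on the page for the machine user" is schema.org markup serialized as JSON-LD. The sketch below builds a minimal Article object in Python; every field value is a placeholder, not a real page:

```python
import json

# Schema.org Article markup as JSON-LD: names page components for machines.
# All values below are placeholders for illustration.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "AI for Information Architects",
    "author": {"@type": "Person", "name": "Example Author"},
    "datePublished": "2019-03-01",
}

jsonld = json.dumps(article, indent=2)
print(jsonld)  # paste inside a <script type="application/ld+json"> tag on the page
```

The crawler no longer has to infer which string is the headline and which is the author; the markup names each component explicitly.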
  • 148. Navigation for AI 148 Name object for cross system compatibility Move toward the center of project Create deliverables that bridge the logical world of IAs and the physical world of implementers Converse with other disciplines in language they understand and employ
  • 150. How Do UX Professionals Define UX? A consequence of a user’s internal state (predispositions, expectations, needs, motivations, mood, etc.), the characteristics of the designed system (complexity, purpose, usability, functionality, etc.) and the context (or the environment) within which the interaction occurs (organization/social setting, meaningfulness of activity, voluntariness of use, etc.) 150
  • 151. Key UX Data Points Conversions Unique Visitors Bounce rate Social Actions Number of Pages/visited Average time on page (exclude bounces) Exit rate 151
  • 152. Panda Algorithm Negative Signals High % of duplicate content Low amount of original content High amount of ads or gratuitous images Large quantity of boiler-plate text Over-optimized (too many links) High bounce rate Low visit duration Low CTR from Google search results No/Low quality in-links No/Low social mentions 152
  • 153. Google Optimal Page Layout 153
  • 154. Observed Self better than Quantified Self 154
  • 155. Use a Different Pattern Library Visitor search patterns: Use online tools to uncover customer intent Visitor behavior patterns: website analytics Visitor conversion patterns: content to address all stages of conversion funnel Tools • Search suggest scrapers • SEO|Content Marketing software • Webmaster and website analytics accounts 155
  • 157. Content & Context Algorithms Hypertext (HITS) Induced Topic Search Hilltop Topic Sensitive PageRank Orion (2008) Hummingbird 157
  • 158. HITS (1997) Hypertext Induced Topic Search HITS is a related algorithm for Authority determination HITS = PageRank + Topic Distillation Unlike PR, query dependent Somewhat recursive 158
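The hub/authority iteration behind HITS fits in a few lines; the link graph below is invented for illustration:

```python
# Tiny HITS sketch: iterate hub and authority scores on a toy link graph.
links = {  # page -> pages it links to
    "a": ["d"], "b": ["d", "e"], "c": ["d"], "d": [], "e": [],
}
pages = list(links)
hub = {p: 1.0 for p in pages}
auth = {p: 1.0 for p in pages}

for _ in range(20):
    # authority score: sum of hub scores of pages linking in
    auth = {p: sum(hub[q] for q in pages if p in links[q]) for p in pages}
    # hub score: sum of authority scores of pages linked to
    hub = {p: sum(auth[q] for q in links[p]) for p in pages}
    norm_a = sum(auth.values()) or 1.0
    norm_h = sum(hub.values()) or 1.0
    auth = {p: v / norm_a for p, v in auth.items()}
    hub = {p: v / norm_h for p, v in hub.items()}

print(max(auth, key=auth.get))   # "d" — linked to by the most good hubs
```

In the full algorithm this iteration runs only over pages matching the query, which is why HITS is query dependent while PageRank is not.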
  • 160. Hilltop Algorithm (2001) Topic segmentation algorithm = query dependent Introduces concept of non-affiliated “expert documents” to HITS Quality of links more important than quantity of links Segmentation of corpus into broad topics Selection of authority sources within these topic areas 160
  • 161. Latent Semantic Indexing Using a ~<search term> will initiate Google’s LSI and produce a list of results that contains your original term as well as documents that the search engine determines are relevant to your query. 161
  • 162. Topic-Sensitive PageRank (2002) Context sensitive relevance ranking based on a set of “vectors” and not just incoming links Pre-query calculation of factors based on subset of corpus Context of term use in document Context of term use in history of queries Context of term use by user submitting query Based on 16 top-level Open Directory categories 162
  • 163. Orion Algorithm (2008) Purchased by Google in April 2006 for A LOT of money Results include expanded text extracts from the websites Integrates results from related concepts into query results 163
  • 164. Hummingbird: Entity detection Comparison of search query to general population search behavior around query terms Revises query and submits both to search index • Confidence score • Relationship threshold • Adjacent context • Floating context • Results a consolidation of both queries 164
  • 165. AI Content Components Traditional IR (tf*idf) Link analysis for Authority Location on page Query type Content Qualities • Uniqueness • Authoritative • Freshness • Well Written 165
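The tf*idf component of "traditional IR" listed above can be sketched directly on a toy corpus (documents invented for illustration):

```python
import math

# Minimal tf*idf: term frequency in the document times log of
# inverse document frequency across the corpus.
docs = [
    "ai for information architects",
    "information architecture basics",
    "cooking pasta at home",
]

def tfidf(term, doc, corpus):
    tf = doc.split().count(term) / len(doc.split())
    df = sum(1 for d in corpus if term in d.split())
    idf = math.log(len(corpus) / df) if df else 0.0
    return tf * idf

# "information" appears in two docs, "cooking" in one: rarer terms score higher.
print(tfidf("information", docs[0], docs), tfidf("cooking", docs[2], docs))
```

This is why boilerplate words common to every page contribute nothing to relevance, while distinctive terms carry the document's subject signal.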
  • 166. Transform Keywords Into Intelligence Keywords are user queries Queries represent user information needs and satisfaction threshold Keywords become intelligence • Competitive: who is doing better • Visibility: how do the search engines see my content • Customer: how do targeted customers look for my products and services Tools • Search suggest scrapers • Google Trends • SEO Software (BrightEdge, SEMrush) 166
  • 167. Establish Context Context becomes what the system can measure • Environmental features • Interactions • Ubiquitous computing • Internet of things (IoT) • Digital Assistants Non-methodical approach that brings in containment (social through local) interactions • Adaptive/reactive interaction in situ • Context as perceived and used by actor 167
  • 168. Create and Curate Content Entities Rule Newspaper model Opening paragraphs most important for subject determination Relational content model 168
  • 169. User interest mapped to customer journey and content type 169 Sample keyword rollup by journey phase (avg category rank / monthly searches) • Consideration: 32 / 73,200 — Used 30 / 47,690; Sale 35 / 21,120; Product Information 29 / 2,990; Competitor 68 / 1,150; Quick Answer 32 / 110; Rental 6 / 50; Parts 56 / 40; Reviews 40; Competitor Rental 55 / 10 • Purchase: 39 / 2,980 — Sale 42 / 2,690; Used 38 / 260; Product Information 14 / 30 • Awareness: 12 / 2,820 — Used 17 / 1,640; Competitor 880; Product Information 9 / 250; Quick Answer 30; Sale 7 / 20 • Post Purchase: 30 / 30 — Product Information 30 / 20; Used 10 • Grand Total: 32 / 79,030
  • 170. Map Semantic Connections Semantic technology requires everything to be associated to understand user activity • Control layer • Mapping (semantic) layer • Device layer Semantic analysis model • Semantic layering • Semantic mapping (Boiko IAS 2018) • Semantic machine heterogeneity Association between user behavior patters (customer journey map) 170
  • 171. Give Users What They Want Compare the # of pages in each directory with the # of page views each directory receives 171
  • 172. Exercise: Develop AI Application for Crisis Help Line 1. Choose a base model (Learning; Prediction: create actions to respond to learning) and a sub-model (Data analysis; User identification; Behavior recognition; Service construction and provisioning) 2. Define objective function 3. Train system by adjusting parameters (reward) to maximize objective function 4. Test to evaluate accuracy and effectiveness of the model 172
  • 174. Data Driven Design Without a person at (or near) the helm who thoroughly understands the principles and elements of Design, a company eventually runs out of reasons for design decisions... When a company is filled with engineers, it turns to engineering to solve problems. Reduce each decision to a simple logic problem. Remove all subjectivity and just look at the data. Data in your favor?... And that data eventually becomes a crutch for every decision, paralyzing the company and preventing it from making any daring design decisions. 174
  • 175. Generative Design AKA Mutative Design, Parametric Design Designer defines rules for algorithm Algorithm generates variations using the predefined rules Algorithm filters the results based on design quality and requirements Designer chooses the best variants and polishes as needed System runs A|B tests for variant(s) Test results are used to choose most effective design 175
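The generate → filter → choose pipeline above can be sketched in a few lines. The rules, the quality score, and the cutoff are illustrative assumptions, not the behavior of any real generative-design tool.

```python
import itertools
import random

# Designer-defined rules: the allowed values for each design parameter.
RULES = {
    "columns": [1, 2, 3],
    "font_px": [14, 16, 18],
    "contrast": [3.0, 4.5, 7.0],  # WCAG-style contrast ratios
}

def generate_variants(rules):
    """Algorithm generates every variation the predefined rules allow."""
    keys = list(rules)
    for values in itertools.product(*(rules[k] for k in keys)):
        yield dict(zip(keys, values))

def quality(variant):
    """Filter step: a toy quality score favoring readable, two-column designs."""
    return variant["font_px"] + 2 * variant["contrast"] - abs(variant["columns"] - 2)

# Keep only variants above a quality cutoff for the designer to review and polish.
candidates = [v for v in generate_variants(RULES) if quality(v) >= 25]

# A/B step (sketched): pick two finalists for live testing.
random.seed(0)
finalists = random.sample(candidates, 2)
print(len(candidates), finalists)
```

The division of labor matches the slide: the algorithm enumerates and filters, but the rules and the definition of "quality" remain design decisions made by a person.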
  • 176. Privacy By Design Opt in / Opt out User control over sharing – notifications, time limits Command user attention for privacy decisions 176
  • 177. Not Privacy by Design 177
  • 178. Visual Complexity & Prototypicality 178
  • 180. AI Design According to Computer Science Components • Variables • Domains (environment) • Constraints (limits) Goal of AI design = satisfy the constraints Admissible heuristic: a cost estimate that never overestimates the true cost of reaching the solution state; if a solution state costs too much to reach, revise or reject 180
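The variables/domains/constraints framing becomes concrete in code. A minimal sketch, using map coloring as a stand-in design problem: backtracking search rejects any partial state that violates a constraint and revises its choices.

```python
# Constraint satisfaction in miniature: variables A, B, C; each has a
# domain of values; constraints limit which combinations are acceptable.
# The map-coloring problem here is an illustrative stand-in.

DOMAINS = {"A": ["red", "green"], "B": ["red", "green"], "C": ["red", "green", "blue"]}
ADJACENT = [("A", "B"), ("B", "C"), ("A", "C")]  # constraint: neighbors must differ

def consistent(assignment):
    """Check every constraint whose variables are already assigned."""
    return all(
        assignment[x] != assignment[y]
        for x, y in ADJACENT
        if x in assignment and y in assignment
    )

def solve(assignment, variables):
    """Backtracking search: the goal is an assignment satisfying all constraints."""
    if not variables:
        return assignment
    var, rest = variables[0], variables[1:]
    for value in DOMAINS[var]:
        trial = {**assignment, var: value}
        if consistent(trial):  # reject inconsistent partial states early
            result = solve(trial, rest)
            if result:
                return result
    return None  # no value works: revise the domains or reject the problem

print(solve({}, ["A", "B", "C"]))
```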
  • 181. Design Thinking for Data Science 1 Reach out to the development staff Embrace design thinking Transform “my idea” into “our idea” with early stage collaboration 181
  • 182. Design Thinking for Data Science 2 Customer Empathy Stage • Understand the problem being solved • Define the solution • Map the environment (customer journey) • Define the characteristics of a good solution (heuristics) Outputs • Personas (use cases) • Problem statements • Environment description (include systems and processes) • Benchmark success (quantitative, qualitative) 182
  • 183. Design Thinking for Data Science 3 Go Broad, Go Deep Stage Brainstorm solution ideas across silos Diversify contributors Post all artifacts and review as a group Organize ideas into themes Include “leap of faith” assumptions Take the best and formulate a solution hypothesis 183
  • 184. Design Thinking for Data Science 4 Rapid experimentation with Customers Paper prototyping, sketches, storyboard Build stable testing methodology into plan Start small (project | testing) to achieve collective wins 184
  • 185. Smart Tools & Platforms Semantic image segmentation Font recognition Intelligent audience segmentation 185
  • 186. Data Protection by Design Design strategy for accountability • Enforceable policy • Demonstrated compliance Detect and address bias Components • High-level design goals • Privacy-enhancing technology (user controls) • Sanitization of data Principle of Accountability Discrimination-Aware Data Mining (DADM) 186
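The "detect and address bias" step can be sketched as a simple selection-rate comparison in the spirit of discrimination-aware data mining. The records and the four-fifths (80%) rule-of-thumb threshold are illustrative assumptions, not a legal or compliance test.

```python
# Hypothetical bias check: compare how often a model selects members
# of two groups. All records below are invented for illustration.

records = [
    {"group": "a", "selected": True},
    {"group": "a", "selected": True},
    {"group": "a", "selected": False},
    {"group": "b", "selected": True},
    {"group": "b", "selected": False},
    {"group": "b", "selected": False},
]

def selection_rate(rows, group):
    """Fraction of a group's members the system selected."""
    members = [r for r in rows if r["group"] == group]
    return sum(r["selected"] for r in members) / len(members)

rate_a = selection_rate(records, "a")  # 2/3
rate_b = selection_rate(records, "b")  # 1/3
disparate_impact = min(rate_a, rate_b) / max(rate_a, rate_b)

# Flag for human review when one group is selected far less often (< 0.8 ratio).
print(round(disparate_impact, 2), disparate_impact < 0.8)
```

A check like this belongs in the "demonstrated compliance" loop: it does not fix bias, but it makes disparity measurable so the accountability policy has something to enforce.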
  • 187. Privacy Design Recommendations Different UI for different tasks Opt in, not opt out Build in alerts if the system deviates from the norm Clear explanation of system decision-making methods and reasoning workflow (xAI) Government-enforced standards of data collection and control 187
  • 189. Algorithm-Based Design 1 Designer as art director, algorithm as apprentice Determine “well designed” site for learning model Create mood board for algorithm to deconstruct Use algorithm for simple tasks • Color match up • Image assembly (movie poster app) • Styling videos • Extract usage patterns from data sets 189
  • 190. Algorithm-Based Design 2 Designer and developer define the logic to consider content, context and user data • AEM (behavior-targeted UI) • BrightEdge DataMind • Vox Media Homepage Generator 190
  • 191. Machine Learning Design Process Define learning problem • Inputs • Outputs • Types of training data needed Generate good data • Complete • Accurate • Consistent • Timely Sketch out user and data flow (decision trees) Test assumptions against prototype Start with simple mechanism and move to complex 191
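The "generate good data" checklist above can be sketched as a screening pass over candidate training rows before they ever reach a model. The field names, labels, and freshness threshold are assumptions for illustration.

```python
# Hypothetical data-quality gate for search-behavior training data:
# keep only rows that are complete, consistent, and timely.

from datetime import date

REQUIRED = {"query", "clicked_url", "label", "captured_on"}

def is_good(row, today=date(2019, 3, 1), max_age_days=90):
    # Complete: every required field is present and non-empty.
    complete = REQUIRED <= row.keys() and all(row[f] is not None for f in REQUIRED)
    if not complete:
        return False
    # Consistent: the label comes from the agreed vocabulary.
    consistent = row["label"] in {"relevant", "not_relevant"}
    # Timely: user behavior older than the window no longer reflects intent.
    timely = (today - row["captured_on"]).days <= max_age_days
    return consistent and timely

rows = [
    {"query": "ai", "clicked_url": "/ai", "label": "relevant", "captured_on": date(2019, 2, 1)},
    {"query": "ml", "clicked_url": None, "label": "relevant", "captured_on": date(2019, 2, 1)},   # incomplete
    {"query": "ux", "clicked_url": "/ux", "label": "maybe", "captured_on": date(2019, 2, 1)},     # inconsistent
    {"query": "ia", "clicked_url": "/ia", "label": "relevant", "captured_on": date(2018, 1, 1)},  # stale
]
training_set = [r for r in rows if is_good(r)]
print(len(training_set))
```

Starting with a simple, inspectable gate like this fits the slide's advice: prove the simple mechanism first, then move to complexity.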
  • 192. Thank You Marianne Sweeny Principal Daedalus Information Systems sweeny48@uw.edu @msweeny 192 Embrace, engage, define, direct
  • 193. APPENDIX A friendly drop to help you after… 193
  • 194. Suggested Reading • Algorithms to Live By; Brian Christian, Tom Griffiths • Superintelligence: Paths, Dangers, Strategies; Nick Bostrom • The Tides of Mind: Uncovering the Spectrum of Consciousness; David Gelernter • The Undoing Project; Michael Lewis 194
  • 195. Twitter Resources 195 Rob Wortham @RobWortham Frank Pasquale @FrankPasquale Luke Robert Mason @LukeRobertMason Garry Kasparov @Kasparov63 John C. Havens @johnchavens Joanna Bryson @j2breve, @j2blather Carol Smith @carologic Sentiment/Emotion/AI @SentimentSymp Elizabeth Churchill @xeeliz Adam Coates @adampaulcoates Richard Socher @RichardSocher Yann LeCun @ylecun Kirk Borne @KirkDBorne Right Relevance @rightrelevance Machine Learning @ML_toparticles Andrew Ng @AndrewYNg Atsushi Hasegawa @ahaseg Eric Horvitz @erichorvitz Sander Dieleman @sedielem AI Now Institute @AINowInstitute Oren Etzioni @etzioni Jeff Dalton @JeffD Peter Trainor @petetrainor Rob McCargow @robmccargow Kevin Slavin @slavin_fpo Giles Colborne @gilescolborne Lev Manovich @manovich Jana Eggers @jeggers Dawn Anderson @dawnieando Colin Eagan @ColinEags Data Science Central @DataScienceCtrl Brenda Laurel @blaurel Ian Soboroff @ian_soboroff Phillip Hunter @designoutloud Paul Dourish @dourish Jason Alderman @justsomeguy Dorian Taylor @doriantaylor Tim Caynes @timcaynes