Natasha Jaques - Learning via Social Awareness - Creative AI meetup
Sep. 21, 2018 · 625 views
This talk by Natasha Jaques from MIT Media Lab on "Learning via Social Awareness: Improving a deep generative sketching model with facial feedback" was presented on 10th September 2018 at IDEA London as part of the Creative AI meetup.
Learning via Social Awareness
Improving a deep generative sketching model with facial feedback
Natasha Jaques, Jennifer McCleary, Jesse Engel, David Ha, Fred Bertsch, Douglas Eck, Rosalind Picard
Humans learn through implicit social feedback
● Emotion recognition important to cognitive development (Kujawa et al., 2014)
● Social learning theory (Bandura & Walters, 1977)
● Social learning -> cultural evolution (van Schaik & Burkart, 2011)
Making an AI agent socially aware will make it smarter
“Alexa, what’s the right way to Walmart?”
“The right way to spell Walmart is W-A-L-M-A-R-T.”
Ugh... Better not do that again...
Project idea
1. Generate samples from a deep learning model and show them to users
2. Detect the user’s facial expression response
3. Improve the model using social feedback
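The three-step loop above can be sketched as a minimal closed loop. Everything here is a toy stand-in: `sample_sketch`, `facial_feedback`, and `update` are hypothetical placeholders for the real Sketch RNN sampler, facial-expression detector, and model update used in the project.

```python
import random

def sample_sketch(model_params):
    # Stand-in for sampling from a generative sketching model
    # (the talk uses Sketch RNN; here a 1-D Gaussian "sketch").
    return random.gauss(model_params["mean"], 1.0)

def facial_feedback(sketch):
    # Stand-in for a facial-expression detector that returns a
    # scalar reward (e.g., smile score minus concentration score).
    return 1.0 if sketch > 0 else -1.0

def update(model_params, sketch, reward, lr=0.1):
    # Nudge the model toward samples that earned positive feedback.
    model_params["mean"] += lr * reward * sketch
    return model_params

params = {"mean": 0.0}
for _ in range(100):
    s = sample_sketch(params)
    r = facial_feedback(s)
    params = update(params, s, r)
```

After enough iterations the toy model drifts toward the region of sample space that the "viewer" rewards.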
What would generate a facial expression response?
● Music / melodies
● Style transfer
● Sketches
● GAN images
● Text (dialog / poems / stories)
● YouTube recommendations
● Memes
● Jokes
Magenta models exist for several of these
UX research - the good news
Correlations between facial expressions and perceived sketch quality (*must normalize within each user first*):
● Average contentment: r = .58, p < .001
● Average amusement: r = .54, p < .002
● Average concentration: r = -.58, p < .001
● Max concentration: r = -.40, p < .05
[Scatter plots: facial expression score vs. perceived quality]
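The "normalize within each user first" caveat can be illustrated with a simple per-user z-scoring helper (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def normalize_within_user(scores, user_ids):
    """Z-score each user's expression scores against their own
    baseline, so a habitual 'resting concentration face' does not
    dominate the across-user correlation."""
    scores = np.asarray(scores, dtype=float)
    user_ids = np.asarray(user_ids)
    out = np.empty_like(scores)
    for uid in np.unique(user_ids):
        mask = user_ids == uid
        mu, sd = scores[mask].mean(), scores[mask].std()
        out[mask] = (scores[mask] - mu) / (sd if sd > 0 else 1.0)
    return out

# Two users with very different raw scales end up with
# identical normalized profiles.
z = normalize_within_user([1, 2, 3, 10, 20, 30], [0, 0, 0, 1, 1, 1])
```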
The bad news
● High variance between users
○ Resting “concentration” face
● Extremely noisy
● Some people do not emote
The bad news
● Users don’t just smile at good sketches
● Significant correlations between the number of sketches viewed and emotions
○ Sadness goes up: r(751) = .248, p < .001
○ Concentration goes down: r(751) = -.158, p < .001
Latent constraints model
● Step 1: Collect data — sample z ~ N(0, I), decode with the Sketch RNN VAE decoder, and collect facial feedback on the resulting sketches
● Step 2: Train a discriminator D on the latent vectors z, labeled positive (+) or negative (-) by the feedback
● Step 3: Train a generator G that maps z ~ N(0, I) to a shifted z' that D scores as positive
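A toy version of the three steps, assuming a logistic-regression discriminator over latent vectors and gradient ascent on z in place of a learned generator G (labels here are fabricated from one latent direction, standing in for facial feedback; the real model uses Sketch RNN's latent space):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy latent dimensionality

# Step 1: "collected data" -- latent vectors labeled good (1) or
# bad (0); real labels would come from facial feedback on decodes.
z = rng.normal(size=(500, d))
labels = (z[:, 0] > 0).astype(float)

# Step 2: train a logistic-regression discriminator D(z).
w, b = np.zeros(d), 0.0
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(z @ w + b)))
    w -= 0.5 * (z.T @ (p - labels)) / len(z)
    b -= 0.5 * (p - labels).mean()

# Step 3: "generator" step -- shift a sampled z along D's gradient
# so the decoder (not shown) receives a preferred latent z'.
def constrain(z0, steps=50, lr=0.5):
    z0 = z0.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(z0 @ w + b)))
        z0 += lr * (1 - p) * w  # gradient of log D(z), ascending
    return z0

z_prior = rng.normal(size=d)
z_prime = constrain(z_prior)
```

Decoding z' instead of z biases samples toward the region the discriminator, and hence the feedback, prefers.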
Latent constraints results
[Sample sketches: Sketch RNN prior vs. latent constraints (N = 63; N = 68)]
Latent constraints evaluation
● Sampled 100s of sketches from the latent constraints models and the prior
● Randomly interspersed
● Double blind
● In “the wild”
Conclusion
First paper to show that a deep learning model can be improved with implicit social reactions:
● Demonstrated that a deep generative model producing creative content can be improved with facial expressions
● Showed a link between human facial expressions and their preferences
Training with Reinforcement Learning (RL)
● Convert Sketch RNN to a discrete version to enable Q-learning
● Model the reward over time; train a supervised model to approximate it
● Deep RL from human faces
[Figure: ground truth vs. discretized strokes (100, 200, 300, 400 bins); reward over time]
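The discretization step can be sketched as quantizing continuous pen offsets into integer tokens so a standard discrete-action method like Q-learning applies. The bin counts and value range below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def discretize(offsets, n_bins=100, lo=-1.0, hi=1.0):
    """Quantize continuous pen offsets (as in Sketch RNN strokes)
    into integer tokens in [0, n_bins)."""
    offsets = np.clip(offsets, lo, hi)
    bins = np.linspace(lo, hi, n_bins + 1)
    return np.clip(np.digitize(offsets, bins) - 1, 0, n_bins - 1)

def undiscretize(tokens, n_bins=100, lo=-1.0, hi=1.0):
    """Map tokens back to bin-center offsets for decoding."""
    width = (hi - lo) / n_bins
    return lo + (tokens + 0.5) * width

x = np.array([-0.9, 0.0, 0.73])
recon = undiscretize(discretize(x))
```

Round-trip error is bounded by half a bin width, which is the trade-off the 100/200/300/400-bin comparison in the slide explores.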