  • Slide note: Here we are trying to distinguish between boredom, confusion, delight, frustration, and neutral.
  • Transcript

    • 1. Learning with Conversational Agents. Art Graesser, Psychology, Computer Science, & the Institute for Intelligent Systems
    • 2. Learning, Language, and Communication Technologies
      - Tutoring
      - Language & discourse
      - Robotic androids
    • 3. Overview
      - Why conversational agents?
      - AutoTutor
        - Learning
        - Motivation
        - Emotions
      - Other learning environments with agents
    • 4. Exploring a Sea of Animated Conversational Agents (agents pictured: SI Agent, BEAT, Leonardo, PKD Android, Casey, iSTART, TLTS, Spark, MRE, AutoTutor, Adele, STEVE, Carmen, iMAP, Laura)
    • 5. Why conversational agents?
      - Social presence
      - The universal interface
      - Simulate and model social and conversational interactions
      - Precise control over the language and discourse (unlike humans)
      - Understand psychological and pedagogical mechanisms by building them
    • 6. Agent Configurations
      - Dialogs
        - Tutor agent
        - Teachable agent
      - Trialogs
        - Vicarious learning from an agent dyad
        - Tutor agent, student agent, and human
      - Quadralogs
        - Human interacting with a student agent while observing an agent dyad
    • 7. What is AutoTutor?
      - Learn by conversation in natural language
      - Team: Art Graesser (PI), Zhiqiang Cai, Don Franceschetti, Stan Franklin, Barry Gholson, Xiangen Hu, David Lin, Max Louwerse, Andrew Olney, Natalie Person, Vasile Rus
      - D’Mello, S.K., Picard, R., & Graesser, A.C. (2007). Toward an affect-sensitive AutoTutor. IEEE Intelligent Systems, 22, 53-61.
      - Graesser, A.C., Jackson, G.T., & McDaniel, B. (2007). AutoTutor holds conversations with learners that are responsive to their cognitive and emotional states. Educational Technology, 47, 19-22.
      - VanLehn, K., Graesser, A.C., Jackson, G.T., Jordan, P., Olney, A., & Rose, C.P. (2007). When are tutorial dialogues more effective than reading? Cognitive Science, 31, 3-62.
    • 8. AutoTutor
      - Talking head
      - Gestures
      - Synthesized speech
      - Presentation of the question/problem
      - Dialog history with tutor turns and student turns
      - Student input (answers, comments, questions)
    • 9. Expectations and misconceptions in the Sun & Earth problem
      - Expectations:
        - The sun exerts a gravitational force on the earth.
        - The earth exerts a gravitational force on the sun.
        - The two forces are a third-law pair.
        - The magnitudes of the two forces are the same.
      - Misconceptions:
        - Only the larger object exerts a force.
        - The force of the earth on the sun is less than that of the sun on the earth.
    • 10. When a car without headrests on the seats is struck from behind, the passengers often suffer neck injuries. Why do passengers get neck injuries in this situation? [Screenshot with panels labeled Question, Head Parameter Controls, Describe what happens, and Simulation]
    • 11. What does AutoTutor do?
      - Asks questions and presents problems
      - Evaluates the meaning of the learner's answers
      - Gives feedback on answers
      - Displays facial expressions with emotions
      - Nods and gestures
      - Hints
      - Prompts for specific information
      - Adds information that is missed
      - Corrects bugs and misconceptions
      - Answers student questions
      - Guides the learner in interactive 3-D simulations
    • 12.  
    • 13. Expectation- and Misconception-Tailored Dialog: pervasive in AutoTutor & human tutors (see the sketch after this slide)
      - Tutor asks a question that requires explanatory reasoning
      - Student answers with fragments of information, distributed over multiple turns
      - Tutor analyzes the fragments of the explanation
        - Compares them to a list of expected good idea units
        - Compares them to a list of expected errors and misconceptions
      - Tutor posts goals & performs dialog acts to improve the explanation
        - Fills in missing expected good idea units (one at a time)
        - Corrects expected errors & misconceptions (immediately)
      - Tutor handles periodic sub-dialogues
        - Student questions
        - Student meta-communicative acts (e.g., "What did you say?")
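
      As a rough illustration of the matching step in slide 13, here is a minimal sketch only. AutoTutor's actual evaluator is based on latent semantic analysis and regular expressions; this stand-in uses a simple bag-of-words cosine similarity, and the class name, threshold, and example data are hypothetical.

        # Toy sketch of expectation- and misconception-tailored dialog bookkeeping.
        # AutoTutor uses LSA-based semantic matching; this stand-in uses bag-of-words
        # cosine similarity, and the thresholds/names are hypothetical.
        from collections import Counter
        import math

        def cosine_similarity(text_a: str, text_b: str) -> float:
            """Bag-of-words cosine similarity between two strings."""
            a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
            dot = sum(a[w] * b[w] for w in set(a) & set(b))
            norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
            return dot / norm if norm else 0.0

        class ExpectationTracker:
            """Tracks how well the accumulated student answer covers each expectation
            and whether an anticipated misconception has been expressed."""

            def __init__(self, expectations, misconceptions, threshold=0.6):
                self.expectations = expectations
                self.misconceptions = misconceptions
                self.threshold = threshold      # coverage threshold (hypothetical value)
                self.student_text = ""          # fragments accumulate over turns

            def add_student_turn(self, turn: str):
                self.student_text += " " + turn

            def uncovered_expectations(self):
                return [e for e in self.expectations
                        if cosine_similarity(self.student_text, e) < self.threshold]

            def triggered_misconceptions(self, turn: str):
                return [m for m in self.misconceptions
                        if cosine_similarity(turn, m) >= self.threshold]

        # Example with the Sun & Earth problem from slide 9.
        tracker = ExpectationTracker(
            expectations=["The sun exerts a gravitational force on the earth.",
                          "The earth exerts a gravitational force on the sun.",
                          "The magnitudes of the two forces are the same."],
            misconceptions=["Only the larger object exerts a force."],
        )
        tracker.add_student_turn("the sun pulls on the earth with gravity")
        print(tracker.uncovered_expectations())   # expectations still to be covered
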
    • 14. 5-Step Tutoring Frame (Graesser & Person, 1994)
      1. Tutor asks a difficult question
      2. Student gives an initial answer
      3. Tutor gives short feedback (+, -, neutral)
      4. Tutor and student collaboratively improve the answer via a multi-turn dialogue
      5. Tutor confirms student understanding (often by asking students if they understand)
    • 15. Dialog Moves During Steps 2-4
      - Positive immediate feedback: “Yeah” “Right!”
      - Neutral immediate feedback: “Okay” “Uh huh”
      - Negative immediate feedback: “No” “Not quite”
      - Pump for more information: “What else?”
      - Hint: “What about the earth’s gravity?”
      - Prompt for specific information: “The earth exerts a gravitational force on what?”
      - Assert: “The earth exerts a gravitational force on the sun.”
      - Correct: “The smaller object also exerts a force.”
      - Repeat: “So, once again, …”
      - Summarize: “So to recap, …”
      - Answer student question
    • 16. Managing One AutoTutor Turn (a toy sketch follows this slide)
      - Short feedback on the student's previous turn
      - Advance the dialog by one or more dialog moves that are connected by discourse markers
      - End the turn with a signal that transfers the floor to the student
        - Question
        - Prompting hand gesture
        - Head/gaze signal
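
      The following is a minimal sketch of how the moves in slide 15 could be assembled into a single turn in the slide 16 pattern (feedback, dialog move, floor transfer). The escalation policy, thresholds, and wording are hypothetical stand-ins; AutoTutor's actual move selection is driven by its curriculum script and production rules.

        # Toy sketch of assembling one tutor turn: short feedback, then a dialog move
        # chosen from pump -> hint -> prompt -> assert as coverage stays low, then a
        # floor-transfer cue. Policy and wording are hypothetical, not AutoTutor's.

        def choose_feedback(answer_quality: float) -> str:
            if answer_quality > 0.7:
                return "Right!"
            if answer_quality > 0.4:
                return "Okay."
            return "Not quite."

        def choose_dialog_move(coverage: float, attempts: int) -> str:
            """Escalate specificity as the student keeps missing the expectation."""
            if attempts == 0:
                return "pump"      # "What else?"
            if coverage < 0.3:
                return "assert" if attempts >= 3 else "hint"
            return "prompt"        # ask for the one missing piece of information

        def build_turn(answer_quality: float, coverage: float, attempts: int,
                       move_text: dict) -> str:
            feedback = choose_feedback(answer_quality)
            body = move_text[choose_dialog_move(coverage, attempts)]
            floor_transfer = "" if body.endswith("?") else " What do you think?"
            return f"{feedback} {body}{floor_transfer}"

        # Hypothetical move wordings for the Sun & Earth expectation.
        moves = {
            "pump": "What else can you say?",
            "hint": "What about the earth's gravity?",
            "prompt": "The earth exerts a gravitational force on what?",
            "assert": "The earth exerts a gravitational force on the sun.",
        }
        print(build_turn(answer_quality=0.5, coverage=0.2, attempts=1, move_text=moves))
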
    • 17. What about Learning Gains?
    • 18. Learning Gains of Tutors (effect sizes)
      - .42: Unskilled human tutors (Cohen, Kulik, & Kulik, 1982)
      - .80: AutoTutor, 14 experiments (Graesser and colleagues)
      - 1.00: Intelligent tutoring systems, e.g., PACT (Anderson, Corbett, Aleven, Koedinger), Andes and Atlas (VanLehn), Diagnoser (Hunt, Minstrell), Sherlock (Lesgold)
      - (2.00??): Skilled human tutors (Bloom, 1987)
    • 19. Learning Conceptual Physics (VanLehn, Graesser et al., 2007)
      - Four conditions:
        - Read Nothing
        - Read Textbook
        - AutoTutor
        - Human Tutor
    • 20. How does AutoTutor compare to comparison conditions on tests of deep comprehension?
      - 0.80 sigma compared to pretest, doing nothing, and reading the textbook
      - 0.22 compared to reading relevant textbook segments
      - 0.07 compared to reading a succinct script
      - 0.13 compared to AutoTutor delivering speech acts in print
      - 0.08 compared to humans in computer-mediated conversation
      - -0.20 compared to AutoTutor enhanced with interactive 3D simulation
      - ZONE OF PROXIMAL DEVELOPMENT
    • 21. "Force equals mass times acceleration" [Figure: item-level response patterns across Pretest Essay, Pretest MC, Training, Posttest Essay, and Posttest MC, illustrating All-or-none Learning, Variable Learning, No Learning, and Refresher Learning]
    • 22. Increasing motivation during learning
      - Rude tutor
      - Feedback
    • 23. Example Dialog
      - Tutor: Okay, let's begin. Suppose a runner is running in a straight line at constant speed, and the runner throws a pumpkin straight up. Where will the pumpkin land? Explain why.
      - Student: The pumpkin will land behind the runner.
      - Tutor: Well, what can you say about the pumpkin's vertical velocity in relation to gravity?
      - Student: They are equal.
      - Tutor: Alright, I am sure you can get this. The direction in which a force is being applied to the pumpkin is what?
      - Student: Down.
      - Tutor: Excellent! It is vertical. Let's move on. After release the only force on the pumpkin is gravity, so the horizontal velocity of the pumpkin is not affected. What can you say about the pumpkin's motion after its release?
      - Student: It is constantly moving forward.
    • 24. Example Dialog with RudeTutor
      - Tutor: Okay, let's begin. Suppose a runner is running in a straight line at constant speed, and the runner throws a pumpkin straight up. Where will the pumpkin land? Explain why.
      - Student: The pumpkin will land behind the runner.
      - Tutor: What a pathetically incorrect answer! So what can you say about the pumpkin's vertical velocity in relation to gravity?
      - Student: They are equal.
      - Tutor: Ouch, you must have been daydreaming in your high school football physics course. I doubt you can get this next question, but let me give it a try. The direction in which a force is being applied to the pumpkin is what?
      - Student: Down.
      - Tutor: Let me hold my nose and grant you this one. It’s vertical. Let's move on…
    • 25. Content versus Progress Feedback
      - Content feedback: qualitative information about the subject matter
        - Local: highlighting important words
        - Global: summary of the ideal answer
      - Progress feedback: quantitative assessment of the student's advancement through the material being covered (how far the student has come and how much farther they have to go)
        - Local: progress on individual expectations
        - Global: points scored in the session
    • 26. Screen shot with additions
    • 27. Feedback Experiment (Tanner Jackson dissertation)
      - Pretest: 26 conceptual physics questions (counterbalanced)
      - Interact with a version of AutoTutor:
        - All Feedback
        - Progress Only
        - Content Only
        - No Feedback
      - Posttest: 26 conceptual physics questions (counterbalanced)
    • 28. Adjusted Posttest Performance
    • 29. "The Tutor Held my Interest," rated from 1 = Strongly Disagree to 6 = Strongly Agree
    • 30. Conclusions about the Feedback Study
      - Content matters: content feedback improved learning gains.
      - Progress feedback had little to no effect on either learning or motivation.
      - Negative relationship between liking and learning: content feedback increases learning gains but decreases user satisfaction.
    • 31. Emotions during Learning
    • 32. Emotions and Complex Learning
      - Emotions (affective states) are inextricably bound to deep learning.
      - Negative emotions: confusion → irritation → frustration → anger → rage
      - Positive emotions: engagement → flow → delight → excitement → eureka
    • 33. Theories of Emotions Relevant to Complex Learning
      - Goal theory, with achievements and interruptions (Dweck, 2002; Mandler, 1976; Stein & Hernandez, 2007)
      - Cognitive disequilibrium model and confusion (Graesser & Olde, 2003; Piaget, 1952)
      - Yerkes-Dodson law on performance, arousal, and task complexity
      - Flow theory and consciousness (Csikszentmihalyi, 1990)
      - Appraisal theory (Frijda, 1986; Laird, 2007; Scherer, 2006)
      - Valence-intensity model, core affective framework, & folklore (Barrett, 2006; Russell, 2003)
      - Academic risk theory (Meyer & Turner, 2006)
      - Lepper's analysis that promotes challenge, control, confidence, and curiosity
      - Kort, Reilly, & Picard model (2001)
      - Multiple zones of affect and cognition
    • 34. Emotions During Learning: Boredom (23%), Surprise (4%), Frustration (16%), Flow (28%), Delight (4%), Confusion (25%)
    • 35.  
    • 36. Methods of Identifying Emotions
      - Expert observation
      - Emote-aloud protocols
      - After-action judgments by:
        - Self (learner)
        - Peer
        - Expert judges
        - Teachers
      - Sensing channels:
        - Dialogue history
        - Speech parameters
        - Facial actions
        - Posture
    • 37. Inter-judge Reliability of Trained Judges
      - Our research: kappa = .49 (confusion, frustration, ...)
      - Litman & Forbes-Riley (2004): kappa = .40 (positive, neutral, negative)
      - Ang, Dhillon, Krupski, Shriberg, & Stolcke (2002): kappa = .47 (frustration, annoyance)
      - Shafran, Riley, & Mohri (2003): kappa = .32-.42
      - Narayanan: kappa = .45-.48 (negative emotions)
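
      For readers unfamiliar with the kappa statistic in the table above, here is a minimal sketch of Cohen's kappa for two judges. The example labels are invented for illustration; they are not data from any of the studies listed.

        # Minimal sketch of Cohen's kappa: chance-corrected agreement between two
        # judges labeling the same episodes. Example labels are made up.
        from collections import Counter

        def cohens_kappa(labels_a, labels_b):
            """Kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
            assert len(labels_a) == len(labels_b)
            n = len(labels_a)
            observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
            counts_a, counts_b = Counter(labels_a), Counter(labels_b)
            expected = sum((counts_a[c] / n) * (counts_b[c] / n)
                           for c in set(counts_a) | set(counts_b))
            return (observed - expected) / (1 - expected) if expected < 1 else 1.0

        judge1 = ["confusion", "flow", "boredom", "flow", "frustration", "confusion"]
        judge2 = ["confusion", "flow", "flow", "flow", "frustration", "boredom"]
        print(round(cohens_kappa(judge1, judge2), 2))
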
    • 38. Emotion Sensors and Channels: dialogue, face, posture, and speech
    • 39. Classification Accuracy [Overall]
    • 40. Bayesian Diagnosticity
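
      Slides 39 and 40 are figures in the original deck. As background on the metric named in slide 40, Bayesian diagnosticity is commonly defined as a likelihood ratio that measures how strongly an observed cue e (say, a facial action) favors one affective-state hypothesis over another; the statement below assumes that standard definition and does not reproduce the slide's own formula.

        % Assumed standard definition of Bayesian diagnosticity of a cue e for
        % competing hypotheses H_1 and H_2, and its log form.
        \[
          \mathrm{diagnosticity}(e) = \frac{P(e \mid H_1)}{P(e \mid H_2)},
          \qquad
          \log\text{-}\mathrm{diagnosticity}(e) = \left|\,\log_{10}\frac{P(e \mid H_1)}{P(e \mid H_2)}\,\right|
        \]
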
    • 41. Conclusions about Automated Emotion Classification
      - Dialogue, face, and posture channels can classify emotions about as well as trained judges.
      - Fusion of channels typically involves redundancy, but superadditivity in the case of boredom.
      - Different emotions are expressed in different channels.
    • 42. How might AutoTutor be responsive to the learner's affect states? (a toy rule sketch follows this slide)
      - If the learner is frustrated, then AutoTutor gives a hint.
      - If bored, then some engaging razzle-dazzle.
      - If in flow/absorbed, then lay low.
      - If confused, then intelligently manage optimal confusion.
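
      A toy sketch of the affect-sensitive policy on slide 42. The mapping mirrors the slide's if-then rules, but the function name, labels, and strategy strings are hypothetical; the affect-sensitive AutoTutor described in D'Mello, Picard, & Graesser (2007) is considerably richer than this.

        # Toy mapping from a detected affective state to a coarse tutor strategy.
        # Names and strings are hypothetical stand-ins for AutoTutor's actual rules.

        def respond_to_affect(state: str) -> str:
            """Pick a coarse tutor strategy from the detected affective state."""
            policy = {
                "frustration": "give_hint",        # lower the barrier with a hint
                "boredom": "inject_engagement",    # razzle-dazzle: vary the activity
                "flow": "lay_low",                 # do not interrupt productive absorption
                "confusion": "manage_confusion",   # keep confusion in a productive range
            }
            return policy.get(state, "continue_normal_dialog")

        for detected in ["frustration", "boredom", "flow", "confusion", "delight"]:
            print(detected, "->", respond_to_affect(detected))
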
    • 43. Emotion Sequencing (D’Mello, Taylor, & Graesser) [Figure: transitions among frustration, confusion, boredom, and flow]
    • 44. Beyond AutoTutor
    • 45. Memphis Agent Environments: AutoTutor, iSTART, MetaTutor, HURA Advisor, ARIES, and iDRIVE (screenshots with tutor and student agents)
    • 46. ARIES Agent Trialogs
    • 47.  
    • 48. ARIES: Acquiring Research Investigative and Evaluative Skills
      - Scientific reasoning
      - Electronic textbook with periodic multiple-choice questions
      - Diagnostic questions
      - Critique experiments and newspaper clips
      - Game context with aliens
      - AutoTutor trialogs with tutor and student agents
    • 49.  
    • 50.  
    • 51.  
    • 52. Our Next Challenge: How do we design dialogs, trialogs, and quadralogs of agents in a science of learning that is sensitive to social, motivational, affective, and cognitive (SMAC) mechanisms?
    • 53. ?
