Motivating and Prioritizing Ongoing Student Feedback using Collaborative Filtering

  1. M-CAFE V1.0: Motivating and Prioritizing Ongoing Student Feedback using Collaborative Filtering. Mo Zhou, Alison Cliff, Sanjay Krishnan, Brandie Nonnecke, Camille Crittenden, Kanji Uchino, Ken Goldberg. Visit: M-CAFE.ORG
  2. Motivation
  3. Motivation (cont.)
  4. Existing Discussion Forums
  5. Our Goal! Weekly Check-in, Student Confidentiality, Collaborative Filtering, Timely Feedback.
  6. Related Work. Course Evaluation: Braga, M. et al. 2014; Cohen, P. A. 1981; Greenwald, A. G. and Gillmore, M. G. 1997; Marsh, H. W. and Roche, L. A. 1997; Stark, P. B. and Freishtat, R. 2014. Perceived Learning & Education: Eom, S. B., Wen, H. J., and Ashill, N. 2006; Richardson, J. C. and Swan, K. 2003; Swan, K. 2001.
  7. Demographics Questions. For MOOCs: Country, Gender, Age, Years of training, Reason for taking the course. For IEOR 170: Major, Year, Number of other related courses taken, Interest in the subject, Reason for taking the course.
  8. Quantitative Analysis Topics (QAT). (1) How would you rate the course so far in terms of technical difficulty? (2) How would you rate the course so far in terms of usefulness to your career? (3) How would you rate your enthusiasm so far for this course? (4) How would you rate your performance so far in this course? (5) How would you rate the effectiveness of course assignments so far in helping you develop your skills?
  9. NLP Limitations in M-CAFE. Selecting a set of insightful, novel, and relevant ideas is hard: suggestions are often short and subject-specific.
  10. Related Work. Collaborative Filtering: Goldberg, K. et al. 2001; Konstan, J. A. et al. 1997; Pearson, K. 1901; Sarwar, B. et al. 2001; Yang, X. et al. 2014. Natural Language Processing (NLP): Adamopoulos, P. 2013; Pang, B. and Lee, L. 2008; Reich, J. et al. 2014.
  11. Qualitative Feedback with Collaborative Filtering (CF)
  12. Interface. Figure 1: User interface of M-CAFE.
  13. Interface (cont.). Figure 1 (cont.): User interface of M-CAFE.
  14. Participation. CS 169.2x: 6 weeks in Jun-Jul 2014. Student Count: 348; QAT Rating Count: 741; Idea Count: 167; CF Rating Count: 4000.
  15. Participation (cont.). IEOR 170: 16 weeks in Jan-May 2015. Student Count: 96; QAT Rating Count: 424; Idea Count: 270; CF Rating Count: 2483.
  16. Quantitative Analysis Topics. Graph visualization of QAT rating changes over time. Figure 2: Course difficulty rating over the first 10 weeks for IEOR 170.
  17. Relationships between QAT rating changes.
  18. Qualitative Feedback with Collaborative Filtering (CF). Highlight the most valuable ideas for instructors using a ranking metric.
  19. Wilson Score. Given a set of ratings for each idea, how should we rank them? We take the mean grade g, compute the 95% confidence interval of g using the standard error, g +/- 1.96*SE(g), and rank the ideas by the lower bound g - 1.96*SE(g). (A runnable sketch of this rule appears after the transcript.)
  20. Uncertainty Sampling! Since each participant rates only k << N ideas, how do we choose which ideas to present? For each idea i, the probability of exposure is P(i) ∝ SE(i), where SE(i) is the standard error of idea i's ratings. (See the sampling sketch after the transcript.)
  21. CF Performance Assessment. There is no universal rule for how good an idea is, so we assess from specific perspectives: Do CF-selected ideas have broad topic coverage? Does CF select ideas of generally better quality? Does the CF idea ranking agree with the instructor's ranking?
  22. CF Performance Assessment (cont.). Figure 3: The number of comments on each topic among the top 20 comments for CS 169.2x. Topics: 1. Chat forums; 2. Basics; 3. Javascript; 4. Additional time; 5. Additional exercises; 6. Security; 7. Update technology.
  23. CF Performance Assessment (cont.). Quality scoring metric: 1 - not readable; 2 - readable but unrelated to the course; 3 - presents one idea about the course, but not a suggestion; 4 - presents a suggestion with some reasoning; 5 - presents a suggestion with reasoning and a proposed solution.
  24. CF Performance Assessment (cont.). A suggestion with a quality score of 5: "Design patterns are hard to grasp without getting your hands dirty in a messy problem. I think using a quiz for that week instead of a challenging homework assignment was a mistake. I understand the concepts as abstract entities but would still have a hard time figuring out when and how to use them. I felt the same way about the Javascript week as well. A homework assignment doing JS and AJAX on the rotten potatoes example would have been ideal." A suggestion with a quality score of 1: "Devise + Omniauth !!!"
  25. Additional Features. Instructor weekly updates.
  27. Conclusion. Developed a novel platform to generate timely feedback on course issues. Motivated student participation in courses. Highlighted valuable ideas using peer-to-peer collaborative filtering.
  28. Future Work. Explore how sorting and presenting ideas by factors such as time or novelty affects participation. Add topic tagging to organize suggested ideas.
  29. Questions? Thank you! For more information, visit: M-CAFE.ORG
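
A minimal Python sketch of the ranking rule from slide 19, assuming each idea carries at least two numeric ratings; the names (lower_bound, rank_ideas) and the data layout are illustrative, not taken from M-CAFE's codebase:

```python
import math

def lower_bound(ratings, z=1.96):
    """Lower end of the 95% confidence interval around the mean grade g."""
    n = len(ratings)
    g = sum(ratings) / n                                # mean grade g
    var = sum((r - g) ** 2 for r in ratings) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                             # standard error SE(g)
    return g - z * se                                   # g - 1.96*SE(g)

def rank_ideas(ideas):
    """ideas: dict mapping idea id -> list of ratings (two or more each).
    Returns ids sorted best-first by the conservative lower bound."""
    return sorted(ideas, key=lambda i: lower_bound(ideas[i]), reverse=True)

# An idea with many consistent ratings outranks one with a single
# high-variance pair, because its confidence interval is tighter.
print(rank_ideas({"a": [4, 5, 4, 5, 4, 5], "b": [5, 2], "c": [3, 3, 3]}))
```

Ranking by the lower bound rather than the raw mean penalizes ideas whose high average rests on only a few ratings.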
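
And a sketch of the uncertainty-sampling rule from slide 20, again with illustrative names. The slides do not say how ideas with too few ratings to estimate SE are handled, so the cold-start fallback here (give them the largest observed SE) is an assumption:

```python
import math
import random

def standard_error(ratings):
    """SE of the mean rating, or None when fewer than two ratings exist."""
    n = len(ratings)
    if n < 2:
        return None
    g = sum(ratings) / n
    var = sum((r - g) ** 2 for r in ratings) / (n - 1)
    return math.sqrt(var / n)

def choose_ideas(ideas, k):
    """Draw k distinct ideas with probability of exposure P(i) ∝ SE(i).
    Ideas lacking an SE estimate get the largest observed SE (an
    assumption) so brand-new ideas are surfaced quickly."""
    ses = {i: standard_error(r) for i, r in ideas.items()}
    known = [s for s in ses.values() if s is not None]
    ceiling = max(known) if known else 1.0
    pool = {i: max(s if s is not None else ceiling, 1e-9)
            for i, s in ses.items()}
    chosen = []
    for _ in range(min(k, len(pool))):
        items, weights = zip(*pool.items())
        pick = random.choices(items, weights=weights)[0]  # one weighted draw
        chosen.append(pick)
        del pool[pick]  # without replacement across the k slots
    return chosen
```

Sampling proportional to SE concentrates new ratings on the ideas whose quality estimates are least certain, which in turn tightens the lower bounds used for ranking.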
