2. Programming Assessment
and
Data Collection
Petri Ihantola
Assistant Professor at Tampere University of Technology (2014 - ); D.Sc. (Tech.) from Aalto University in 2011;
Software Engineer in Test at Google (2007-2009); taught various large-class programming courses at Aalto University,
formerly Helsinki University of Technology (2004 - 2014)
10. Traditionally, feedback has
focused on the end products:
correctness, efficiency, style, design, ...
Ala-Mutka. A survey of automated assessment approaches for programming assignments. Computer Science Education, 15(2):
83-102, 2005.
11. May encourage ineffective
trial and error processes
image: https://www.flickr.com/photos/oliveira_comp/14261335089 cc (by-nc-sa)
12. May encourage ineffective
trial and error processes
tackled by limiting the number of
submissions/feedback, using time
penalties, making each exercise unique,
organizing contests, ...
Ihantola et al. 2010. Review of recent systems for automatic assessment of programming assignments. In Proceedings of the 10th
Koli Calling International Conference on Computing Education Research. 86-93.
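As a rough, hypothetical illustration of two of the policies listed above (a submission cap and a score penalty), a grader-side check might look like the sketch below; the names and thresholds are made-up assumptions, not taken from any of the cited systems.

# Illustrative sketch only: two alternative policies for discouraging
# feedback-driven trial and error. Names and thresholds are assumptions.
MAX_SUBMISSIONS = 10       # hypothetical cap on graded submissions per exercise
PENALTY_PER_EXTRA = 0.05   # hypothetical deduction per submission over the cap

def accept_submission(previous_submissions: int) -> bool:
    # Policy 1: simply refuse further submissions once the cap is reached.
    return previous_submissions < MAX_SUBMISSIONS

def penalized_score(raw_score: float, submissions_used: int) -> float:
    # Policy 2: accept everything but deduct score for excess submissions.
    extra = max(0, submissions_used - MAX_SUBMISSIONS)
    return max(0.0, raw_score * (1.0 - extra * PENALTY_PER_EXTRA))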
13. Hey, wait a moment... isn't
this already an
example of providing
feedback from the process?
14. So what makes it hard to
provide even better feedback
(from processes)?
15. So what makes it hard to
provide even better feedback
(from processes)?
16. Systems collect data
But when trying to get the big picture,
we still have to make many assumptions
image: unknown
19. Let's look at easier problems
first
Ihantola & Karavirta (2011). Two-Dimensional Parson’s Puzzles: The Concept, Tools, and First Observations. In: Journal of
Information Technology Education: Innovations in Practice 10, pp. 1–14.
20. Helminen, Ihantola, Karavirta, Malmi (2012). How Do Students Solve Parsons Programming Problems? – An Analysis of Interaction Traces. In Proceedings
of the 8th International Computing Education Research Conference, pp. 119–126, Auckland, New Zealand.
Karavirta, Helminen, Ihantola (2012). A mobile learning application for parsons problems with automatic feedback. In: Koli Calling ’12: Proceedings of the
12th Koli Calling International Conference on Computing Education Research.
Looks like the student
got stuck here, let's
help.
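One concrete way such Parsons-puzzle interaction traces could be used to spot a struggling student is sketched below. The trace format (timestamped moves with a count of correctly placed lines) and the "stuck" heuristic are assumptions made for illustration; they are not the analysis from the cited papers.

# Hypothetical trace: list of (time_in_seconds, correctly_placed_lines) pairs,
# one per drag-and-drop move. The heuristic flags a student as stuck after a
# long idle gap or many moves without improvement -- an illustrative assumption.
def looks_stuck(trace, idle_limit=120, no_progress_limit=8):
    if len(trace) < 2:
        return False
    last_time, best = trace[0]
    moves_without_progress = 0
    for time, correct in trace[1:]:
        if time - last_time > idle_limit:       # long pause between moves
            return True
        if correct > best:
            best = correct
            moves_without_progress = 0
        else:
            moves_without_progress += 1         # shuffling without progress
        if moves_without_progress >= no_progress_limit:
            return True
        last_time = time
    return False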
21. Back to real life and real
programming environments
22. Back to real life and real
programming environments
23. How much information is
lost when storing snapshots
at different granularities?
submissions, save points, key-strokes
Vihavainen, Luukkainen & Ihantola. 2014. Analysis of source code snapshot granularity levels. In Proceedings of the 15th Annual
Conference on Information technology education (SIGITE '14). ACM
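To make the granularity comparison concrete, the sketch below down-samples a keystroke-level event log into save-level and submission-level snapshot sets and counts how many intermediate program states the coarser levels never see. The event format is an assumption for illustration, not the data format of the cited study.

# Assumed event format: each event is a dict with a "type" field ("insert",
# "remove", "paste", "save", "run", "test", "submit") and a "source" field
# holding the full file contents after the event.
KEEP = {
    "keystroke": {"insert", "remove", "paste", "save", "run", "test", "submit"},
    "save": {"save", "run", "test", "submit"},
    "submission": {"submit"},
}

def snapshots_at(events, granularity):
    # Keep only the states a given granularity level would have stored.
    return [e["source"] for e in events if e["type"] in KEEP[granularity]]

def states_lost(events, coarse):
    # Number of distinct intermediate states the coarser level discards.
    fine = set(snapshots_at(events, "keystroke"))
    kept = set(snapshots_at(events, coarse))
    return len(fine - kept)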
25. ● Introduction to Programming (MOOC)
● Spring 2014, University of Helsinki
● 1166 students
● 93231 submissions
● 1.3 million saves, runs and tests
● 37 million events (insert, remove, paste)
Novice programmers
26. ● 50% of students work on assignments that they
never submit - no information on the progress in
such (harder?) assignments
● Programmers with previous experience progress
more directly (make fewer sidesteps)
● 6.3 snapshots / submission and
30 key events / snapshot
Some findings
27. So... collect the data while you
can. It cannot be regenerated,
e.g., by interpolation.
29. Can we automatically detect
students’ perceived difficulty
as they are working
on programming tasks?
Petri Ihantola, Juha Sorva, and Arto Vihavainen. 2014. Automatically detectable indicators of programming
assignment difficulty. In Proceedings of the 15th Annual Conference on Information technology education (SIGITE
'14). ACM, New York, NY, USA, 33-38. (best paper award)
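The slide does not list the indicators themselves, so the sketch below only shows the general idea of extracting simple, automatically detectable features from a session trace; the chosen indicators (time on task, run/test count, deleted characters, submission count) and the event format are assumptions, not necessarily those used in the cited study.

def session_indicators(events):
    # events: chronological list of dicts with "type", "time" (seconds) and,
    # for edit events, "chars" -- an assumed format for illustration only.
    if not events:
        return {}
    return {
        "time_on_task": events[-1]["time"] - events[0]["time"],
        "runs_and_tests": sum(1 for e in events if e["type"] in ("run", "test")),
        "chars_deleted": sum(e.get("chars", 0) for e in events if e["type"] == "remove"),
        "submissions": sum(1 for e in events if e["type"] == "submit"),
    }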
30. Can we understand how the
way students type
their code evolves over time?
Arto Vihavainen, Juha Helminen, and Petri Ihantola. 2014. How novices tackle their first lines of code in an IDE:
analysis of programming session traces. In Proceedings of the 14th Koli Calling International Conference on
Computing Education Research (Koli Calling '14). ACM, New York, NY, USA, 109-116.
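As one illustration of how typing behaviour could be summarised from insert and remove events, the sketch below computes the share of insertions that simply append at the end of the file versus edits made inside existing code; the event fields used ("offset", "doc_length") are assumptions and the metric is an example, not the analysis of the cited paper.

def append_ratio(insert_events):
    # Fraction of insert events that append at the current end of the document.
    # Assumed fields: "offset" (insert position) and "doc_length" (document
    # length just before the insert).
    if not insert_events:
        return 0.0
    appended = sum(1 for e in insert_events if e["offset"] >= e["doc_length"])
    return appended / len(insert_events)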
32. The three main goals of
feedback are to help a learner
understand and learn about
1. the learning goals
2. own progress towards these goals
3. activities needed to make better progress
Hattie & Timperley (2007). The Power of Feedback. Review of Educational Research, 77(1), 81-112.
35. However, we should not
ignore the vast amount of
previous research
e.g., Juha Helminen, Petri Ihantola, and Ville Karavirta. 2013. Recording and analyzing in-browser
programming sessions. In Proceedings of the 13th Koli Calling International Conference on
Computing Education Research (Koli Calling '13). 13-22.