Auto-Grading Parallel Programs
Building plugins for WebCAT
Team Lead: Max Grossman
Team Members: Maha Aziz, Anant Tibrewal, Anna Chi
● Background & Motivation
● Related Work
● What is WebCAT?
● Workflow
● Feature table
● Our Progress
● Future Plans
Overview
● Current grading method & its shortcomings
● A tool for testing and analyzing parallel programs automatically
○ Transparency of the grading process
○ Speed up and simplify the grading process
○ Focus on application performance
○ Possibility of creating a parallel programming MOOC
Background & Motivation
● Mooshak
○ Intended for hosting coding competitions online
○ Simple configuration, minimal requirements (Apache, Linux, Tcl)
○ Not very pretty UI
● Marmoset
○ Not very extensive Java support
Related Work
What is WebCAT?
From May 11th until now:
● Related Work: researched multiple autograding systems
● Targeted WebCAT
○ Install and set up
○ Hackathon: plug-in structure
○ Feature table
Workflow
Description | Priority (1-5, 1 = highest) | Viability (1-5, 1 = highest) | Workload (1-5, 1 = highest) | Assignee
Code styling / Checkstyle grading | 2 | 1 | 2 | Anant
Local grading of HJ-lib programs running in a single thread for correctness | 1 | 1 | 1 | Anant
Remote grading of HJ-lib programs running on multiple cores in a compute cluster | 2 | 3 | 1 | Anna
Commit student files to SVN before/after running the autograder | 1 | 1 | 5 | Anna
Feature Table
Description | Priority (1-5, 1 = highest) | Viability (1-5, 1 = highest) | Workload (1-5, 1 = highest) | Assignee
Give a way for student-provided tests to be graded, with feedback | 3/4 | Once we have instructor-provided tests, this is a 2 | 3 |
Instructor-provided tests with limited feedback | 1 | 1 | 1 |
Ranking of student submissions based on performance, visible to students to turn every homework into a competition (with names anonymized?) | 4 | 4 | 2 | Maha
Statistics on how many students are passing a given test, to give feedback on the relative difficulty of different tests (maybe part of the leaderboard?) | 3 | 4 | 2 |
Feature Table (continued)
Description | Priority (1-5, 1 = highest) | Viability (1-5, 1 = highest) | Workload (1-5, 1 = highest) | Assignee
Student peer reviews | 5 | 5 | 1 |
Different types of static code analysis tools | 2 | Once we have Checkstyle, this should be easier (3) | 2 |
NetID authentication | 1 | 3 | ? | Maha
Feature Table (continued)
Architecture
[Diagram: students submit homework to WebCAT; WebCAT runs performance tests on STIC and sends the results to the Leaderboard via a POST request for new submissions; students view the Leaderboard; student submissions are backed up in SVN.]
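As a rough illustration of the "POST request for new submissions" step in the diagram above, the grading side could push one submission's performance results to the leaderboard with a plain HTTP POST. This is a sketch only: the endpoint URL and form field names are assumptions, not the project's actual interface.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: send one submission's performance results to the
// leaderboard. The URL and form fields are illustrative assumptions.
public class LeaderboardClient {
    public static void postResult(String anonId, String testName,
                                  int cores, double execTimeSec) throws Exception {
        // Placeholder endpoint, not the project's real leaderboard URL.
        URL url = new URL("http://leaderboard.example.edu/submissions");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

        // Only an anonymized ID is sent, matching the "names anonymized" goal.
        String body = "student=" + URLEncoder.encode(anonId, "UTF-8")
                + "&test=" + URLEncoder.encode(testName, "UTF-8")
                + "&cores=" + cores
                + "&time=" + execTimeSec;
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        if (conn.getResponseCode() != HttpURLConnection.HTTP_OK) {
            throw new RuntimeException("Leaderboard rejected submission: "
                    + conn.getResponseCode());
        }
    }
}
```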
● Anant
● Maha
● Anna
Our Progress
● Anant
○ Local grading of HJ-lib programs running in a single thread for correctness
○ Built an HJ program with JUnit tests to exercise WebCAT
■ Correctness and performance tests (see the JUnit sketch after this slide)
○ Incorporated the FindBugs static analysis tool into WebCAT
■ A tool that detects bugs in code
Our Progress
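The JUnit 4 sketch below shows the style of instructor-side test pair used for local grading: one test checks correctness against a trusted sequential reference, the other puts a wall-clock bound on a larger run. `StudentArraySum` is a hypothetical stand-in for a student submission; the project's real tests target HJ-lib programs in the same correctness/performance style.

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

// Illustrative grader-side tests (JUnit 4).
public class ArraySumGraderTest {

    // Stand-in for student code; in the real setup this comes from the
    // student's submission, not from the test file.
    static class StudentArraySum {
        static long parSum(int[] arr) {
            long sum = 0;
            for (int v : arr) sum += v;   // sequential stand-in
            return sum;
        }
    }

    // Trusted sequential reference used to judge correctness.
    private static long seqSum(int[] arr) {
        long sum = 0;
        for (int v : arr) sum += v;
        return sum;
    }

    @Test
    public void testCorrectness() {
        int[] input = new int[1_000_000];
        for (int i = 0; i < input.length; i++) input[i] = i % 7;
        assertEquals(seqSum(input), StudentArraySum.parSum(input));
    }

    @Test(timeout = 60_000)
    public void testPerformance() {
        int[] input = new int[20_000_000];
        long start = System.currentTimeMillis();
        StudentArraySum.parSum(input);
        long elapsed = System.currentTimeMillis() - start;
        // Assumed threshold for illustration; real limits would be set per
        // assignment and per core count.
        assertTrue("Too slow: " + elapsed + " ms", elapsed < 10_000);
    }
}
```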
● Maha
○ "Leaderboard"
○ Created a database that contains all student submissions from the SVN
■ Includes test names, execution times, core counts, etc.
○ The database is displayed on a webpage so students can see a ranking of how well their classmates are doing (see the storage sketch after this slide)
■ All information on the webpage is anonymous
Our Progress
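A minimal sketch of what recording one run in the leaderboard database might look like via JDBC. The table layout, column names, and SQLite connection string are assumptions, not the real schema; a JDBC driver (e.g. sqlite-jdbc) would need to be on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

// Rough sketch of the leaderboard store; layout and URL are placeholders.
public class LeaderboardStore {
    private static final String DB_URL = "jdbc:sqlite:leaderboard.db"; // placeholder

    public static void recordRun(String anonId, String testName,
                                 int cores, double execTimeSec) throws Exception {
        try (Connection conn = DriverManager.getConnection(DB_URL)) {
            try (Statement st = conn.createStatement()) {
                // One row per (submission, test) run pulled from the SVN.
                st.executeUpdate("CREATE TABLE IF NOT EXISTS runs ("
                        + "anon_id TEXT, test_name TEXT, "
                        + "cores INTEGER, exec_time_sec REAL)");
            }
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO runs (anon_id, test_name, cores, exec_time_sec) "
                    + "VALUES (?, ?, ?, ?)")) {
                ps.setString(1, anonId);      // anonymized ID, never the NetID
                ps.setString(2, testName);
                ps.setInt(3, cores);
                ps.setDouble(4, execTimeSec);
                ps.executeUpdate();
            }
        }
    }
}
```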
Leaderboard
● Anna
○ Handled backup of student files using SVN (a sketch of this step follows this slide)
○ [Diagram: SVN backup layout under student-runs, keyed by student ID and number of submission, storing the student's code before and after grading.]
Results
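A sketch of the SVN backup step, assuming the repository layout implied by the diagram (student-runs/&lt;student ID&gt;/&lt;submission number&gt;/ with snapshots before and after grading). Paths, commands, and commit messages are illustrative, not the project's exact setup.

```java
import java.io.File;

// Sketch of the SVN backup step. The layout is inferred from the diagram
// above and may differ from the actual repository structure.
public class SvnBackup {

    private static void run(File workDir, String... cmd) throws Exception {
        Process p = new ProcessBuilder(cmd).directory(workDir).inheritIO().start();
        if (p.waitFor() != 0) {
            throw new RuntimeException("Command failed: " + String.join(" ", cmd));
        }
    }

    /** Commit a snapshot of a student's submission, before or after grading. */
    public static void backup(File workingCopy, String studentId,
                              int submissionNum, String phase) throws Exception {
        File dest = new File(workingCopy,
                "student-runs/" + studentId + "/" + submissionNum + "/" + phase);
        dest.mkdirs();
        // ... copy the student's code into dest here ...
        run(workingCopy, "svn", "add", "--force", "student-runs");
        run(workingCopy, "svn", "commit", "-m",
                "Backup " + phase + " grading: " + studentId + " #" + submissionNum);
    }
}
```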
Remote grading on multiple cores in a cluster
[Diagram: WebCAT sends the student code and performance tests to STIC (the cluster), which builds the folder structure, runs the tests, and sends the performance results back.]
● Set up through the admin interface
● Personal Slurm configuration (see the submission sketch after this slide)
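A minimal sketch of how the grading side might hand a submission to Slurm on the cluster: write an sbatch script into the job directory and submit it. The script body, resource limits, and the "ant run-performance-tests" target are assumptions used only for illustration.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of submitting a grading job to Slurm; details are assumptions.
public class SlurmSubmitter {

    public static void submit(String studentId, Path jobDir, int cores)
            throws IOException, InterruptedException {
        String script = String.join("\n",
                "#!/bin/bash",
                "#SBATCH --job-name=grade-" + studentId,
                "#SBATCH --cpus-per-task=" + cores,
                "#SBATCH --time=00:10:00",            // assumed per-job time limit
                "#SBATCH --output=grade-%j.out",
                "cd " + jobDir,
                // Build the staged folder structure and run the instructor
                // performance tests against the student code (details elided).
                "ant run-performance-tests");

        Path scriptFile = jobDir.resolve("grade.sbatch");
        Files.write(scriptFile, script.getBytes(StandardCharsets.UTF_8));

        Process p = new ProcessBuilder("sbatch", scriptFile.toString())
                .directory(jobDir.toFile())
                .inheritIO()
                .start();
        if (p.waitFor() != 0) {
            throw new IOException("sbatch failed for " + studentId);
        }
    }
}
```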
DEMO
● Feature table
● Provide documentation for students and instructors
● EduHPC
Future Plans
● Current inconvenient grading process
● Research about autograders
● Strengths and useful properties of WebCAT
● Feature Table
● Our progress
● Next steps
Summary
