The document proposes a model for providing automated feedback to students on their software tests in order to improve test quality. It aims to encourage students to reflect on their own code and write effective tests, rather than relying on others to find bugs. The model analyzes test results and coverage to identify untested functions and give targeted feedback. A future study is planned to evaluate how the feedback affects test quality over multiple assignment submissions. The goal is to help students solve problems independently through self-evaluation of their tests.
1. Generating Automated Feedback To Improve Software Testing Quality
Prepared by: Pratik Gundlupet Venkatesh
Advisor: Dr. Kevin Buffardi
2. INTRODUCTION
This work assists students in implementing meaningful software code and in identifying whether they have tested it adequately. It is important to provide feedback on shortcomings in both solution correctness and thoroughness of testing.
3. PROBLEM STATEMENT
Writing meaningful software tests and receiving feedback on them is difficult for students while they are still writing the code. We describe a model for scaffolding feedback that encourages students to reflect on their code and write effective software tests that reveal bugs, instead of relying on external sources or on trial-and-error techniques.
4. EXAMPLE
● A student may have tested thoroughly and covered all of their own unit test cases, but still fail the instructor's unit test(s), for example due to:
○ switching of arguments
○ misunderstanding requirements
● The implementation is correct, but no thorough testing or branch coverage was done.
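The "switching of arguments" case above can be sketched with a small example. This is a hypothetical Python illustration (the tools discussed in this poster target C++ with Google Test, and `subtract` is an invented function): the student's only test happens to use a symmetric input, so it passes even though the arguments are swapped, while the instructor's test with asymmetric inputs exposes the bug.

```python
def subtract(a, b):
    """Intended to return a - b, but the arguments are switched."""
    return b - a  # bug: should be a - b

# The student's only test uses a symmetric input, so the bug goes unnoticed:
print(subtract(3, 3) == 0)   # True: the student's test passes despite the bug

# An instructor test with asymmetric inputs reveals the swap:
print(subtract(5, 2) == 3)   # False: returns -3, not 3
```

This is exactly the situation the feedback model targets: the student's tests all pass, yet the test suite was not thorough enough to reveal the defect.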
5. RESEARCH QUESTIONS
● How does the feedback influence the quality of the solution
code and unit tests?
● Do students demonstrate better development habits in response to reflective feedback?
6. SIGNIFICANCE
The goal is for students to solve problems by evaluating the test cases they have written, rather than depending on someone else to identify the bugs. The primary effect we will observe is how the quality of students' testing changes across consecutive submissions to Testoscope, an educational software testing tool, after they receive the adaptive feedback that is meant to encourage reflection.
7. LITERATURE REVIEW
This research is based on the article Reconsidering Automated Feedback: A Test-Driven Approach by Dr. Kevin Buffardi and Stephen H. Edwards. It discusses giving students feedback on their assignments regarding both solution correctness and thoroughness of testing. The article encourages students to reflect on their code and concentrate on writing effective software tests that reveal bugs, instead of relying on external sources.
10. CONTRIBUTION
● Identify the functions that are causing bugs, and how well the students have tested those function(s).
● Modify the logic to interpret Google Test output by parsing its XML report, and gcov output with regular expressions, to extract function signatures.
● Given the signatures, determine which function(s) need attention in order to provide immediate feedback on test results.
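The parsing steps above can be sketched as follows. This is a minimal illustration, not Testoscope's actual implementation: it assumes Google Test was run with `--gtest_output=xml` (which emits a `<testcase>` element per test, with a nested `<failure>` element on failure) and that gcov was run with `-f`, which prints a per-function summary of the form `Function '<name>'` followed by `Lines executed:<pct>% of <n>`. The sample strings and function names are invented for the sketch.

```python
import re
import xml.etree.ElementTree as ET

# Hypothetical sample of a Google Test XML report (--gtest_output=xml)
GTEST_XML = """<testsuites tests="2" failures="1">
  <testsuite name="MathTest" tests="2" failures="1">
    <testcase name="HandlesZero" status="run" classname="MathTest"/>
    <testcase name="HandlesNegatives" status="run" classname="MathTest">
      <failure message="Expected 3, got -3"/>
    </testcase>
  </testsuite>
</testsuites>"""

# Hypothetical per-function summary in the style printed by `gcov -f`
GCOV_OUTPUT = """Function 'subtract(int, int)'
Lines executed:100.00% of 2

Function 'divide(int, int)'
Lines executed:0.00% of 4
"""

def failed_tests(xml_text):
    """Return the names of test cases that contain a <failure> element."""
    root = ET.fromstring(xml_text)
    return [case.get("name")
            for case in root.iter("testcase")
            if case.find("failure") is not None]

def function_coverage(gcov_text):
    """Map each function signature to its line-coverage percentage."""
    pattern = re.compile(
        r"Function '(?P<sig>[^']+)'\s*\nLines executed:(?P<pct>[\d.]+)%")
    return {m.group("sig"): float(m.group("pct"))
            for m in pattern.finditer(gcov_text)}

failures = failed_tests(GTEST_XML)
coverage = function_coverage(GCOV_OUTPUT)
# Functions with 0% line coverage were never exercised by the student's tests
untested = [sig for sig, pct in coverage.items() if pct == 0.0]

print(failures)   # ['HandlesNegatives']
print(untested)   # ['divide(int, int)']
```

Combining the two views, the feedback generator can point the student at both the failing test cases and the specific functions their test suite never exercised.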
11. CONCLUSION
● We provided a mechanism for immediate feedback to
students based on their test results.
● Improved the effectiveness of students' software unit tests, so that they do not depend on external sources to reveal bugs.
12. FUTURE WORK
● Categorize feedback based on solution code coverage.
● Conduct a real-world case study with multiple students' solution code on a programming assignment.
● Perform a repeated measures study on the feedback to
observe continuous improvement across multiple submissions
of an assignment.
13. REFERENCE
Kevin Buffardi and Stephen H. Edwards. 2015. Reconsidering Automated Feedback: A Test-Driven Approach. In Proceedings of the 46th ACM Technical Symposium on Computer Science Education (SIGCSE '15).