Sebastiano Panichella and Marcela Ruiz: Requirements-Collector: Automating Requirements Specification from Elicitation Sessions and User Feedback. IEEE International Requirements Engineering Conference (RE’20).
Zurich Universities of Applied Sciences and Arts
REQUIREMENTS-COLLECTOR:
AUTOMATING REQUIREMENTS SPECIFICATION FROM
ELICITATION SESSIONS AND USER FEEDBACK
SEBASTIANO PANICHELLA, MARCELA RUIZ
ZURICH UNIVERSITY OF APPLIED SCIENCES
RE POSTERS & DEMOS 2020
sebastiano.panichella@zhaw.ch
marcela.ruiz@zhaw.ch
https://github.com/lmruizcar/requirements_classifier
Deep Learning
Classifier
Component
Objective: replicate the classification process previously performed with machine learning.
Technique: model speaker turns to represent when each person speaks in a conversation.
In action: use global vectors for word representation (GloVe) to identify semantic similarity.
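The idea behind "semantic similarity via word vectors" can be sketched as follows. This is a minimal illustration with made-up 4-dimensional embeddings, not the tool's actual GloVe model (real GloVe vectors have 50-300 dimensions and are trained on large corpora): words whose vectors point in similar directions are treated as semantically related.

```python
from math import sqrt

# Toy "GloVe-style" embeddings (illustrative values only, not real GloVe data).
EMBEDDINGS = {
    "requirement": [0.8, 0.1, 0.3, 0.2],
    "specification": [0.7, 0.2, 0.4, 0.1],
    "banana": [0.1, 0.9, 0.0, 0.7],
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: closer to 1.0 = more similar."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

sim_related = cosine_similarity(EMBEDDINGS["requirement"], EMBEDDINGS["specification"])
sim_unrelated = cosine_similarity(EMBEDDINGS["requirement"], EMBEDDINGS["banana"])
print(sim_related > sim_unrelated)  # related terms point in closer directions
```

In the same spirit, a DL classifier can compare utterances from different speaker turns by the similarity of their word vectors rather than by exact word overlap.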
Deep Learning vs. Machine Learning?
Deep Learning
& Machine Learning
Components
Objective: classify transcripts and user feedback.
Technique: classification via ML and DL models.
In action: visualize collected requirements from live requirements sessions and user feedback.
Requirements Collector:
DL-component: https://github.com/lmruizcar/Requirements-Collector-DL-Component
ML-component: https://github.com/spanichella/Requirement-Collector-ML-Component
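To make the classification objective concrete, here is a minimal sketch of supervised text classification on user-feedback sentences. The categories, training sentences, and hand-rolled Naive Bayes model are all hypothetical illustrations; the actual components use the ML/DL pipelines linked above, trained on real datasets.

```python
from collections import Counter, defaultdict
from math import log

# Tiny labelled corpus of user-feedback sentences (hypothetical examples).
TRAIN = [
    ("the app crashes when i open the camera", "bug"),
    ("crash on startup after the last update", "bug"),
    ("please add a dark mode option", "feature"),
    ("it would be great to add offline support", "feature"),
]

def train_naive_bayes(examples):
    """Count words per class; likelihoods use add-one smoothing at query time."""
    word_counts = defaultdict(Counter)
    class_counts = Counter()
    vocab = set()
    for text, label in examples:
        class_counts[label] += 1
        for word in text.split():
            word_counts[label][word] += 1
            vocab.add(word)
    return word_counts, class_counts, vocab

def classify(text, word_counts, class_counts, vocab):
    """Return the class with the highest log-posterior for the sentence."""
    total = sum(class_counts.values())
    best_label, best_score = None, float("-inf")
    for label in class_counts:
        score = log(class_counts[label] / total)  # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.split():
            score += log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

model = train_naive_bayes(TRAIN)
print(classify("the camera crashes", *model))     # -> bug
print(classify("add offline dark mode", *model))  # -> feature
```

The same interface applies to elicitation-session transcripts: sentences in, requirement-relevant categories out.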
Deep Learning vs. Machine Learning: Results
Method: to evaluate Requirements-Collector's accuracy, we experimented with:
- over ten supervised ML models (J48, PART, NaiveBayes, IBk, OneR, SMO, Logistic,
AdaBoostM1, LogitBoost, DecisionStump, LinearRegression, RegressionByDiscretization)
- and the DL method described in the previous slides
Our results:
- For classifying transcripts, it is more appropriate to leverage DL strategies (we achieved
an average F-measure of 33% with DL versus an average of 5% with ML models).
- When the task concerns the classification of user feedback from user reviews, the best-
performing ML model (i.e., SMO) achieves an F-measure of 77%.
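As a back-of-the-envelope illustration of the reported metric: the F-measure is the harmonic mean of precision and recall. The confusion counts below are hypothetical, chosen only to show how a 77% score arises; they are not the paper's data.

```python
def f_measure(tp, fp, fn):
    """F1 score: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical confusion counts for one feedback category.
print(round(f_measure(tp=77, fp=23, fn=23), 2))  # -> 0.77
```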
Support digital transformation by digitally
transforming software production
• Significant reduction of manual
tasks
• Software analysts are empowered
• Efficient communication
• Support any digital transformation
process
Current and (near) future work
• Exploring different machine learning
classification techniques to improve classification
accuracy
• Working on “gluing” together the different parts of
identified user stories; neural networks are a
candidate technique
• Automatic selection of ontologies for supporting
contextual identification of roles.
• Validations: if you have recorded RE sessions,
please share!