2. Motivations and Key Points at a glance:
• Deciding what is more and what is less important is critical when developing
software, in particular when determining the contents of a development cycle, and
in any activity with limited time and resources
• However, different stakeholders (developers, users, operations, marketing…) may have
different perspectives and sometimes conflicting opinions on the relative priority of
requirements to be addressed
• Techniques and tools supporting decision-making abound, but their practical use is relatively rare
• … especially in small and distributed organizations, because of the perceived costs and effort
relative to the benefits
• The process we propose and its accompanying tool tackle these issues by:
• Minimizing the cost of set-up and execution of a decision-making session
• Allowing it to be easily repeated, with the same or different participants / criteria / items
• Distributing it in time and space (i.e., not requiring everybody to be at the same time in the
same place)
• Inserting gamification elements to make decision-making organizationally flexible, engaging
for the participants, and potentially a source of further organizational knowledge in its own right
• Experimented with in a few case studies so far; the very first is illustrated here
06 July 2017 Tool-Supported Requirement Prioritisation 2
7. Gamifying the process (2/2)
• Playfulness (how much one is “entertained” by the system), to be introduced in
various ways. We experimented with:
• Competition and Reward by means of points-ification (i.e. the collection of points according to
the actions performed and the way they are executed, in turn leading to the introduction of
leaderboards, badges, and so on)
• Time pressure, which results both from points-ification (the faster one plays, the more points one
collects) and from the simple psychological trick of showing a ticking clock
• Lightweight-ness of gameplay (i.e. the simplicity and speed with which players can perform their
actions), not only to improve playfulness but also to support repeatability
• Easy repeatability, enabling multiple runs with the same or different stakeholders
(participants), the same or different sets of requirements, the same or different
sets of prioritization criteria, and so on
• Further, the introduction of some form of KPIs (e.g., points computed according to
various criteria and kept separate) may be exploited to let competence and
engagement emerge over time
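The KPI-style split of points described above can be sketched as follows. This is a hypothetical scoring rule, not the tool's actual implementation: the constants, the time window, and the per-criterion leaderboard layout are all assumptions made for illustration.

```python
from collections import defaultdict

BASE_POINTS = 10     # assumed flat reward per answered comparison
TIME_LIMIT_S = 30    # assumed per-comparison time window

def score(answer_time_s: float) -> int:
    """More points for faster answers (time pressure); no bonus past the limit."""
    bonus = max(0.0, TIME_LIMIT_S - answer_time_s) / TIME_LIMIT_S
    return BASE_POINTS + round(BASE_POINTS * bonus)

# Points kept separate per prioritization criterion, so that separate
# KPI-style leaderboards can be derived later: criterion -> player -> points.
leaderboard = defaultdict(lambda: defaultdict(int))
leaderboard["user value"]["alice"] += score(5.0)   # fast answer
leaderboard["user value"]["bob"] += score(25.0)    # slow answer
print(dict(leaderboard["user value"]))  # alice outscores bob
```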
8. GRP (Gamified collaborative Requirements
Prioritisation): Experimental set-up
• The experimented process is based on the well-known AHP (Analytic Hierarchy Process)
• Pairwise comparisons: the decision-maker expresses which of two items (requirements, in our case) is
preferred according to a specific criterion (e.g. user value, development complexity, …)
• 9-value scale used to express preferences, ranging from “A is greatly preferred to B” to “indifferent” to
“B is greatly preferred to A”.
• Number of comparisons = items·(items−1)/2 per criterion, i.e. ≈ ½ · items² · criteria in total
• However, a stopping criterion can be imposed, and the supervisor may decide to stop at any time; computations can use
partial data
• All opinion providers can express their preferences on all requirements. Negotiation is
automatically requested by the system when two or more players have strongly different
opinions (contrasting preferences exceeding a threshold)
• Resolution is currently implemented as the negotiator either expressing the final opinion or
abstaining, letting the underlying computations use averages
• The tool provides a Web interface. Players can intervene from any location at their
leisure but within a time window defined by the supervisor
• The gamification thus included many playfulness aspects, but no points-ification (competition & rewards)
in the experiment reported here
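The per-criterion AHP computation described above can be sketched as follows. The matrix values are hypothetical, and the principal-eigenvector aggregation is the standard AHP method; the slides do not show the tool's exact implementation.

```python
import numpy as np

# Reciprocal pairwise-comparison matrix for 3 hypothetical requirements
# under one criterion, on the 9-value AHP scale
# (9 = "greatly preferred", 1 = indifferent; A[j][i] = 1 / A[i][j]).
A = np.array([
    [1.0, 3.0, 5.0],   # R1 vs (R1, R2, R3)
    [1/3, 1.0, 2.0],   # R2 vs (R1, R2, R3)
    [1/5, 1/2, 1.0],   # R3 vs (R1, R2, R3)
])

# Standard AHP: the priority vector is the principal eigenvector of A,
# normalized so the weights sum to 1.
vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = np.abs(w) / np.abs(w).sum()   # the eigenvector's sign is arbitrary

print({f"R{i+1}": round(x, 3) for i, x in enumerate(w)})
```

With 3 items this needs 3·2/2 = 3 judgments per criterion, matching the comparison-count formula above.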
10. The experiment: data and characteristics
• Development team of R&D department
• Delta Informatica’s own group working on authoring tools for virtual reality-based
training (PRESTO tool suite)
• An SME, with little or no user input on the most novel aspects
• 16 requirements, grouped in 4 clusters
• 2 criteria for evaluation (effort and user value)
• 8 participants: engineers of various seniority levels and their coordinator
• All as opinion providers
• The coordinator also had the role of negotiator
• 24 hours for “playing” 4 games (one per cluster)
• Questionnaires and interviews after conclusion
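For concreteness, assuming the 4 clusters were equally sized (4 requirements each; the slides do not state the exact split), the pairwise-comparison load per game and per participant follows from the comparison-count formula of the set-up slide:

```latex
\frac{n(n-1)}{2} \times |\text{criteria}| \;=\; \frac{4 \cdot 3}{2} \times 2 \;=\; 12 \text{ judgments}
```

Under that assumption, the 8 opinion providers would contribute up to 96 judgments per game, and up to 384 across the 4 games.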
12. Results from the experiment
• GRP perceived as effective for reaching acceptable decisions in an efficient
manner
• Participants agreed to a high degree with the final outcome of the process for all
clusters of requirements
• This despite a high level of disagreement in pairwise preferences
• Comparing GRP outcomes before and after the negotiator’s intervention
showed that the negotiator plays an important role in managing conflicting
preferences and affects the results
• Reported advantages include the ability to track decisions
• Suggested potential improvements include mechanisms for “live
communication” during gameplay
• The measured time required to express preferences is quite short
13. Current state
• Improved prototype and extended scope of process
• Points-ification fully supported
• Prioritization techniques alternative to AHP have been introduced, most importantly non-
pairwise comparisons (using genetic algorithms to compute agreements, measuring each
individual choice’s distance from the group’s consensus, …)
• Revised the entire process to give more importance to the supervisor and to introduce
planning tools, e.g. to select the most suitable algorithm / technique, to decide how many
games to play and with which participants, and so on
• Tried out in other experiments and validated against the project’s use
cases
• Results available as project’s reports
• Overall, approach always well received
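One way to read the “distance from the group’s consensus” measure mentioned above is sketched below. The slides only name the idea in passing, so the metric (Euclidean distance from the group mean over per-requirement scores) and all names are assumptions, not the tool’s actual computation.

```python
import numpy as np

def consensus_distance(individual, group):
    """Euclidean distance between one player's per-requirement scores
    and the group's mean scores (one score per requirement)."""
    mean = np.mean(group, axis=0)
    return float(np.linalg.norm(np.asarray(individual) - mean))

# Three players scoring three requirements (higher = more important).
votes = [np.array([3, 1, 2]), np.array([3, 2, 1]), np.array([1, 2, 3])]
distances = [round(consensus_distance(v, votes), 2) for v in votes]
print(distances)  # the third player is farthest from the consensus
```

A threshold on such a distance could then trigger negotiation, analogously to the pairwise case.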