Design Principles of Advanced Task Elicitation Systems
Transcript

  • 1. Design Principles of Advanced Task Elicitation Systems (*)
    Karlsruhe, November 30th, 2012
    Prof. Dr. Alexander Mädche
    Chair of Information Systems IV (ERIS), Business School, and Institute for Enterprise Systems (InES), University of Mannheim
    http://eris.bwl.uni-mannheim.de | http://ines.uni-mannheim.de
    (*) Joint work with: H. Meth, Y. Li, B. Mueller.
  • 2. Agenda
    1 Introduction
    2 Related Work
    3 Methodology
    4 Exploring and Evaluating Design Principles
    5 Discussion, Future Research & Summary
  • 3. Motivation
    The failure rate of software development projects is still high. Driven by software usage in private life, user expectations are growing. Understanding the requirements remains the major challenge:
    - 35% of requirements change throughout the software lifecycle (Jones, 2008)
    - 45% of delivered features are never used (Standish Report, 2009)
    - 82% of projects cited incomplete and unstable requirements as the number one reason for failure (Taylor, 2000)
  • 4. State-of-the-Art in Software Development
    Continuous stakeholder integration, cross-functional teams, and incremental & artifact-driven development.
    [Figure: analysis and engineering phases at the intersection of IS Development, Requirements Engineering, Software Engineering, and Human-Computer Interaction]
  • 5. Focus of this talk
    Approximately 80% of requirements are recorded in natural language (Mich et al. 2004; Neill and Laplante 2003):
    - Interview transcripts
    - Workshop memos
    - Narrative scenarios
    In large-scale development, manual requirements elicitation is known to be time-consuming, error-prone, and monotonous. The study by Mich et al. (2004) on current elicitation practices explicates the need for advanced support, with a specific focus on automation.
  • 6. Agenda
    1 Introduction and Motivation
    2 Related Work
    3 Methodology
    4 Exploring and Evaluating Design Principles
    5 Discussion, Future Research & Summary
  • 7. Basic Definitions
    Requirements elicitation is the process of discovering requirements through direct interaction with stakeholders or through analysis of documents or other sources of information (Ratchev et al. 2003).
    A core activity in this process is the identification of relevant tasks to be supported by the software, referred to as task elicitation (also task analysis) (Lemaigre et al. 2008; Paterno 2002).
    Task elicitation aims at capturing the interaction between user and system on a detailed level, differentiating between actors, activities, and data (Tam et al. 1998); see the sketch below.
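    As an illustration only (the class and field names are hypothetical and not taken from the talk), a task element that distinguishes actor, activity, and data can be modeled as a small data structure:

```python
from dataclasses import dataclass

@dataclass
class TaskElement:
    """A single elicited task, split into the dimensions named on the slide."""
    actor: str      # who performs the task, e.g. "traveler"
    activity: str   # what is done, e.g. "submit expense report"
    data: str       # the information object involved, e.g. "expense report"
    source: str     # transcript passage the task was elicited from

# Example usage with a sentence from a (hypothetical) travel-management transcript:
task = TaskElement(
    actor="traveler",
    activity="submit expense report",
    data="expense report",
    source='"After the trip I submit my expense report online."',
)
print(task)
```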
  • 8. Related Work
    Various attempts at advancing task elicitation through specialized task elicitation systems (TES) have been made, in two major research streams:
    1 Requirements Engineering
      - Identification of abstractions (Gacitua et al. 2011; Goldin and Berry 1997; Kof 2004; Rayson et al. 2000)
      - Identification and classification of requirements (Cleland-Huang et al. 2007; Casamayor et al. 2010; Kiyavitskaya and Zannone 2008)
      - Creation of requirements and design models (Ambriola and Gervasi 2006)
    2 Human Computer Interaction
      - Automating task elicitation with artifacts, e.g. U-TEL (Tam et al. 1998) or the model elicitation tool (Lemaigre et al. 2008)
    Common pattern: leverage automation techniques and knowledge bases.
  • 9. Related Work
    Existing work has three major shortcomings:
    - Manual creation of knowledge bases
    - Lack of systematic empirical evaluation of productivity effects
    - Limited explanation of the artifact's conceptualization
    Research question addressing this gap: Which design principles of task elicitation systems improve task elicitation productivity over manual task elicitation?
  • 10. Agenda
    1 Introduction and Motivation
    2 Related Work
    3 Methodology
    4 Exploring and Evaluating Design Principles
    5 Discussion, Future Research & Summary
  • 11. Methodology
    The research question aims at the acquisition of theoretical design knowledge about task elicitation systems.
    Design Science Research as proposed by March & Smith (1995) is an applicable and appropriate approach to address the research question (cf. Hevner et al. 2004).
  • 12. Research Design
    The DSR project builds and evaluates an artifact to support task elicitation from natural language documents, guided by the Design Science framework suggested by Kuechler & Vaishnavi (2008). Three runs of the general design cycle (awareness of problem, suggestion, development, evaluation, conclusion) were carried out:
    - Cycle 1: Literature review and expert interviews; artifact concept version (click-through); expert evaluation with focus on usefulness.
    - Cycle 2: Literature review and expert feedback; artifact prototype version (first implementation); expert evaluation with focus on ease of use.
    - Cycle 3: Analysis & conceptualization; artifact final version; experimental evaluation.
    Outcome: design principles and operational and goal knowledge (Meth et al. 2012a).
  • 13. Agenda
    1 Introduction and Motivation
    2 Related Work
    3 Methodology
    4 Exploring and Evaluating Design Principles
    5 Discussion, Future Research & Summary
  • 14. Justificatory Knowledge
    The tool-supported task elicitation process can be seen as a series of advice-giving and advice-taking tasks (Bonaccio and Dalal 2006).
    - An increase in the advisor's advice accuracy has been found to increase the decision accuracy of the advice-taker.
    - Productivity improvement will only occur if the quality of approved requirements (the decision that has been taken) improves.
    The underlying knowledge base influences the results of the advice-giving process (Casamayor et al. 2010):
    - Leverage existing knowledge and enable continuous evolution of the knowledge base.
  • 15. Conceptualization
    Mapping design requirements (DR) to design principles (DP) to design features (DF):
    - DP1. Semi-automatic task elicitation
      - DR1. Increase quality of approved requirements → DF1. Pre-processing & elicitation algorithms
      - DR2. Decrease task elicitation effort → DF2. One-click element highlighting
    - DP2. Usage of imported and retrieved knowledge
      - DR3. Increase quality of underlying knowledge → DF3. Integrated knowledge base
      - DR4. Decrease knowledge creation and maintenance efforts → DF4. Supervised knowledge supplementation
  • 16. Conceptual Architecture
    [Figure: The requirements engineer works on natural language documents; a pre-processing step feeds an automatic elicitation algorithm, complemented by manual elicitation. The knowledge base stores entries consisting of category, text brick, and POS tag. Retrieved knowledge is created automatically from elicitation results; imported knowledge is created manually by a knowledge engineer.]
    A minimal look-up sketch of this idea follows below.
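    The slide shows the architecture only, not the algorithms. As a minimal, hypothetical sketch of the knowledge-base look-up (category / text brick / POS tag entries suggesting task elements), assuming a simple token match rather than REMINER's actual NLP pipeline:

```python
# Illustrative sketch only: a dictionary-backed knowledge base that suggests task
# categories for "text bricks" found in a transcript. Names and the matching rule
# are hypothetical; the talk does not specify REMINER's internal implementation.
from dataclasses import dataclass

@dataclass
class KnowledgeEntry:
    category: str    # e.g. "activity" or "data"
    text_brick: str  # the phrase stored in the knowledge base
    pos_tag: str     # coarse part-of-speech tag, e.g. "VB" or "NN"

# Imported knowledge (uploaded by a knowledge engineer) and retrieved knowledge
# (harvested from earlier manual elicitation) live in the same store.
knowledge_base = [
    KnowledgeEntry("activity", "book", "VB"),
    KnowledgeEntry("data", "reservation", "NN"),
]

def suggest(sentence: str):
    """Return (text brick, category) suggestions for one pre-processed sentence."""
    tokens = sentence.lower().split()
    return [(e.text_brick, e.category) for e in knowledge_base if e.text_brick in tokens]

print(suggest("The traveler wants to book a reservation online"))
# -> [('book', 'activity'), ('reservation', 'data')]
```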
  • 17. Artifact REMINER: Semi-Automatic Task Elicitation
    Mapping meta-requirements (MR) to design principles (DP) and design features (DF):
    - DP1. Semi-automatic task elicitation
      - MR1. Enable automatic task elicitation within natural language documents → DF1. One-click task element highlighting
      - MR2. Allow manual adaptations of automatically elicited tasks → DF2. Natural language processing capabilities
    - DP2. Usage of imported and retrieved knowledge
      - MR3. Require minimal effort to build up task knowledge → DF3. Knowledge upload capability
      - MR4. Support simple supplementation of domain-specific knowledge → DF4. Knowledge retrieval and re-use
    Online available at: http://www.reminer.com/ (Meth et al. 2012a)
  • 18. Artifact REMINER: Imported and Retrieved Knowledge
    The same MR/DP/DF mapping as on the previous slide, here highlighting DP2: knowledge can be uploaded into the tool (imported knowledge) and retrieved & re-used from earlier elicitation work (retrieved knowledge), as illustrated below.
    [Screenshot: REMINER's knowledge upload and knowledge retrieval & re-use views]
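    Continuing the hypothetical sketch from slide 16, retrieved knowledge can be illustrated as the knowledge base growing with each manual elicitation, so later documents benefit from earlier work (DP2). This shows the principle, not REMINER's actual code:

```python
# Builds on the knowledge_base / KnowledgeEntry / suggest definitions sketched above.
def confirm_manual_elicitation(text_brick: str, category: str, pos_tag: str):
    """Add a manually elicited element to the knowledge base if it is not known yet."""
    if not any(e.text_brick == text_brick for e in knowledge_base):
        knowledge_base.append(KnowledgeEntry(category, text_brick, pos_tag))

# The analyst marks "cancel" as an activity in one transcript ...
confirm_manual_elicitation("cancel", "activity", "VB")

# ... and the next transcript already gets it suggested automatically.
print(suggest("I sometimes cancel my reservation on the same day"))
# -> [('reservation', 'data'), ('cancel', 'activity')]
```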
  • 19. Evaluation Methodology
    Controlled within-subject experiment to rigorously test the effect of the two design principles (DP1, DP2) on task elicitation productivity.
    Experimental task: task elicitation from interview transcripts
    - Task domain: travel management
    - Transcripts of similar length, readability, and distribution of task elements
    Sample size calculation:
    - Calculated with G*Power 3 (Faul et al. 2007): at least 35 participants are needed (f = 0.25, 0.05 significance level); a rough power-calculation sketch follows below.
    Participants:
    - Student sample (lab), N = 40: 8 female / 32 male; avg. age 25.4 (SD = 2.07)
    - Practitioner sample (field), N = 5: 2 female / 3 male; avg. age 34.8 (SD = 3.56)
    (Meth et al. 2012b)
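    The slide only reports the G*Power result. As a rough, hedged cross-check in Python, one can run a one-way ANOVA power analysis with the same effect size; note that this between-subjects approximation is more conservative than G*Power's repeated-measures procedure, which is why it yields a larger sample size than the 35 participants reported:

```python
# NOT the original G*Power 3 calculation. Assumptions: one-way ANOVA approximation
# with three conditions, effect size f = 0.25, alpha = 0.05, target power = 0.80
# (the slide does not state the power level; 0.80 is a common default).
from statsmodels.stats.power import FTestAnovaPower

n = FTestAnovaPower().solve_power(effect_size=0.25, alpha=0.05, power=0.80, k_groups=3)
print(f"Required total sample size (between-subjects approximation): {n:.0f}")
# The within-subject design used in the study needs far fewer participants,
# because each participant serves as their own control.
```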
  • 20. Evaluation Model
    Independent variable: TES configuration (1, 2, 3). Dependent variable: task elicitation productivity in a fixed time period, measured by recall and precision (computed as sketched below).
    - H1: In a fixed time period, TES configuration (2) results in higher recall than TES configuration (1).
    - H2: In a fixed time period, TES configuration (3) results in higher recall than TES configuration (2).
    - H3: In a fixed time period, TES configurations (1), (2), and (3) do NOT result in significantly different precision.
    (Meth et al. 2012b)
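    A minimal sketch of how recall and precision are obtained in this kind of evaluation: elicited task elements are compared against a gold standard. The plain-string representation of task elements is a simplification of the actual coding scheme:

```python
def recall_precision(elicited: set, gold: set):
    """Recall and precision of elicited task elements against a gold standard."""
    true_positives = len(elicited & gold)
    recall = true_positives / len(gold) if gold else 0.0
    precision = true_positives / len(elicited) if elicited else 0.0
    return recall, precision

# Hypothetical example from the travel-management domain:
gold = {"book ticket", "cancel reservation", "submit expense report", "approve trip"}
elicited = {"book ticket", "cancel reservation", "print itinerary"}

r, p = recall_precision(elicited, gold)
print(f"recall = {r:.2f}, precision = {p:.2f}")  # recall = 0.50, precision = 0.67
```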
  • 21. Experimental Procedure
    1. Introduction
    2. Pre-task questionnaire: demographic information, task elicitation experience
    3. Training & practice: transcripts about a "train reservation application"
    4. Experimental task: transcripts about a "car sharing application"; performed three times, once per TES configuration, with the order counterbalanced (a counterbalancing sketch follows below)
    5. Post-task questionnaire: task elicitation knowledge, motivation
    Overall duration: 70 minutes
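    The talk states that the configuration order was counterbalanced but not which scheme was used. One common choice, shown as an illustration only, is to rotate participants through all possible orders:

```python
# Illustrative counterbalancing of the three TES configurations across participants.
from itertools import permutations

configurations = [
    "(1) manual",
    "(2) semi-automatic + imported knowledge",
    "(3) semi-automatic + imported & retrieved knowledge",
]
orders = list(permutations(configurations))  # all 6 possible presentation orders

def assign_order(participant_id: int):
    """Rotate participants through the six orders so each order occurs equally often."""
    return orders[participant_id % len(orders)]

for pid in range(6):
    print(pid, assign_order(pid))
```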
  • 22. Data Analysis: Descriptive Results
    Recall and precision for the different TES configurations, reported as mean (SD):
    Lab experiment (student participants, N = 40)
    - Recall: (1) manual 50.7% (12.0%); (2) semi-automatic with imported knowledge 69.8% (9.8%); (3) semi-automatic with imported and retrieved knowledge 79.5% (8.0%)
    - Precision: (1) 71.0% (8.5%); (2) 72.0% (6.7%); (3) 73.2% (6.5%)
    Field experiment (practitioner participants, N = 5)
    - Recall: (1) 37.6% (12.9%); (2) 68.6% (6.0%); (3) 77.8% (3.9%)
    - Precision: (1) 70.1% (14.5%); (2) 72.7% (3.5%); (3) 68.5% (5.3%)
    Data analysis method:
    - Internal reliability, normality, and homogeneity of variance checked
    - RMANCOVA: "task elicitation knowledge" and "motivation" are not covariates
    - Univariate RMANOVA for hypothesis testing
  • 23. Data Analysis: Hypotheses Testing Results
    RMANOVA results for recall and precision:
    - Recall: TES configuration, DF = 2, MS = 0.861, F = 129.76, p < .001, η² = .77, Cohen's f = 1.82 (error: DF = 78, MS = 0.007)
    - Precision: TES configuration, DF = 2, MS = 0.005, F = 1.36, p = .263, η² = .03, Cohen's f = 0.19 (error: DF = 78, MS = 0.004) → H3 supported
    Pairwise comparisons for recall (Bonferroni corrections applied for multiple comparisons):
    - TES config. (2) vs. (1): mean difference 19.2%, p < .001, 95% CI [14.4%, 23.9%] → H1 supported
    - TES config. (3) vs. (2): mean difference 9.7%, p < .001, 95% CI [5.8%, 13.6%] → H2 supported
    External validity evaluation: the practitioner sample does not demonstrate a different behavioral pattern on recall and precision (cf. Huberty & Morris 1989). An analysis sketch on simulated data follows below.
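    The following is a sketch of the reported analysis pipeline on synthetic data: a one-way repeated-measures ANOVA on recall followed by Bonferroni-corrected paired comparisons. The data are simulated around the slide's lab means; only the procedure mirrors the talk, not its numbers:

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
n = 40
means = {"config1": 0.51, "config2": 0.70, "config3": 0.80}  # lab recall means (approx.)

rows = []
for pid in range(n):
    for config, mu in means.items():
        rows.append({"participant": pid, "config": config,
                     "recall": rng.normal(mu, 0.10)})
df = pd.DataFrame(rows)

# Repeated-measures ANOVA: does recall differ across TES configurations?
print(AnovaRM(df, depvar="recall", subject="participant", within=["config"]).fit())

# Pairwise paired t-tests with Bonferroni correction (two planned comparisons).
for a, b in [("config2", "config1"), ("config3", "config2")]:
    x = df[df.config == a].sort_values("participant")["recall"].to_numpy()
    y = df[df.config == b].sort_values("participant")["recall"].to_numpy()
    t, p = stats.ttest_rel(x, y)
    print(f"{a} vs {b}: t = {t:.2f}, Bonferroni-corrected p = {min(p * 2, 1.0):.4f}")
```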
  • 24. Agenda
    1 Introduction and Motivation
    2 Related Work
    3 Methodology
    4 Exploring and Evaluating Design Principles
    5 Discussion, Future Work & Summary
  • 25. Discussion
    Design principles DP1 and DP2 impact recall:
    - The suggestion mechanism based on imported knowledge leads to a ~20% recall increase: participants trust the recommendations and raise recall further through manual elicitation of additional tasks in the remaining time.
    - Dynamically retrieved knowledge leads to an additional ~10% recall increase: continuous contribution of additional knowledge through ongoing manual elicitation.
    Limitations:
    - Limited complexity of the task domain and time-constrained evaluation approach.
    - Laboratory sessions were conducted with master-level IS students; only a small-scale experiment was carried out with experts.
  • 26. Future Research
    The presented work contributes to the design theory body of knowledge for task elicitation in the analysis phase.
    An interdisciplinary perspective is promising; research on task elicitation needs to be embedded into end-to-end development management, process models & tools, and concepts across the analysis and engineering phases.
    http://www.usability-in-germany.de/
  • 27. Example: From Task Elicitation to Interaction Flows (Meth et al. 2012a)
  • 28. Summary
    1. Design principles of an advanced task elicitation system, developed following a design science research approach, have been presented.
    2. A rigorous experimental evaluation has shown that semi-automatic, knowledge-based elicitation has a positive impact on elicitation productivity.
    3. Contribution: the design theory body of knowledge for task elicitation systems has been expanded. Software vendors can leverage the results to provide advanced tool-based elicitation support.
  • 29. Thank you for your attention! Q&A
    Prof. Dr. Alexander Mädche, +49 621 181 3606, maedche@es.uni-mannheim.de
    Chair of Information Systems IV, Business School and Institute for Enterprise Systems, University of Mannheim
    http://eris.bwl.uni-mannheim.de | http://ines.uni-mannheim.de
  • 30. References
    Neill, C. J., and Laplante, P. A. 2003. "Requirements Engineering: The State of the Practice," IEEE Software (20:6), pp. 40-45.
    Mich, L., Franch, M., and Novi Inverardi, P. L. 2004. "Market research for requirements analysis using linguistic tools," Requirements Engineering (9:1), pp. 40-56.
    Meth, H., Maedche, A., and Einoeder, M. 2012a. "Exploring design principles of task elicitation systems for unrestricted natural language documents," in Proceedings of the 4th ACM SIGCHI Symposium on Engineering Interactive Computing Systems (EICS '12), New York: ACM Press, pp. 205-210.
    Meth, H., Li, Y., Maedche, A., and Mueller, B. 2012b. "Advancing Task Elicitation Systems - An Experimental Evaluation of Design Principles," in ICIS 2012 Proceedings.
    Jones, C. 2008. Applied Software Measurement. McGraw-Hill.
    Taylor, A. 2000. "IT projects: sink or swim," The Computer Bulletin (42:1), pp. 24-26.
    Standish Group Report 2009, http://luuduong.com/blog/archive/2009/03/04/applying-the-quot8020-rulequot-with-the-standish-groups-software-usage.aspx
    Bonaccio, S., and Dalal, R. S. 2006. "Advice taking and decision-making: An integrative literature review, and implications for the organizational sciences," Organizational Behavior and Human Decision Processes (101:2), pp. 127-151.
    Hevner, A. R., March, S. T., Park, J., and Ram, S. 2004. "Design Science in Information Systems Research," MIS Quarterly (28:1), pp. 75-105.
  • 31. References (cont'd)
    March, S. T., and Smith, G. F. 1995. "Design and natural science research on information technology," Decision Support Systems (15:4), pp. 251-266.
    Lemaigre, C., García, J. G., and Vanderdonckt, J. 2008. "Interface Model Elicitation from Textual Scenarios," in Proceedings of the Human-Computer Interaction Symposium, 272, pp. 53-66.
    Kuechler, B., and Vaishnavi, V. 2008. "On theory development in design science research: anatomy of a research project," European Journal of Information Systems (17:5), pp. 489-504.
    Ratchev, S. M., Urwin, E., Muller, D., Pawar, K. S., and Moulek, I. 2003. "Knowledge based requirement engineering for one-of-a-kind complex systems," Knowledge-Based Systems (16:1), pp. 1-5.
    Paterno, F. 2002. "Task Models in Interactive Software Systems," in Handbook of Software Engineering and Knowledge Engineering, Vol. 1: Fundamentals, S. K. Chang (ed.), World Scientific, pp. 1-19.
    Tam, R. C.-M., Maulsby, D., and Puerta, A. R. 1998. "U-TEL: A Tool for Eliciting User Task Models from Domain Experts," in Proceedings of the 3rd International Conference on Intelligent User Interfaces, pp. 77-80.
    Faul, F., Erdfelder, E., Lang, A.-G., and Buchner, A. 2007. "G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences," Behavior Research Methods (39:2), pp. 175-191.
  • 32. References (cont'd)
    Gacitua, R., Sawyer, P., and Gervasi, V. 2011. "Relevance-based abstraction identification: technique and evaluation," Requirements Engineering (16:3), pp. 251-265.
    Goldin, L., and Berry, D. M. 1997. "AbstFinder, A Prototype Natural Language Text Abstraction Finder for Use in Requirements Elicitation," Automated Software Engineering (4:4), pp. 375-412.
    Kof, L. 2004. "Natural Language Processing for Requirements Engineering: Applicability to Large Requirements Documents," in Proceedings of the 19th International Conference on Automated Software Engineering.
    Rayson, P., Garside, R., and Sawyer, P. 2000. "Assisting requirements engineering with semantic document analysis," in Proceedings of RIAO, pp. 1363-1371.
    Cleland-Huang, J., Settimi, R., Zou, X., and Solc, P. 2007. "Automated classification of non-functional requirements," Requirements Engineering (12:2), pp. 103-120.
    Casamayor, A., Godoy, D., and Campo, M. 2010. "Identification of non-functional requirements in textual specifications: A semi-supervised learning approach," Information and Software Technology (52:4), pp. 436-445.
    Kiyavitskaya, N., and Zannone, N. 2008. "Requirements model generation to support requirements elicitation: the Secure Tropos experience," Automated Software Engineering (15:2), pp. 149-173.
    Ambriola, V., and Gervasi, V. 2006. "On the Systematic Analysis of Natural Language Requirements with CIRCE," Automated Software Engineering (13:1), pp. 107-167.
    Huberty, C. J., and Morris, J. D. 1989. "Multivariate analysis versus multiple univariate analyses," Psychological Bulletin (105:2), pp. 302-308.
