# Finding Robust Solutions to Requirements Models

Presentation, Tsinghua University, 3/18/2010.



1. **Finding Robust Solutions to Requirements Models**
   Gregory Gay, West Virginia University (greg@greggay.com)
2. **Consider a requirements model…**
   - It contains:
     - The various goals of a project.
     - All methods for reaching those goals.
     - Risks that could compromise those goals.
     - Mitigations that remove risks.
   - A solution is a balance between cost and attainment.
   - This is a non-linear optimization problem!
3. **Understanding the Solution Space**
   - An open and pressing issue.
   - Many SE problems are over-constrained: there is no single right answer, so we offer partial solutions.
   - Robustness of solutions is key; many algorithms give brittle results.
   - It is important to offer insight into the neighborhood: what happens if I do B instead of A?
4. **The Naïve Approach**
   - Naïve approaches to understanding the neighborhood: run N times and report (a) the solutions appearing in more than N/2 cases, or (b) results within a 95% confidence interval.
   - Both are flawed: they require multiple trials!
   - Neighborhood assessment must be fast, in real time if possible.
5. **Research Goals**
   Two important concerns:
   - Is demonstrating solution robustness a time-consuming task?
   - Must solution quality be traded against solution robustness?
6. **The Defect Detection and Prevention (DDP) Model**
   - Used at NASA JPL.
   - An early-lifecycle requirements model.
   - A light-weight ontology represents:
     - Requirements: project objectives, weighted by importance.
     - Risks: events that damage attainment of requirements.
     - Mitigations: precautions that remove risks; each carries a cost value.
     - Mappings: directed, weighted edges between requirements and risks, and between risks and mitigations.
     - Part-of relations: provide structure between model components.
7. **Light-weight != Trivial**
8. **Why Use DDP?**
   Three reasons:
   - Demonstrably useful: cost savings often over $100,000, numerous design improvements seen in DDP sessions, and an overall shift in the risks of JPL projects.
   - Availability of real-world models, now and in the future.
   - DDP is representative of other requirements tools: a set of influences, expressed in a hierarchy, with relationships modeled through equations.
9. **Using DDP**
   - Input: a set of enabled mitigations.
   - Output: two values, (Cost, Attainment).
   - Those values are normalized and combined into a single score.
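The combination step can be sketched as below. The slide's actual formula is not reproduced in this transcript, so the normalization by model-wide maximums and the distance-to-ideal combination are assumptions for illustration (lower scores are better in this sketch):

```python
def score(cost, attainment, max_cost, max_attainment):
    # Normalize both values into [0, 1]; max_cost and max_attainment
    # are assumed model-wide maximums, not part of the original slides.
    c = cost / max_cost
    a = attainment / max_attainment
    # Assumed combination: Euclidean distance to the ideal point of
    # zero cost and full attainment. Lower is better.
    return (c ** 2 + (1 - a) ** 2) ** 0.5
```

Under this sketch, a configuration with no cost and full attainment scores 0, and the worst case scores √2.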
10. **Theory of KEYS**
    - Theory: a minority of variables control the majority of the search space.
    - If so, then a search that (a) finds those keys and (b) explores their ranges will rapidly plateau to stable, optimal solutions.
    - This is not new: narrows, master variables, back doors, and feature subset selection all build on the same theory.
11. **KEYS Algorithm**
    Two components: a greedy search and a Bayesian ranking method (BORE). Each round, the greedy search:
    1. Generates 100 configurations of mitigations 1…M.
    2. Scores them.
    3. Sorts the top 10% of scores into a "best" group and the bottom 90% into a "rest" group.
    4. Ranks individual mitigations using BORE.
    5. Fixes the top-ranked mitigation for all subsequent rounds.
    Stop when every mitigation has a value; return the final cost and attainment values.
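The round structure above can be sketched in Python. This is a toy illustration under assumptions (binary mitigation settings, a simple frequency-ratio stand-in for the BORE ranking), not the authors' implementation:

```python
import random

def keys(n, score, samples=100):
    # Toy sketch of the KEYS loop: sample, split into best/rest,
    # rank settings, fix the top one, repeat until all are fixed.
    # `score` maps a configuration (tuple of 0/1) to a number; higher is better.
    fixed = {}  # mitigation index -> fixed setting
    while len(fixed) < n:
        scored = []
        for _ in range(samples):
            c = tuple(fixed.get(i, random.randint(0, 1)) for i in range(n))
            scored.append((score(c), c))
        scored.sort(reverse=True)
        cut = max(1, samples // 10)
        best, rest = scored[:cut], scored[cut:]

        def rank(pair):
            # Stand-in for BORE: how much more often does this
            # (mitigation, value) pair appear in "best" than in "rest"?
            i, v = pair
            b = sum(c[i] == v for _, c in best) / len(best)
            r = sum(c[i] == v for _, c in rest) / len(rest)
            return b * b / (b + r) if b + r else 0.0

        i, v = max(((i, v) for i in range(n) if i not in fixed
                    for v in (0, 1)), key=rank)
        fixed[i] = v  # fix the top-ranked mitigation for later rounds
    final = tuple(fixed[i] for i in range(n))
    return final, score(final)
```

With a trivial score such as `sum`, the sketch converges on the all-ones configuration.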
12. **BORE Ranking Heuristic**
    - We don't have to actually search for the keys; we just keep frequency counts for the "best" and "rest" scores.
    - BORE is based on Bayes' theorem: those frequency counts give the likelihood that a mitigation setting belongs in "best".
    - To avoid low-frequency evidence, a support term is added.
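The slide's equations are not reproduced in this transcript; one commonly cited form of the BORE ranking, in which squaring the numerator acts as the support term, can be sketched as (an assumption, not a verbatim copy of the slide):

```python
def bore(freq_best, n_best, freq_rest, n_rest):
    # Likelihoods from the frequency counts kept during the search.
    b = freq_best / n_best  # like(best | evidence)
    r = freq_rest / n_rest  # like(rest | evidence)
    # Squared numerator: the support term that punishes settings
    # backed by only low-frequency evidence.
    return b * b / (b + r) if (b + r) > 0 else 0.0
```

A setting seen in every "best" configuration and no "rest" configuration ranks at 1.0; a setting seen equally often in both ranks much lower.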
13. **KEYS vs. KEYS2**
    - KEYS fixes a single top-ranked mitigation each round.
    - KEYS2 incrementally sets more: 1 in round 1, 2 in round 2, …, M in round M.
    - Slightly less tame, but much faster.
14. **Benchmarked Algorithms**
    - KEYS must be benchmarked against standard SBSE techniques: Simulated Annealing, MaxWalkSat, and A* search.
    - The chosen techniques are discrete, sequential, unconstrained algorithms.
    - Constrained searches work toward a pre-determined number of solutions; unconstrained searches adjust to their goal space.
15. **Simulated Annealing**
    A classic, yet still common, approach:
    - Choose a random starting position.
    - Look at a "neighboring" configuration.
    - If it is better, move to it.
    - If not, move based on guidance from a probability function (biased by the current temperature).
    - Over time, the temperature lowers; wild jumps stabilize into small wiggles.
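The procedure can be sketched for 0/1 mitigation vectors; the linear cooling schedule and iteration budget below are assumptions, not the benchmark's exact settings:

```python
import math
import random

def simulated_annealing(n, score, k_max=1000):
    # Toy sketch of the steps above. `score`: config -> number (higher is better).
    current = [random.randint(0, 1) for _ in range(n)]
    best = list(current)
    for k in range(1, k_max + 1):
        t = 1.0 - k / k_max  # temperature falls from ~1 toward 0
        neighbor = list(current)
        i = random.randrange(n)
        neighbor[i] = 1 - neighbor[i]  # flip one mitigation setting
        delta = score(neighbor) - score(current)
        # Better moves are always taken; worse moves are taken with a
        # probability that shrinks as the temperature drops.
        if delta > 0 or (t > 0 and random.random() < math.exp(delta / t)):
            current = neighbor
        if score(current) > score(best):
            best = list(current)
    return best, score(best)
```

Early on, the high temperature permits "wild jumps" to worse configurations; late in the run the acceptance probability collapses and only improvements survive.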
16. **MaxWalkSat**
    A hybridized local/random search:
    - Start with a random configuration.
    - Then either perform:
      - Local search (70%): move to a neighboring configuration with a better score.
      - Random search (30%): change one random mitigation setting.
    - Keep working toward a score threshold. The search is allotted a certain number of resets, which it uses if it fails to pass the threshold within a certain number of rounds.
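The hybrid can be sketched as below; the 70/30 split is from the slide, while the reset and round limits are assumed placeholders:

```python
import random

def maxwalksat(n, score, threshold, max_resets=10, max_rounds=1000):
    # Toy sketch: restart up to max_resets times, stopping as soon as
    # a configuration passes the score threshold.
    current = [random.randint(0, 1) for _ in range(n)]
    for _ in range(max_resets):
        current = [random.randint(0, 1) for _ in range(n)]  # reset
        for _ in range(max_rounds):
            if score(current) >= threshold:
                return current, score(current)
            if random.random() < 0.7:
                # Local search: move to the best single-flip neighbor
                # if it improves on the current score.
                flips = []
                for i in range(n):
                    c = list(current)
                    c[i] = 1 - c[i]
                    flips.append((score(c), c))
                s, c = max(flips)
                if s > score(current):
                    current = c
            else:
                # Random search: change one random mitigation setting.
                i = random.randrange(n)
                current[i] = 1 - current[i]
    return current, score(current)
```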
17. **A* Search**
    - A best-first path-finding heuristic.
    - Uses the distance from the origin (G) and the estimated cost to the goal (H), and moves to the neighbor that minimizes G+H.
    - After each move, it adds the previous location to a closed list to prevent backtracking.
    - The search is optimal because H never overestimates the remaining cost.
    - Stops after being stuck for 10 rounds.
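A generic A* sketch on a toy grid (not the benchmarked requirements-model search) illustrates the G+H rule, the closed list, and an admissible H; Manhattan distance never overestimates on a 4-connected grid, so the returned path is optimal:

```python
import heapq

def astar(start, goal, neighbors, h):
    # Expand the open node minimizing G + H; the closed list ensures
    # previously visited locations are never revisited.
    open_heap = [(h(start), 0, start, [start])]  # (G+H, G, node, path)
    closed = set()
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for nxt, step in neighbors(node):
            if nxt not in closed:
                g2 = g + step
                heapq.heappush(open_heap, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

# Toy 4x4 grid with unit step costs, goal at (3, 3).
def grid_neighbors(p):
    x, y = p
    return [((x + dx, y + dy), 1)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 4 and 0 <= y + dy < 4]

def manhattan(p):
    # Admissible H: never overestimates the true remaining cost.
    return abs(3 - p[0]) + abs(3 - p[1])
```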
18. **Experiment 1: Costs and Attainments**
    Using real-world models 2, 4, and 5 (models 1 and 3 are too small and were only used for debugging):
    - Run each algorithm 1000 times per model.
      - Generating many data points removes outlier problems.
      - Still a small enough number to collect results in a short time span.
    - Graph the cost and attainment values; values toward the bottom-right are better.
19. **Experiment 1 Results**
20. **Experiment 2: Runtimes**
    For each model:
    - Run each algorithm 100 times.
    - Record the runtime using the Unix "time" command.
    - Divide the total runtime by 100 to get the average.
21. **Experiment 2 Results**
22. **Decision Ordering Diagrams**
    - The design of KEYS2 automatically provides a way to explore the decision neighborhood.
    - Decision ordering diagrams: a visual format that ranks decisions from most to least important.
23. **Decision Ordering Diagrams (2)**
    These diagrams can be used to assess solution robustness in linear time by:
    - Considering the variance in performance after applying the first X decisions.
    - Comparing the results of using the first X decisions to those of X-1 or X+1.
    They are useful under three conditions: (a) the output scores are well-behaved, (b) the variance is tame, and (c) they are generated in a timely manner.
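The variance check can be sketched as follows: fix the first X ranked decisions and measure how scores vary over random settings of the remaining decisions. This is a toy sketch; fixing the top decisions to "enabled" and the choice of score function are assumptions for illustration:

```python
import random
import statistics

def neighborhood_variance(order, x, n, score, samples=100):
    # `order` ranks decision indices from most to least important.
    # The first x decisions are fixed (assumed enabled); the rest
    # are sampled at random to probe the surrounding neighborhood.
    scores = []
    for _ in range(samples):
        c = [0] * n
        for i in order[:x]:
            c[i] = 1
        for i in order[x:]:
            c[i] = random.randint(0, 1)
        scores.append(score(c))
    return statistics.mean(scores), statistics.pvariance(scores)
```

Comparing the variance at X against X-1 or X+1 then shows how much each additional decision tames the neighborhood.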
24. **Conclusions**
    - Optimization tools can study the space of requirements, risks, and mitigations.
    - Finding a balance between costs and attainment is hard!
    - Such solutions can be brittle, so we must comment on solution robustness.
    - Candidate solution: KEYS2.
25. **Conclusions (2)**
    Pre-experimental concerns:
    - An algorithm would need to trade solution quality for robustness (variance vs. score).
    - Demonstrating solution robustness is time-consuming and requires multiple procedure calls.
    KEYS2 defies both concerns:
    - It generates higher-quality solutions than the standard methods, and its results are tame and well-behaved (thus, we can generate decision ordering diagrams to assess robustness).
    - It is faster than the other techniques, and can generate decision ordering diagrams in O(N²) time.
26. **Conclusions (3)**
    Therefore, we recommend KEYS2 for the optimization of requirements models (and other SBSE problems) because it is fast, its results are well-behaved and tame, and it allows for exploration of the search space.
27. **Questions?**
    I would like to thank Dr. Zhang for the invitation and all of you for attending!
    Want to contact me later?
    - Email: greg@greggay.com
    - MSN Messenger: greg@4colorrebellion.com
    - Gtalk: momoku@gmail.com
    - More about me: http://www.greggay.com