West Virginia University
Modelling Intelligence Lab
http://unbox.org/wisp/tags/keys
Optimizing Requirements Decisions with KEYS - PROMISE 2008

Optimizing Requirements Decisions with KEYS

  1. Optimizing Requirements Decisions With KEYS
     Omid Jalali (WVU), Tim Menzies (WVU), Martin Feather (JPL); with help from Greg Gay (WVU)
     PROMISE 2008, May 10, 2008 (for more info: tim@menzies.us)
     Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise, does not constitute or imply its endorsement by the United States Government.
  2. Introduction
     • Prior PROMISE papers were data-intensive; this paper is model- and algorithm-intensive.
     • Search-based software engineering: AI "design as search"; a rich field for repeatable, refutable, improvable experimentation.
     • Vast improvement in our ability to optimize JPL requirements models: 50,000 times faster; can (almost) now do it in real time with the experts' dialogue, modulo incremental model compilation.
     • New algorithm, "KEYS": beats standard methods (simulated annealing) and state-of-the-art methods (MaxFunWalk). Feel free to roll your own algorithm ("Luke, use the keys").
     • Six "sparks" proposed here, all based on existing on-line material.
  3. "The Strangest Thing About Software" (Menzies '07, IEEE Computer, Jan)
     • Empirical results: many models contain "keys", a small number of variables that set the rest.
     • Theoretical results: this empirical result is actually the expected case.
     • So we can build very large models, and control them, provided we can find and control the keys.
     • Keys are frequently used (by definition), so you don't need to hunt for them; they'll find you: look for variables whose ranges select very different outputs.
     • SPARK1: are keys in many models?
  4. Find KEYS with BORE (best-or-rest sampling)
     • Input: settings a,b,c,… to choices x,y,z…; oracle(x=a, y=b, z=c, …) → score; N = 100 (say). Supports partial solutions.
     • Output: keys, e.g. {x=a, y=b, z=c, …}, sorted by impact on score. Solutions are not brittle.

        keys = {}
        while |keys| < |Choices| do
          era++
          for i = 1 to N
            inputs[i] = keys + random guesses for the other choices
            scores[i] = oracle(inputs[i])
          scores = sort(scores); median = scores[N/2]
          print era, median, (scores[N*3/4] - median)
          divide inputs into "best" (top 10% of scores) and "rest"
          for each setting, with frequencies (b, r) in (best, rest):
            rank[setting] = b²/(b+r)
          keys = keys ∪ rank.sort.first.setting
        done
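The BORE loop above can be sketched in Python. This is a minimal illustration, not the paper's implementation (which was compiled to C); the `bore` function name and the shapes of `choices` and `oracle` are assumptions made for this sketch.

```python
import random

def bore(choices, oracle, n=100, best_frac=0.1, rng=None):
    """BORE sketch: each era, sample n inputs (fixed keys plus random
    guesses), split them into 'best' and 'rest' by score, rank each
    unfixed setting by b^2/(b+r), and greedily fix the top setting."""
    rng = rng or random.Random(0)
    keys = {}
    while len(keys) < len(choices):
        inputs, scores = [], []
        for _ in range(n):
            guess = {c: rng.choice(vs) for c, vs in choices.items()}
            guess.update(keys)            # fixed keys override the guesses
            inputs.append(guess)
            scores.append(oracle(guess))
        order = sorted(range(n), key=lambda i: scores[i], reverse=True)
        best_ids = set(order[:max(1, int(n * best_frac))])
        freq = {}                         # (choice, value) -> [b, r]
        for i, guess in enumerate(inputs):
            for c, v in guess.items():
                if c in keys:
                    continue
                br = freq.setdefault((c, v), [0, 0])
                br[0 if i in best_ids else 1] += 1
        # rank = b^2/(b+r); fix the top-ranked setting (no un-do)
        (c, v), _ = max(freq.items(),
                        key=lambda kv: kv[1][0] ** 2 / (kv[1][0] + kv[1][1]))
        keys[c] = v
    return keys
```

For example, with an oracle that just sums two binary choices, BORE quickly fixes both to 1.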
  5. About DDP (the case study we will use to assess KEYS)
  6. DDP: JPL requirements models
     • Mission concept meetings: several multi-hour brainstorming sessions to design deep-space missions.
     • Staffed by 10 to 20 of NASA's top experts; limited time to discuss complex issues; produces a wide range of options.
     • Model elements: goals; risks (which damage goals); mitigations (which reduce risks, and cost $$$).
  7. RE'02: Feather & Menzies
     • TAR2 = treatment learner: weighted-class, contrast-set, association-rule learner.
     • Assumption of minimality; handles very large dimensionality.
     • At JPL (Martin Feather): found the best of 99 Boolean attributes, i.e. 2^99 ≈ 10^30 options.
     • TAR2 vs. SA (simulated annealing baseline): results nearly the same; TAR2 had faster, earlier mean convergence; SA used 100% of the variables, TAR2 used 33%.
     • Runtime = 40 minutes.
  8. 40 minutes: too slow
     • Extrapolating the size of JPL requirements models: worse still for O(2^n) runtimes.
     • Victims of our success: the more we can automate, the more the users want to:
       – re-run all prior designs
       – re-run all variants of the current design
       – re-run assignments with different maximum budgets
       – do all of the above, while keeping up with a fast-paced dialogue
  9. From 40 minutes to 15 seconds (160 times faster)
     • Knowledge compilation (to "C"): pre-compute and cache common tasks; no more Visual Basic.
     • Search engines and model can run in one process and communicate without intermediary files.
     • Scores normalized via x' = (x − min(x)) / (max(x) − min(x)).
     • SPARK2: optimizing incremental knowledge compilation.
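The formula on this slide is min-max normalization, x' = (x − min(x)) / (max(x) − min(x)), which maps raw scores onto [0, 1]. A minimal Python sketch (the `normalize` helper is hypothetical, with an added guard for the all-equal case that the slide does not discuss):

```python
def normalize(xs):
    """Min-max normalization: map each x onto [0, 1]."""
    lo, hi = min(xs), max(xs)
    if hi == lo:
        return [0.0] * len(xs)            # degenerate: all values equal
    return [(x - lo) / (hi - lo) for x in xs]
```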
  10. Search algorithms (which we will use to comparatively assess KEYS)
  11. A generic search algorithm
      • Input: settings a,b,c,… to choices x,y,z…; oracle(x=a, y=b, z=c, …) → score.
      • Output: the best setting found (output).

        while MaxTries-- do
          bad = 0
          reset   /* to initial conditions, or a random choice */
          while MaxChanges-- do
            score = oracle(settings)
            if score > best then best = score; output = settings
            if score < notEnough then bad++
            if bad > tooBad then goto BREAK
            if goal && (score − goal)/goal < ε then return settings
            if rand() < p
              then settings = guess   /* random change, perhaps biased */
              else settings = local search, D deep, for the N next-best settings
            update biases
          done
          BREAK:
        done
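The generic loop above can be sketched in Python. This is an illustrative sketch only: the `init`, `oracle`, and `neighbor` callables and all parameter defaults are assumptions, and the slide's algorithm-specific "update biases" step is omitted.

```python
import random

def generic_search(init, oracle, neighbor,
                   max_tries=10, max_changes=100, p=0.5,
                   not_enough=float("-inf"), too_bad=5,
                   goal=None, eps=0.01, seed=0):
    """Generic search: restart up to max_tries times; on each try,
    alternate random jumps (probability p) with local moves, bailing
    out of a try after too many not-good-enough scores."""
    rng = random.Random(seed)
    best_score, output = float("-inf"), None
    for _ in range(max_tries):
        settings = init(rng)                  # reset to a random start
        bad = 0
        for _ in range(max_changes):
            score = oracle(settings)
            if score > best_score:
                best_score, output = score, settings
            if score < not_enough:
                bad += 1
            if bad > too_bad:
                break                         # this try looks hopeless
            if goal is not None and abs(score - goal) / abs(goal) < eps:
                return settings               # close enough: stop early
            if rng.random() < p:
                settings = init(rng)          # random change
            else:
                settings = neighbor(settings, rng)  # local move
    return output
```

For example, maximizing f(x) = −(x − 3)² over the integers finds the optimum x = 3.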
  12. Some terminology: State, Path, Random, Greedy
      • (P) Path search: fills in settings one at a time.
      • (S) State search: fills in the entire settings array.
      • (R) Random search: p > 0 uses stochastic guessing; multiple runs, maybe multiple answers.
      • (G) Greedy search: MaxTries = D = tooBad = 1; early termination; don't look ahead very deeply.
  14. Simulated annealing (Kirkpatrick et al. '83)
      • Simulated annealing is an (RS) instance of the generic algorithm:
        – MaxTries = 1 (no retries)
        – p = 1 (i.e. no local search)
        – no biasing
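As a concrete instance, here is a minimal simulated-annealing sketch in Python. The linear cooling schedule, parameter names, and callables are illustrative assumptions, not the implementation the paper benchmarked; the defining feature shown is that worse moves are accepted with a probability that shrinks as the temperature cools.

```python
import math
import random

def simulated_anneal(init, oracle, neighbor, kmax=5000, seed=0):
    """SA sketch: one try, no bias; accept a worse neighbor with
    probability exp(delta / t), where t cools linearly to zero."""
    rng = random.Random(seed)
    s = init(rng)
    e = oracle(s)
    best_s, best_e = s, e
    for k in range(kmax):
        t = 1.0 - k / kmax                 # linear cooling schedule
        s2 = neighbor(s, rng)
        e2 = oracle(s2)
        # always accept improvements; sometimes accept worse moves
        if e2 > e or rng.random() < math.exp((e2 - e) / max(t, 1e-9)):
            s, e = s2, e2
        if e > best_e:
            best_s, best_e = s, e
    return best_s
```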
  15. Astar (Hart et al. '68)
      • Astar is a (PS) search:
        – local search, with D = N = 1
        – scoring = g(x) + h(x), where h(x) is a guess of a solution's value and g(x) is the cost to get here (e.g. the number of decisions made)
        – tightly controlled bias: an OPEN list of available options; on selection, an option moves from OPEN to CLOSED, never to be used again
  16. MaxWalkSat (Kautz et al. '96)
      • MaxWalkSat is an (rS) search:
        – p = 0.5, D = N = 1
        – no biasing
        – score computed from a weighted sum of the satisfied CNF clauses
  17. MaxFunWalk (Gay, 2008)
      • MaxFunWalk is an (rS) search: like MaxWalkSat, but the score is computed from the JPL requirements models.
  18. Tabu search (Glover '89)
      • Tabu search is a (PS) search: bias new guesses away from old ones.
      • Different to Astar: the tabu list logs even the unsuccessful explorations.
  19. Treatment learning (Menzies et al. '03)
      • Treatment learning is a (PS) search:
        – P = D = N = 1
        – MaxChanges much smaller than |settings|
        – bias = the "lift" heuristic
        – returns the top N best settings
  20. KEYS (Jalali et al. '08)
      • KEYS is a (PRG) search:
        – MaxTries = 1 (no retries); MaxChanges = |settings|
        – each guess sets one more choice (no un-do)
        – bias = BORE
      • SPARK3: meta-search: mix and match the above.
  21. Status in the literature
      • Simulated annealing: the standard search-based SE tool.
      • Astar: the standard search used in gaming.
      • MaxWalkSat: state of the art in the AI literature.
      • MaxFunWalk: new.
      • Treatment learning: how we used to do it (RE'02: Menzies & Feather).
      • KEYS: new.
      • SPARK4: try other search methods, e.g. LDS, beam search, DFID, …
  22. Results: 1000 runs
      [Figure: goals reached vs. ∑ $mitigations (goal: maximize goals, minimize cost), plus average runtimes in seconds, for model1.c … model5.c; model1.c and model3.c are very small.]
      • Goals/cost (less is worse): SA < MFW < astar < KEYS.
      • Runtimes (less is best): astar < KEYS < MFW << SA.
      • 40 mins / 0.048 secs = 50,000 times faster.
      • SPARK5: speed up via low-level code optimizations?
  23. Brittleness / variance results
      [Figure: median/spread plots for model2.c, model4.c, model5.c.]
      • One advantage of KEYS over Astar: it reports partial decisions, and the median/spread of those decisions; usually the spread is very, very small.
      • This shows how brittle the proposed solution is, and allows business managers to select partial, good-enough solutions.
      • SPARK6: for any prior PROMISE results, explore variance as well as median behavior.
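The median/spread summary reported here matches the print line in the BORE pseudocode: the median score plus the gap between the 75th and 50th percentiles. A minimal sketch (the `median_spread` helper is hypothetical, using simple index-based percentiles rather than interpolation):

```python
def median_spread(scores):
    """Return (median, spread), where spread is the 75th percentile
    minus the median, as in the BORE pseudocode's print line."""
    xs = sorted(scores)
    n = len(xs)
    median = xs[n // 2]
    spread = xs[(n * 3) // 4] - median
    return median, spread
```

A small spread across sampled solutions is what the slide calls a non-brittle result.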
  24. Conclusions
      • Prior PROMISE papers were data-intensive; this paper is model- and algorithm-intensive.
      • Search-based software engineering: AI "design as search"; a rich field for repeatable, refutable, improvable experimentation.
      • Vast improvement in our ability to optimize JPL requirements models: 50,000 times faster; can (almost) now do it in real time with the experts' dialogue, modulo incremental model compilation (note: yet to be tested in a live project setting).
      • New algorithm, "KEYS": beats standard methods (simulated annealing) and state-of-the-art methods (MaxFunWalk). Feel free to roll your own algorithm ("Luke, use the keys").
      • Six "sparks" proposed here, all based on existing on-line material.
  25. Questions? Comments?
      To reproduce this experiment (under Linux), write the following to a file and run "bash file":

        #!/bin/bash
        mkdir ddp
        cd ddp
        svn co http://unbox.org/wisp/tags/ddpExperiment
        svn co http://unbox.org/wisp/tags/keys
        svn co http://unbox.org/wisp/tags/astar
