West Virginia University
                                                               Modelling Intelligence Lab
                                                             http://unbox.org/wisp/tags/keys




  Optimizing Requirements
   Decisions With KEYS

      Omid Jalali1 Tim Menzies1 Martin Feather2
              (with help from Greg Gay1)

                     1WVU     2JPL



                    May 10, 2008
             (for more info: tim@menzies.us)


Promise 2008

Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise, does not constitute or imply its endorsement by the United States Government.




                                 Introduction
 Prior PROMISE papers were data-intensive
    – This paper is model- and algorithm-intensive

 Search-based software engineering
    – AI design-as-search
    – Rich field for repeatable, refutable, improvable experimentation

 Vast improvement in our ability to optimize JPL requirements models
    – 50,000 times faster
    – Can (almost) now do it in real time during the experts’ dialogue
        • Modulo incremental model compilation

 New algorithm: “KEYS”
    – Beats standard methods (simulated annealing)
    – Beats state-of-the-art methods (MaxFunWalk)
    – Feel free to roll your own algorithm
        • Luke, use the “keys”

 Six “sparks” proposed here: all based on existing on-line material






          “The Strangest Thing About Software”
 Menzies ‘07, IEEE Computer (Jan)
    – Empirical results:
        • Many models contain “keys”
        • A small number of variables that set the rest
    – Theoretical results:
        • This empirical result is actually the expected case

 So we can build very large models
    – And control them
    – Provided we can find and control the keys.

 Keys are frequently used (by definition)
    – So you don’t need to hunt for them; they’ll find you
    – Find variables whose ranges select from very different outputs
                                                                              SPARK1: are keys in many models?




         Find KEYS with BORE (best or rest sampling)
   Input:
     – settings a,b,c,… to choices x,y,z…
     – oracle(x=a, y=b, z=c, …) → score                                            Supports partial
     – N = 100 (say)                                                                 solutions
   Output: keys (e.g.) {x=a, y=b, z=c, …} sorted by impact on score

     keys = {}
     while ( |keys| < |Choices| ) do
         era++
         for i = 1 to N
             Inputs[i] = keys + random guesses for the other Choices
             scores[i] = oracle(Inputs[i])
         done

         scores = sort(scores); median = scores[N/2]
         print era, median, ( scores[N*3/4] - median )

         divide Inputs into “best” (top 10% of scores) and “rest”
         ∀ settings with frequency (b, r) in (best, rest):
             rank[setting] = b²/(b+r)

         keys = keys ∪ rank.sort.first.setting                                       Solutions not
     done                                                                               brittle
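The BORE loop above can be sketched in Python. This is a minimal sketch under stated assumptions: the `choices` dictionary, the toy `oracle`, and the 10% best cut are illustrative, not the paper's code.

```python
import random
from collections import Counter

def bore(choices, oracle, n=100, top=0.10):
    """Best-or-rest sampling: greedily grow a set of key settings.

    choices: dict mapping each variable to its candidate values.
    oracle:  callable scoring one {var: value} assignment.
    """
    keys = {}
    while len(keys) < len(choices):
        # Build N inputs: the fixed keys plus random guesses for the rest
        inputs, scores = [], []
        for _ in range(n):
            guess = {var: random.choice(vals) for var, vals in choices.items()}
            guess.update(keys)
            inputs.append(guess)
            scores.append(oracle(guess))
        # Split into "best" (top 10% by score) and "rest"
        ranked = sorted(zip(scores, range(n)), reverse=True)
        best_idx = {i for _, i in ranked[:max(1, int(n * top))]}
        b, r = Counter(), Counter()
        for i, guess in enumerate(inputs):
            for var, val in guess.items():
                if var not in keys:
                    (b if i in best_idx else r)[(var, val)] += 1
        # rank[setting] = b^2 / (b + r); fix the top-ranked setting this era
        var, val = max(b, key=lambda s: b[s] ** 2 / (b[s] + r[s]))
        keys[var] = val
    return keys
```

One variable is fixed per era, so the loop runs once per choice; partial solutions are supported because `keys` is reapplied to every sampled input.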




         About DDP

(The case study we will use to
       assess KEYS)


                                DDP: JPL requirements models
 [Figure: a DDP model — goals; risks (damage goals); mitigations (reduce risks, cost $$$)]

  Mission concept meetings:
    – several multi-hour brainstorming sessions to design deep space missions
    – Staffed by 10-20 of NASA’s top experts
    – Limited time to discuss complex issues
    – Produces a wide range of options



           RE’02: Feather & Menzies
 [Figure: convergence curves, “best” vs “baseline”; runtime = 40 mins]

 • TAR2 = treatment learner
      • weighted class; contrast set; association rule learner
      • Assumption of minimality
           • Handles very large dimensionality
           • JPL: found best in 99 Boolean attributes = 10^30 options
 • At JPL, Martin Feather: TAR2 vs…
      • SA: simulated annealing
           • Results nearly the same
      • TAR2: faster early mean convergence
      • SA: used 100% of the variables
           • TAR2: used 33% of the variables




                     40 minutes: too slow
 Extrapolating the size of
  JPL requirements models:
    – Worse for O(2^n) runtimes

 Victims of our success
    – The more we can automate
       • The more the users want
    – re-run all prior designs
    – re-run all variants of the current design
    – re-run, assigning different maximum budgets
    – do all the above, while keeping up with a fast-paced dialogue




     From 40 minutes to 15 seconds (160 * faster)
 Knowledge compilation (to “C”)
    – Pre-compute and cache
      common tasks
    – No more Visual Basic
    – Search engines and model
       • Can run in one process                                          SPARK2:
                                                                        optimizing
       • Can communicate without                                       incremental
         intermediary files                                            knowledge
                                                                       compilation




                               x′ = (x − min(x)) / (max(x) − min(x))
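The rescaling shown above is plain min-max normalization; a one-function sketch (the function name is illustrative):

```python
def normalize(xs):
    """Min-max rescale a list of numbers into [0, 1]."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]
```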




    Search algorithms

   (which we will use to
comparatively assess KEYS)




                      A generic search algorithm
 Input:
    – settings a,b,c,… to choices x,y,z…
    – oracle(x=a, y=b, z=c, …) → score
 Output: best setting (output)

while MaxTries-- do
    bad = 0
    reset /* to initial conditions, or random choice */
    while MaxChanges-- do
       score = oracle(settings)

        if score > best                  then best = score, output = settings
        if score < notEnough             then bad++
        if bad > tooBad                  then goto BREAK
        if goal && (score-goal)/goal < ε then return settings

        if   rand() < p
        then settings = guess /* random change, perhaps biased */
        else settings = local search, D deep, for the N next-best settings
        fi

       update biases
    done
    BREAK:
done
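The template above can be sketched in Python. The `oracle`, `reset`, `guess`, and `local_search` callables are illustrative stand-ins for whatever model is being searched; they are assumptions, not the paper's code.

```python
import random

def generic_search(oracle, reset, guess, local_search,
                   max_tries=10, max_changes=100, p=0.5,
                   not_enough=float("-inf"), too_bad=5,
                   goal=None, eps=0.01):
    """Skeleton of the generic search: multiple tries, early termination
    on repeated bad scores, and a coin-flip each step between a random
    jump and a local-search move."""
    best, output = float("-inf"), None
    for _ in range(max_tries):
        bad = 0
        settings = reset()
        for _ in range(max_changes):
            score = oracle(settings)
            if score > best:
                best, output = score, settings
            if score < not_enough:
                bad += 1
            if bad > too_bad:
                break                              # the BREAK label above
            if goal is not None and abs(score - goal) / abs(goal) < eps:
                return settings                    # close enough to the goal
            if random.random() < p:
                settings = guess(settings)         # random change
            else:
                settings = local_search(settings)  # next-best neighbour
    return output
```

Each named algorithm on the following slides is this skeleton with particular parameter choices.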
                                Some terminology:
                           State, Path, Random, Greedy
 (P) Path search: fills in settings one at a time
 (S) State search: fills in the entire settings array
 (R) Random search: p > 0 uses stochastic guessing; multiple runs, maybe multiple answers
 (G) Greedy search: MaxTries = D = tooBad = 1; early termination, don’t look ahead very deeply

 (The generic search pseudocode repeats on this and the following slides,
  with BREAK renamed NEXT-TRY; only the annotations change.)
         Simulated annealing (Kirkpatrick et al. ’83)

 Simulated annealing (RS):
 • MaxTries = 1 (no retries)
 • p = 1 (i.e., no local search)
 • No biasing
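Instantiating the template as the slide describes (one try, pure random moves, no biasing) gives simulated annealing. The cooling schedule and acceptance rule below are the classic ones, an assumed detail the slide does not spell out.

```python
import math
import random

def simulated_annealing(oracle, neighbour, start, kmax=2000):
    """SA in the slide's terms: MaxTries = 1, no local search, no biasing.
    A cooling temperature decides whether a worse move is accepted."""
    current = best = start
    for k in range(kmax):
        t = max(1.0 - k / kmax, 1e-9)            # temperature cools toward 0
        cand = neighbour(current)
        delta = oracle(cand) - oracle(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = cand                       # accept (maybe a worse move)
        if oracle(current) > oracle(best):
            best = current
    return best
```

Early on (t near 1) worse moves are often accepted, so the search can escape local maxima; late on it behaves like hill climbing.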
                               Astar (Hart et al. ’68)

 Astar (PS):
 • p = -1, D = N = 1 (i.e., always local search)
 • Scoring = g(x) + h(x)
    • h(x): a guess of one solution’s value
    • g(x): the cost to get here, e.g., the number of decisions made
 • Tightly controlled bias:
    • OPEN list = the available options
    • On selection, an option moves from OPEN to CLOSED, never to be used again
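The OPEN/CLOSED discipline above can be sketched as a standard A* over any graph; the `neighbours` function and heuristic `h` are illustrative assumptions supplied by the caller.

```python
import heapq

def astar(start, goal, neighbours, h):
    """A* path search: expand by f(x) = g(x) + h(x); a selected option
    moves from the OPEN list to CLOSED, never to be used again."""
    open_list = [(h(start), 0, start, [start])]   # entries: (f, g, node, path)
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)                          # never revisit this option
        for nxt, cost in neighbours(node):
            if nxt not in closed:
                g2 = g + cost
                heapq.heappush(open_list, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None
```

Here g is the accumulated decision cost and h is the optimistic guess of the remaining value, matching the slide's g(x) + h(x) scoring.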
                     MaxWalkSat (Kautz et al. ’96)

 MaxWalkSat (rS):
 • p = 0.5, D = N = 1
 • No biasing
 • Score computed from the weighted sum of satisfied CNF clauses
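A sketch of MaxWalkSat with the slide's settings (p = 0.5, flip one variable at a time, score = weighted sum of satisfied CNF clauses). The clause encoding (signed 1-based literals) is an illustrative assumption.

```python
import random

def maxwalksat(n_vars, clauses, weights, max_tries=10, max_changes=1000, p=0.5):
    """MaxWalkSat sketch. Clauses are lists of signed 1-based literals:
    [1, -2] means (x1 or not x2)."""
    def sat(clause, a):
        return any((lit > 0) == a[abs(lit) - 1] for lit in clause)

    def score(a):
        return sum(w for c, w in zip(clauses, weights) if sat(c, a))

    best, best_score = None, float("-inf")
    for _ in range(max_tries):
        a = [random.random() < 0.5 for _ in range(n_vars)]
        for _ in range(max_changes):
            if score(a) > best_score:
                best, best_score = a[:], score(a)
            unsat = [c for c in clauses if not sat(c, a)]
            if not unsat:
                break                             # everything satisfied
            clause = random.choice(unsat)
            if random.random() < p:               # random-walk move
                var = abs(random.choice(clause)) - 1
            else:                                 # greedy move: best flip in the clause
                var = max((abs(lit) - 1 for lit in clause),
                          key=lambda v: score(a[:v] + [not a[v]] + a[v + 1:]))
            a[var] = not a[var]                   # flip one variable (D = N = 1)
    return best, best_score
```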
                         MaxWalkFun (Gay, 2008)

 MaxFunWalk (rS):
 • Like MaxWalkSat, but the score is computed from the JPL requirements models
                          Tabu Search (Glover ’89)

 Tabu search (PS):
 • Bias new guesses away from old ones
 • Different to Astar: the tabu list logs even the unsuccessful explorations
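The tabu idea above (bias new guesses away from old ones, logging even unsuccessful moves) can be sketched as follows; the fixed-length list, `neighbours` function, and tenure are illustrative assumptions.

```python
from collections import deque

def tabu_search(oracle, neighbours, start, iterations=100, tenure=10):
    """Tabu search sketch: move to the best non-tabu neighbour; the
    fixed-length tabu list bars recently visited settings from being
    revisited, whether or not they improved the score."""
    current = best = start
    tabu = deque([start], maxlen=tenure)
    for _ in range(iterations):
        options = [n for n in neighbours(current) if n not in tabu]
        if not options:
            break
        current = max(options, key=oracle)   # may be worse than before
        tabu.append(current)                 # logged regardless of success
        if oracle(current) > oracle(best):
            best = current
    return best
```

Unlike A*'s CLOSED list, the tabu list here forgets after `tenure` steps, so old regions can eventually be re-explored.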
         Treatment learning (Menzies et al. ’03)

 Treatment learning (PS):
 • p = D = N = 1
 • MaxChanges much smaller than |settings|
 • Bias = the lift heuristic
 • Returns the top N best settings
West Virginia University
                                                                                      Modelling Intelligence Lab
                                                                                    http://unbox.org/wisp/tags/keys


                               KEYS (Jalali et al. 08)

 Input:
    – settings a,b,c,… to choices x,y,z…
    – oracle(x=a, y=b, z=c, …) → score
 Output: best setting (output)

 while MaxTries-- do
    bad = 0
    reset /* to initial conditions, or random choice */
    while MaxChanges-- do
       score = oracle(settings)
       if score > best                  then best = score; output = settings
       if score < notEnough             then bad++
       if bad > tooBad                  then goto NEXT-TRY
       if goal && (score-goal)/goal < ε then return settings
       if rand() < p
       then settings = guess /* random change, perhaps biased */
       else settings = local search, D deep, for the N next-best settings
       fi
       update biases
    done
    NEXT-TRY
 done

 Search taxonomy:
    (P) Path search: fills in settings one at a time
    (S) State search: fills in the entire settings array
    (R) Random search: p >= 0 uses stochastic guessing; multiple runs, maybe multiple answers
    (G) Greedy search: MaxTries = D = tooBad = 1; early termination; don’t look ahead very deeply

 Known searches as settings of this template:
    – Simulated annealing (RS): MaxTries = 1 (no retries); P = 1 (i.e. local search); D = N = 1; no biasing
    – Astar (PS): scoring = g(x) + h(x); h(x) is a guess of the distance to one solution’s value;
      g(x) is the cost to get here, e.g. the number of decisions made
    – MaxWalkSat (rS): P = 0.5; D = N = 1; score computed from the weighted sum of satisfied CNF clauses
    – MaxFunWalk (rS): like MaxWalkSat, but score computed from JPL requirements models
    – Treatment learning (PS): Tabu search (bias new guesses away from old ones);
      MaxChanges much smaller than |settings|; bias = the lift heuristic; returns the top N best settings
    – KEYS (PRG): P = 0; MaxTries = 1 (no retries); MaxChanges = |settings|;
      each guess sets one more choice (no un-do); bias = BORE

 SPARK3: meta-search: mix & match the above

        Promise 2008                                                        May 1, 2008
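The generic template above can be sketched in executable form. This is a minimal illustration, not the paper's implementation: the `oracle`, `init`, `guess`, and `local_search` arguments are hypothetical stand-ins supplied by the caller, and only the control flow (retries, early termination, random-vs-local step) follows the pseudocode.

```python
import random

def search(oracle, init, guess, local_search,
           max_tries=10, max_changes=100, p=0.5,
           not_enough=0.0, too_bad=5, goal=None, eps=0.01):
    best, output = float("-inf"), None
    for _ in range(max_tries):                      # MaxTries retries
        settings = init()                           # reset to initial/random state
        bad = 0
        for _ in range(max_changes):                # MaxChanges per retry
            score = oracle(settings)
            if score > best:                        # remember the best seen so far
                best, output = score, list(settings)
            if score < not_enough:
                bad += 1
            if bad > too_bad:                       # this retry is hopeless: NEXT-TRY
                break
            if goal is not None and abs(score - goal) / goal < eps:
                return settings                     # close enough to the goal: stop
            if random.random() < p:
                settings = guess(settings)          # random change, perhaps biased
            else:
                settings = local_search(settings)   # the slide's D-deep, N-wide move
    return output
```

Setting `p = 0` makes every step a local-search step; `p = 1` makes the search purely stochastic, which is how the slide's named algorithms fall out of one loop.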
                     Status in the literature

  Simulated annealing
     – Standard search-based SE tool
  Astar
     – Standard search used in gaming
  MaxWalkSat
     – State of the art in the AI literature
  MaxFunWalk
     – New
  Treatment learning
     – How we used to do it (RE’02: Menzies & Feather)
  KEYS
     – New

  SPARK4: try other search methods: e.g. LDS, Beam, DFID, …
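Since every algorithm above is a parameterization of one search template, SPARK3's "mix & match" amounts to editing parameter tables. A hypothetical sketch: the dictionary values copy the slide annotations where given, and everything else is a placeholder.

```python
# Hypothetical parameter tables for the one-template view of these searches.
# Values stated on the slides are copied; the rest are illustrative guesses.
TEMPLATES = {
    "simulated_annealing": {"type": "RS", "max_tries": 1, "p": 1.0,
                            "D": 1, "N": 1, "bias": None},
    "maxwalksat":          {"type": "rS", "p": 0.5, "D": 1, "N": 1,
                            "score": "weighted satisfied CNF clauses"},
    "maxfunwalk":          {"type": "rS", "p": 0.5,
                            "score": "JPL requirements models"},
    "keys":                {"type": "PRG", "max_tries": 1, "p": 0.0,
                            "max_changes": "|settings|", "bias": "BORE"},
}

def meta_search(base, **overrides):
    """SPARK3 'mix & match': start from one named search, swap in pieces."""
    cfg = dict(TEMPLATES[base])   # copy, so the base table is untouched
    cfg.update(overrides)
    return cfg
```

For example, `meta_search("keys", p=0.5)` yields a KEYS variant that sometimes takes random jumps, a candidate for the SPARK4 experiments.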
                                Results: 1000 runs

 [Figures: for model1.c (very small), model2.c, model3.c (very small), model4.c,
  and model5.c: ∑ $mitigations vs. # goals reached, scored against the goal
  (max goals, min cost), plus average runtimes in seconds.]

  Goals/cost (less is worse):
     – SA < MFW < astar < KEYS

  Runtimes (less is best):
     – astar < KEYS < MFW << SA

  40 mins / 0.048 secs = 50,000 times faster

  SPARK5: speed up via low-level code optimizations?
              Brittleness / variance results

  One advantage of KEYS over ASTAR:
     – Reports partial decisions
     – And the median/spread of those decisions
     – Usually, the spread is very, very small
  Shows how brittle the proposed solution is
     – Allows business managers to select partial, good-enough solutions

 [Figures: decision median/spread plots for model2.c, model4.c, and model5.c.]

  SPARK6: for any prior PROMISE results, explore variance as well as median behavior
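The median/spread report mentioned above can be sketched as follows. This is an illustration only, not the paper's code, and the exact spread measure is an assumption (here, the inter-quartile range of the choices made across many runs).

```python
# For each decision, summarize the choices made across many runs:
# the median value, plus the inter-quartile range as a spread measure.
# A small spread suggests the decision is stable, i.e. not brittle.
from statistics import median, quantiles

def decision_report(runs):
    """runs: list of dicts, each mapping a decision name to a numeric choice."""
    report = {}
    for name in runs[0]:
        vals = sorted(r[name] for r in runs)
        q1, _, q3 = quantiles(vals, n=4)       # quartile cut points
        report[name] = {"median": median(vals), "spread": q3 - q1}
    return report
```

A manager scanning such a report could accept every decision whose spread is near zero and defer only the brittle ones, which is the "partial, good-enough solutions" idea above.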
                                  Conclusions

  Prior PROMISE papers were data-intensive
     – This paper is model- and algorithm-intensive

  Search-based software engineering
     – AI design-as-search
     – Rich field for repeatable, refutable, improvable experimentation

  Vast improvement in our ability to optimize JPL requirements models
     – 50,000 times faster
     – Can (almost) now do it in real time with the experts’ dialogue
         • Modulo incremental model compilation
     – (Note: yet to be tested in a live project setting)

  New algorithm: “KEYS”
     – Beats standard methods (simulated annealing)
     – Beats state-of-the-art methods (MaxFunWalk)
     – Feel free to roll your own algorithm
         • Luke, use the “keys”

  Six “sparks” proposed here, all based on existing on-line material
      Questions? Comments?

  To reproduce this experiment:
     0. Under LINUX
     1. Write this to a file
     2. Run “bash file”

#!/bin/bash
mkdir ddp
cd ddp
svn co http://unbox.org/wisp/tags/ddpExperiment
svn co http://unbox.org/wisp/tags/keys
svn co http://unbox.org/wisp/tags/astar

Essentials of Automations: The Art of Triggers and Actions in FME
 
Introduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - CybersecurityIntroduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - Cybersecurity
 
Best 20 SEO Techniques To Improve Website Visibility In SERP
Best 20 SEO Techniques To Improve Website Visibility In SERPBest 20 SEO Techniques To Improve Website Visibility In SERP
Best 20 SEO Techniques To Improve Website Visibility In SERP
 
GenAI Pilot Implementation in the organizations
GenAI Pilot Implementation in the organizationsGenAI Pilot Implementation in the organizations
GenAI Pilot Implementation in the organizations
 
Serial Arm Control in Real Time Presentation
Serial Arm Control in Real Time PresentationSerial Arm Control in Real Time Presentation
Serial Arm Control in Real Time Presentation
 
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
 
TrustArc Webinar - 2024 Global Privacy Survey
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc Webinar - 2024 Global Privacy Survey
TrustArc Webinar - 2024 Global Privacy Survey
 
AI 101: An Introduction to the Basics and Impact of Artificial Intelligence
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceAI 101: An Introduction to the Basics and Impact of Artificial Intelligence
AI 101: An Introduction to the Basics and Impact of Artificial Intelligence
 
“I’m still / I’m still / Chaining from the Block”
“I’m still / I’m still / Chaining from the Block”“I’m still / I’m still / Chaining from the Block”
“I’m still / I’m still / Chaining from the Block”
 
How to Get CNIC Information System with Paksim Ga.pptx
How to Get CNIC Information System with Paksim Ga.pptxHow to Get CNIC Information System with Paksim Ga.pptx
How to Get CNIC Information System with Paksim Ga.pptx
 
“Building and Scaling AI Applications with the Nx AI Manager,” a Presentation...
“Building and Scaling AI Applications with the Nx AI Manager,” a Presentation...“Building and Scaling AI Applications with the Nx AI Manager,” a Presentation...
“Building and Scaling AI Applications with the Nx AI Manager,” a Presentation...
 
UiPath Test Automation using UiPath Test Suite series, part 5
UiPath Test Automation using UiPath Test Suite series, part 5UiPath Test Automation using UiPath Test Suite series, part 5
UiPath Test Automation using UiPath Test Suite series, part 5
 
20240607 QFM018 Elixir Reading List May 2024
20240607 QFM018 Elixir Reading List May 202420240607 QFM018 Elixir Reading List May 2024
20240607 QFM018 Elixir Reading List May 2024
 
Infrastructure Challenges in Scaling RAG with Custom AI models
Infrastructure Challenges in Scaling RAG with Custom AI modelsInfrastructure Challenges in Scaling RAG with Custom AI models
Infrastructure Challenges in Scaling RAG with Custom AI models
 

Optimizing Requirements Decisions with KEYS

  • 1. West Virginia University, Modelling Intelligence Lab (http://unbox.org/wisp/tags/keys)
Optimizing Requirements Decisions With KEYS
Omid Jalali (WVU), Tim Menzies (WVU), Martin Feather (JPL), with help from Greg Gay (WVU)
PROMISE 2008, May 10, 2008 (for more info: tim@menzies.us)
Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise, does not constitute or imply its endorsement by the United States Government.
  • 2. Introduction
Prior PROMISE papers were data-intensive; this paper is model- and algorithm-intensive. Six "sparks" are proposed here, all based on existing on-line material.
Search-based software engineering:
– AI design-as-search
– a rich field for repeatable, refutable, improvable experimentation
Vast improvement in our ability to optimize JPL requirements models:
– 50,000 times faster
– can (almost) now do it in real time with the experts' dialogue, modulo incremental model compilation
New algorithm, "KEYS":
– beats standard methods (simulated annealing) and the best state-of-the-art methods (MaxFunWalk)
– feel free to roll your own algorithm: Luke, use the "keys"
  • 3. "The Strangest Thing About Software"
Menzies '07, IEEE Computer (Jan):
– Empirical results: many models contain "keys", a small number of variables that set the rest.
– Theoretical results: this empirical result is actually the expected case.
So we can build very large models, and control them, provided we can find and control the keys.
Keys are frequently used (by definition), so you don't need to hunt for them; they'll find you. Find variables whose ranges select for very different outputs.
SPARK1: are keys in many models?
  • 4. Find KEYS with BORE (best or rest sampling)
Input: settings a,b,c,… to choices x,y,z…; an oracle(x=a, y=b, z=c, …) that scores solutions (the oracle supports partial solutions); N = 100 (say).
Output: keys, e.g. {x=a, y=b, z=c, …}, sorted by impact on score.
keys = {}
while |keys| < |Choices| do
    era++
    for i = 1 to N
        Inputs[i] = keys + random guesses for the other Choices
        scores[i] = oracle(Inputs[i])
    scores = sort(scores); median = scores[N/2]
    print era, median, (scores[N*3/4] - median)
    divide inputs into "best" (top 10% of scores) and "rest"
    for each setting, with frequency b in "best" and r in "rest":
        rank[setting] = b^2 / (b + r)
    keys = keys ∪ rank.sort.first.setting
done
Solutions are not brittle.
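The BORE ranking step above can be sketched in Python. This is an illustrative reading of the slide's pseudocode, not the paper's actual implementation (which was compiled to C); the function name `bore_rank`, the dict-of-settings input format, and the 10% cut are assumptions.

```python
def bore_rank(inputs, scores, top=0.1):
    """BORE ('best or rest') ranking: split scored inputs into the top
    `top` fraction ('best') and the rest, then rank each (choice, value)
    setting by b*b/(b+r), where b and r are its frequencies in each half."""
    n = len(scores)
    order = sorted(range(n), key=lambda i: scores[i], reverse=True)
    best_idx = set(order[:max(1, int(n * top))])
    b, r = {}, {}
    for i, settings in enumerate(inputs):
        for pair in settings.items():          # pair = (choice, value)
            bucket = b if i in best_idx else r
            bucket[pair] = bucket.get(pair, 0) + 1
    # settings never seen in "best" are skipped: their rank would be zero
    rank = {p: b[p] ** 2 / (b[p] + r.get(p, 0)) for p in b}
    return sorted(rank.items(), key=lambda kv: kv[1], reverse=True)
```

The top-ranked (choice, value) pair is the next key to freeze; the b²/(b+r) score favors settings that are both frequent in the best solutions and rare in the rest.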
  • 5. About DDP (the case study we will use to assess KEYS)
  • 6. DDP: JPL requirements models
Mission concept meetings:
– several multi-hour brainstorming sessions to design deep-space missions
– staffed by 10-20 of NASA's top experts
– limited time to discuss complex issues
– produce a wide range of options
The model links goals, risks (which damage goals), and mitigations (which reduce risks, but cost $$$).
  • 7. RE'02: Feather & Menzies
TAR2 = treatment learner:
– weighted-class, contrast-set, association-rule learner
– assumption of minimality; handles very large dimensionality
– at JPL: found the best of 99 Boolean attributes = 10^30 options
At JPL, Martin Feather compared TAR2 vs. SA (simulated-annealing baseline):
– results nearly the same
– TAR2: faster, earlier mean convergence; used 33% of the variables
– SA: used 100% of the variables
Runtime = 40 mins.
  • 8. 40 minutes: too slow
Extrapolating the size of JPL requirements models: worse for O(2^n) runtimes.
Victims of our success: the more we can automate, the more the users want to
– re-run all prior designs
– re-run all variants of the current design
– re-run, assigning different maximum budgets
– do all the above, while keeping up with a fast-paced dialogue
  • 9. From 40 minutes to 15 seconds (160× faster)
Knowledge compilation (to "C"):
– pre-compute and cache common tasks
– no more Visual Basic
– search engines and model can run in one process and communicate without intermediary files
Normalization: x' = (x - min(x)) / (max(x) - min(x))
SPARK2: optimizing incremental knowledge compilation.
  • 10. Search algorithms (which we will use to comparatively assess KEYS)
  • 11. A generic search algorithm
Input: settings a,b,c,… to choices x,y,z…; oracle(x=a, y=b, z=c, …) → score.
Output: best settings (output).
while MaxTries-- do
    bad = 0
    reset /* to initial conditions, or a random choice */
    while MaxChanges-- do
        score = oracle(settings)
        if score > best then best = score, output = settings
        if score < notEnough then bad++
        if bad > tooBad then goto NEXT-TRY
        if goal && (score - goal)/goal < ε then return settings
        if rand() < p
        then settings = guess /* random change, perhaps biased */
        else settings = local search, D deep, for the N next-best settings
        fi
        update biases
    done
NEXT-TRY:
done
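The generic loop above can be sketched in Python. This is a hedged reading of the slide's pseudocode, simplified in two places: the early-termination (`notEnough`/`tooBad`) and goal tests are dropped, and "local search D deep for N next-best settings" is reduced to a depth-1 greedy move on one choice. Names and defaults are illustrative.

```python
import random

def generic_search(choices, oracle, max_tries=10, max_changes=50, p=0.5):
    """Retry max_tries times; each try makes max_changes moves, each either
    a random guess (probability p) or a depth-1 greedy move on one choice."""
    best_score, best = float('-inf'), None
    for _ in range(max_tries):
        # reset: start each try from a random full assignment
        settings = {c: random.choice(vals) for c, vals in choices.items()}
        for _ in range(max_changes):
            score = oracle(settings)
            if score > best_score:
                best_score, best = score, dict(settings)
            c = random.choice(list(choices))
            if random.random() < p:            # random change
                settings[c] = random.choice(choices[c])
            else:                              # local search, D = N = 1
                settings[c] = max(choices[c],
                                  key=lambda v: oracle({**settings, c: v}))
    return best, best_score
```

The following slides describe each algorithm as a different setting of this loop's parameters (MaxTries, p, D, N, biasing).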
  • 12. Some terminology: State, Path, Random, Greedy
(P) Path search: fills in settings one at a time.
(S) State search: fills in the entire settings array.
(R) Random search: p ≥ 0 uses stochastic guessing; multiple runs, maybe multiple answers.
(G) Greedy search: MaxTries = D = tooBad = 1; early termination; don't look ahead very deeply.
  • 14. Simulated annealing (Kirkpatrick et al. '83)
Simulated annealing (RS):
– MaxTries = 1 (no retries)
– p = 1 (i.e., no local search)
– no biasing
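A minimal simulated-annealing sketch under the slide's setting (one run, every move a stochastic guess). The linear cooling schedule and discrete-choice representation are illustrative assumptions, not taken from the paper.

```python
import math, random

def simulated_anneal(choices, oracle, kmax=1000):
    """One long run (MaxTries = 1), every move a stochastic guess (p = 1),
    accepting a worse neighbor with probability exp(delta / t) as t cools."""
    state = {c: random.choice(vals) for c, vals in choices.items()}
    score = oracle(state)
    best, best_score = dict(state), score
    for k in range(1, kmax + 1):
        t = max(1.0 - k / kmax, 1e-9)          # linear cooling schedule
        c = random.choice(list(choices))
        neighbor = {**state, c: random.choice(choices[c])}
        new = oracle(neighbor)
        if new > score or random.random() < math.exp(min(0.0, (new - score) / t)):
            state, score = neighbor, new       # accept (maybe a bad jump)
        if score > best_score:
            best, best_score = dict(state), score
    return best, best_score
```

Early on (high t), bad jumps are often accepted, letting the search escape local maxima; as t falls, the walk becomes nearly greedy.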
  • 15. Astar (Hart et al. '68)
Astar (PS):
– p = -1, D = N = 1
– scoring = g(x) + h(x), where h(x) is a guess of a solution's value and g(x) is the cost to get here (e.g., the number of decisions made)
– tightly controlled bias: an OPEN list holds the available options; on selection, an option moves from OPEN to CLOSED, never to be used again
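The OPEN/CLOSED bookkeeping described above is the textbook A* loop, sketched here over a generic graph. The `neighbors(node) -> [(next, cost), ...]` interface is an illustrative choice for the sketch.

```python
import heapq

def astar(start, goal, neighbors, h):
    """Always expand the OPEN node with the least f = g (cost so far) +
    h (heuristic guess); once expanded, a node moves from OPEN to CLOSED,
    never to be used again."""
    open_heap = [(h(start), 0, start, [start])]
    closed = set()
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path, g
        if node in closed:
            continue
        closed.add(node)                       # never expand this node again
        for nxt, cost in neighbors(node):
            if nxt not in closed:
                heapq.heappush(open_heap,
                               (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None, float('inf')
```

With an admissible h (one that never over-estimates the remaining cost), the first path popped at the goal is optimal.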
  • 16. MaxWalkSat (Kautz et al. '96)
MaxWalkSat (rS):
– p = 0.5
– no biasing
– score computed from a weighted sum of satisfied CNF clauses
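A MaxWalkSat-style sketch in Python. For brevity the score here is a plain numeric oracle rather than MaxWalkSat's weighted sum of satisfied CNF clauses, and the move selection picks a random choice rather than a variable from an unsatisfied clause; both are simplifying assumptions.

```python
import random

def maxwalksat(choices, oracle, max_tries=5, max_changes=100, p=0.5):
    """Repeated random restarts; each step, with probability p flip one
    choice at random (the 'walk'), else flip it greedily (the 'max')."""
    best, best_score = None, float('-inf')
    for _ in range(max_tries):
        settings = {c: random.choice(vals) for c, vals in choices.items()}
        for _ in range(max_changes):
            c = random.choice(list(choices))
            if random.random() < p:            # random-walk step
                settings[c] = random.choice(choices[c])
            else:                              # greedy step on one choice
                settings[c] = max(choices[c],
                                  key=lambda v: oracle({**settings, c: v}))
            score = oracle(settings)
            if score > best_score:
                best, best_score = dict(settings), score
    return best, best_score
```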
  • 17. MaxFunWalk (Gay, 2008)
MaxFunWalk (rS):
– like MaxWalkSat, but the score is computed from the JPL requirements models
  • 18. Tabu Search (Glover '89)
Tabu search (PS):
– biases new guesses away from old ones
– different to Astar: the tabu list logs even the unsuccessful explorations
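A tabu-search sketch illustrating the contrast drawn above: unlike A*'s CLOSED list, the tabu list records every move tried, successful or not, and entries expire after a fixed tenure. The steepest-ascent move rule and the `tenure` default are illustrative assumptions.

```python
import random
from collections import deque

def tabu_search(choices, oracle, steps=200, tenure=5):
    """Each step, take the best non-tabu single-choice move (even if it is
    worse than staying put), then mark that move tabu for `tenure` steps."""
    settings = {c: random.choice(vals) for c, vals in choices.items()}
    best, best_score = dict(settings), oracle(settings)
    tabu = deque(maxlen=tenure)
    for _ in range(steps):
        moves = [(c, v) for c, vals in choices.items() for v in vals
                 if v != settings[c] and (c, v) not in tabu]
        if not moves:
            break
        c, v = max(moves, key=lambda m: oracle({**settings, m[0]: m[1]}))
        settings[c] = v
        tabu.append((c, v))                    # log the move, even if worse
        score = oracle(settings)
        if score > best_score:
            best, best_score = dict(settings), score
    return best, best_score
```

Accepting worsening moves while forbidding recent ones lets the search climb out of local maxima without cycling.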
  • 19. Treatment learning (Menzies et al. '03)
Treatment learning (PS):
– p = D = N = 1
– MaxChanges much smaller than |settings|
– bias = the lift heuristic
– returns the top N best settings
  • 20. KEYS (Jalali et al. '08)
KEYS (PRG):
– p = -1; MaxTries = 1 (no retries)
– MaxChanges = |settings|
– each guess sets one more choice (no un-do)
– bias = BORE
SPARK3: meta-search: mix & match the above.
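Putting the pieces together, the KEYS loop can be sketched as: each era, sample solutions that keep all keys found so far, then use BORE's b²/(b+r) rank to freeze one more setting. This is an illustrative Python sketch under assumed defaults (100 samples per era, top 10% = "best"), not the paper's C implementation.

```python
import random

def keys(choices, oracle, samples=100, top=0.1):
    """Each era, score `samples` candidates built from the frozen keys plus
    random guesses for the still-open choices, then freeze the top-ranked
    (choice, value) pair; frozen keys are never un-done."""
    frozen = {}
    while len(frozen) < len(choices):
        inputs, scores = [], []
        for _ in range(samples):
            s = {c: random.choice(vals) for c, vals in choices.items()}
            s.update(frozen)                   # keep all keys found so far
            inputs.append(s)
            scores.append(oracle(s))
        order = sorted(range(samples), key=lambda i: scores[i], reverse=True)
        best_idx = set(order[:max(1, int(samples * top))])
        b, r = {}, {}
        for i, s in enumerate(inputs):
            for c, v in s.items():
                if c not in frozen:            # rank only the open choices
                    d = b if i in best_idx else r
                    d[(c, v)] = d.get((c, v), 0) + 1
        c, v = max(b, key=lambda p: b[p] ** 2 / (b[p] + r.get(p, 0)))
        frozen[c] = v
    return frozen, oracle(frozen)
```

Because each era freezes exactly one choice, the loop makes |settings| passes, and the per-era score distributions give the median/spread reports used in the brittleness results later.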
  • 21. Status in the literature
– Simulated annealing: standard search-based SE tool.
– Astar: standard search used in gaming.
– MaxWalkSat: state of the art in the AI literature.
– MaxFunWalk: new.
– Treatment learning: how we used to do it (RE'02: Menzies & Feather).
– KEYS: new.
SPARK4: try other search methods, e.g. LDS, Beam, DFID, …
  • 22. Results: 1000 runs (model1.c … model5.c; model1.c and model3.c are very small)
Goal: maximize goals reached, minimize ∑ $mitigations (averages, in seconds).
Goals/cost (less is worse): SA < MFW < astar < KEYS.
Runtimes (less is best): astar < KEYS < MFW << SA.
40 mins / 0.048 secs = 50,000 times faster.
SPARK5: speed up via low-level code optimizations?
  • 23. Brittleness / variance results (model2.c, model4.c, model5.c)
One advantage of KEYS over ASTAR:
– reports partial decisions, and the median/spread of those decisions
– usually, the spread is very, very small
This shows how brittle the proposed solution is, allowing business managers to select partial, good-enough solutions.
SPARK6: for any prior PROMISE results, explore variance as well as median behavior.
  • 24. Conclusions
Prior PROMISE papers were data-intensive; this paper is model- and algorithm-intensive. Six "sparks" were proposed here, all based on existing on-line material.
Search-based software engineering:
– AI design-as-search
– a rich field for repeatable, refutable, improvable experimentation
Vast improvement in our ability to optimize JPL requirements models:
– 50,000 times faster
– can (almost) now do it in real time with the experts' dialogue, modulo incremental model compilation (note: yet to be tested in a live project setting)
New algorithm, "KEYS":
– beats standard methods (simulated annealing) and the best state-of-the-art methods (MaxFunWalk)
– feel free to roll your own algorithm: Luke, use the "keys"
  • 25. Questions? Comments?
To reproduce this experiment (under Linux), write the following to a file, then run "bash file":
#!/bin/bash
mkdir ddp
cd ddp
svn co http://unbox.org/wisp/tags/ddpExperiment
svn co http://unbox.org/wisp/tags/keys
svn co http://unbox.org/wisp/tags/astar