Scalable and Cost-Effective Model-Based Software Verification and Testing

Distinguished lecture, University of California, Irvine, May 17th 2013

  1. Scalable and Cost-Effective Model-Based Software Verification and Testing
     University of Luxembourg, Interdisciplinary Centre for Security, Reliability and Trust
     Software Verification and Validation Lab (www.svv.lu)
     May 17th, 2013, University of California, Irvine
     Lionel Briand, IEEE Fellow, FNR PEARL Chair
  2. Luxembourg
     • Small country and population
     • One of the wealthiest in the world
     • Young university (2003) and Ph.D. programs (2007)
     • ICT security and reliability: a national research priority
     • Priorities implemented as interdisciplinary centres
     • International
     • Three official languages: English, French, German
  3. SnT Software Verification and Validation Lab
     • SnT centre, est. 2009: interdisciplinary, ICT security-reliability-trust
     • 180 scientists and Ph.D. candidates, 20 industry partners
     • SVV Lab: established January 2012, www.svv.lu
     • 15 scientists (research scientists, associates, and PhD candidates)
     • Industry-relevant research on system dependability: security, safety, reliability
     • Four partners: Cetrel, CTIE, Delphi, SES, …
  4. Research Paradigm
     • Research informed by practice
     • Well-defined problems in context
     • Realistic evaluation
     • Long-term industrial collaborations
  5. Acknowledgements
     • Shiva Nejati
     • Mehrdad Sabetzadeh
     • Yvan Labiche
     • Andrea Arcuri
     • Stefano Di Alesio
     • Reza Matinnejad
     • Zohaib Iqbal
     • Shaukat Ali
     • Hadi Hemmati
     • Marwa Shousha
     • …
  6. "Model-based"?
     • All engineering disciplines rely on abstraction and therefore models
     • In most cases, it is the only way to effectively automate testing or verification
     • Models have many other purposes: communication, support for requirements and design
     • There are many ways to model systems and their environment
     • In a given context, the choice is driven by the application domain, standards and practices, objectives, and skills
  7. Models in Software Engineering
     • Model: an abstract and analyzable description of software artifacts, created for a purpose
     • Examples: requirements models, architecture models, behavioural models, test models
     • Abstract: details are omitted; a partial representation, much smaller and simpler than the artifact being modeled
     • Analyzable: leads to task automation
  8. Talk Objectives
     • Overview of several years of research
     • Examples, at various levels of detail
     • Follows a research paradigm that is uncommon in software engineering research
     • Conducted in collaboration with industry partners in many application domains: automotive, energy, telecom …
     • Lessons learned regarding scalability and cost-effectiveness
  9. Research Pattern: Models and Search Heuristics
     • Three ingredients: an objective function, a search space, and a search technique (sketched below)
     • Search to optimize the objective function: complete or not, deterministic or partly random (stochastic)
     • Metaheuristics, constraint solvers
     • Scalability: only a small part of the search space is traversed
     • Model: guidance to worst-case, high-risk scenarios across the space
     • Heuristics: extensive empirical studies are required
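     To make the pattern concrete, here is a minimal sketch in Python (illustrative only: the random_candidate, mutate, and fitness arguments are placeholders for the model-derived search space and objective function, not part of the original slides):

         import random

         def stochastic_search(random_candidate, mutate, fitness, budget=1000):
             # Generic single-state metaheuristic skeleton: the model supplies
             # the candidate encoding (search space) and the objective function;
             # only a small part of the space is ever traversed.
             best = random_candidate()
             best_fitness = fitness(best)
             for _ in range(budget):
                 candidate = mutate(best)
                 candidate_fitness = fitness(candidate)
                 if candidate_fitness >= best_fitness:  # maximize, e.g., risk of a deadline miss
                     best, best_fitness = candidate, candidate_fitness
             return best, best_fitness

     The concrete instantiations in the following slides differ mainly in the candidate encoding and in the fitness function derived from the model.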
  10. Early Work: Search-Based Schedulability Analysis
      L. Briand, Y. Labiche, and M. Shousha, 2003-2006
  11. Schedulability Theory
      • Real-time scheduling theory
        – Given priorities, execution times, periods (periodic tasks), minimum inter-arrival times (aperiodic tasks), …
        – Is a group of (a)periodic tasks schedulable?
        – Theory to determine schedulability:
          • Independent periodic tasks: Rate Monotonic Algorithm (RMA)
          • Aperiodic or dependent tasks: Generalized Completion Time Theorem (GCTT)
      • GCTT assumes
        – aperiodic tasks are equivalent to periodic tasks (periods = minimum inter-arrival times)
        – aperiodic tasks are ready to start at time zero
      • Execution times are estimates
      [Figure: arrivals of task t2 on a 0-20 timeline, with a minimum inter-arrival time of 8]
  12. A Search-Based Solution
      • Goal: make no assumptions, find near deadline misses as well, and identify worst-case scenarios
      • Population-based metaheuristic: genetic algorithm
      • Automates, based on the system task architecture (UML SPT, MARTE), the derivation of arrival times for task-triggering events that maximize the chances of critical deadline misses (see the sketch below)
      [Figure: aperiodic and periodic task events + genetic algorithm = arrival times for each event over time]
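      A simplified GA sketch in Python (illustrative, not the paper's implementation: a chromosome encodes the arrival times of one aperiodic event, the window and spacing constants are made up, and the fitness function, e.g., the one on slide 15, is passed in):

          import random

          INTERVAL = 1000  # observation window in ms (assumed value)
          MIN_IA = 8       # minimum inter-arrival time of the aperiodic event (assumed)

          def random_chromosome():
              # A chromosome is a list of arrival times, spaced by at least
              # the minimum inter-arrival time.
              times, t = [], 0
              while t < INTERVAL:
                  t += MIN_IA + random.randint(0, 50)
                  times.append(t)
              return times

          def crossover(a, b):
              # One-point crossover that preserves minimum spacing.
              cut = random.randint(1, min(len(a), len(b)) - 1)
              return a[:cut] + [t for t in b if t > a[cut - 1] + MIN_IA]

          def mutate(ch):
              # Move one arrival time within the slack left by its neighbours.
              i = random.randrange(len(ch))
              lo = ch[i - 1] + MIN_IA if i > 0 else 0
              hi = ch[i + 1] - MIN_IA if i + 1 < len(ch) else INTERVAL
              if lo < hi:
                  ch[i] = random.randint(lo, hi)
              return ch

          def ga(fitness, pop_size=50, generations=100):
              pop = [random_chromosome() for _ in range(pop_size)]
              for _ in range(generations):
                  pop.sort(key=fitness, reverse=True)  # maximize delays
                  parents = pop[: pop_size // 2]
                  children = [mutate(crossover(*random.sample(parents, 2)))
                              for _ in range(pop_size - len(parents))]
                  pop = parents + children
              return max(pop, key=fitness)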
  13. Model as Input
      [Diagram: the UML/MARTE model (task architecture) provides task priorities, estimated execution times, minimum inter-arrival times, …; the GA (chromosome + fitness evaluation) produces arrival/seeding times; a scheduler (constraint solver) returns start times and pre-emptions]
  14. Objective Function
      • Focus on one target task at a time
      • Goal: guide the search towards arrival times causing the greatest delays in the executions of the target task
      • Properties:
        – Handle deadline misses
        – Consider all task executions, not just the worst-case execution
        – Reward task executions so that many good executions do not wind up overshadowing one bad execution
  15. Objective Function II
      f(Ch) = \sum_{j=1}^{k_t} 2^{(e_{j,t} - d_{j,t})}
      where:
      • t: target task
      • k_t: maximum number of executions of t
      • e_{j,t}: estimated end time of execution j of the target task, as determined by the scheduler
      • d_{j,t}: deadline of execution j of the target task
      [Plot: f = 2^{(e-d)}, an exponential in (e - d) crossing 1 at e - d = 0]
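      In code, this objective function could look as follows (a sketch; simulate_schedule is a hypothetical stand-in for running the scheduler on the task architecture and collecting (end time, deadline) pairs for each execution of the target task):

          def deadline_fitness(chromosome, target_task):
              # Sum of 2^(e - d) over all executions of the target task:
              # every execution contributes, so many good executions cannot
              # overshadow one (near-)deadline miss, which dominates the sum.
              executions = simulate_schedule(chromosome, target_task)  # hypothetical
              return sum(2 ** (e - d) for (e, d) in executions)

      Combined with the GA sketch above, one would call ga(lambda ch: deadline_fitness(ch, target_task)).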
  16. Case Study
      • Software Engineering Institute (SEI), Naval Weapons Center, and IBM's Federal Sector Division
      • Hard real-time, realistic avionics application model, similar to existing U.S. Navy and Marine aircraft
      • The eight highest-priority tasks were deemed schedulable
      • Our findings suggest three of the eight tasks produce systematic deadline misses
  17. Results
      Task                   | Number of Misses | Value of Misses
      -----------------------|------------------|--------------------------
      Weapon Release         | 0                | N/A
      Weapon Release Subtask | 0                | N/A
      Radar Tracking Filter  | 2                | 3, 9
      RWR Contact Management | 0                | N/A
      Data Bus Poll Device   | 0                | N/A
      Weapon Aiming          | 4                | 17, 16, 10, 9
      Radar Target Update    | 7                | 1, 29, 23, 2, 28, 27, 32
      Navigation Update      | 0                | N/A
  18. Conclusions
      • We devised a method to generate event seeding times for aperiodic tasks so as to identify deadline-miss scenarios based on task design information
      • Near deadline misses as well (stress testing)
      • Standard modeling notation (UML/SPT/MARTE)
      • No dedicated, additional modeling compared to what is expected when defining a task architecture
      • Scalability: GA runs lasted a few minutes on a regular PC
      • Default GA parameters, as recommended in the literature, work well
      • Large empirical studies to evaluate the approach (heuristics)
      • Similar work with concurrency analysis: deadlocks, data races, etc. (Shousha, Briand, Labiche, 2008-2012)
  19. Testing Driven by Environment Modeling
      Z. Iqbal, A. Arcuri, L. Briand, 2009-2012
  20. Context
      • Three-year project with two industry partners
        – Tomra: bottle recycling machine
        – WesternGeco: marine seismic acquisition system
      • Soft real-time systems: deadlines in the order of hundreds of milliseconds
        – Jitter of a few milliseconds acceptable
      • Automation of test case and oracle generation, environment simulation
  21. Environment Modeling and Simulation
      • Independent (black-box) testing: behavior driven by the environment → environment model
      • Models built by software engineers; no use of Matlab/Simulink
      • One model for both
        – the environment simulator
        – test cases and oracles
      • UML profile (+ limited use of MARTE)
      [Diagram: environment models feed the environment simulator, test cases, and test oracle]
  22. Domain Model
  23. Behavior Model
  24. Test Cases
      • Test cases are defined by
        – a simulation configuration
        – an environment configuration
      • Environment configuration: number of instances to be created for each component in the domain model (e.g., the number of sensors)
      • Simulator configuration: setting of non-deterministic attribute values
      • Test oracle: environment model error states
        – A successful test case is one which leads the environment into an error state
  25. Search Objectives and Heuristics
      • Bring the system state to an error state by searching for appropriate values for non-deterministic environment attributes
      • Search heuristics are based on fitness functions assessing how "close" the current state is to an error state
      • Different metaheuristics: genetic algorithm, (1+1) EA
      • Defining the fitness function based on model information was highly complex: OCL constraints, combination of many heuristics
      • An industrial case study and artificial examples showed the heuristic was effective; (1+1) EA outperformed the GA (see the sketch below)
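      A minimal (1+1) EA sketch in Python (illustrative; simulate and distance_to_error are hypothetical stand-ins for the environment simulator and the model-derived closeness heuristic, and configuration genes are assumed normalized to [0, 1]):

          import random

          def one_plus_one_ea(initial_config, budget=10000):
              # (1+1) EA: a single parent; each gene mutates with probability
              # 1/n; the child replaces the parent if it is at least as close
              # to an error state (fitness is minimized; 0.0 = error reached).
              parent = list(initial_config)
              parent_fitness = distance_to_error(simulate(parent))  # hypothetical
              for _ in range(budget):
                  child = [random.random() if random.random() < 1.0 / len(parent) else gene
                           for gene in parent]
                  child_fitness = distance_to_error(simulate(child))
                  if child_fitness <= parent_fitness:
                      parent, parent_fitness = child, child_fitness
                  if parent_fitness == 0.0:
                      break  # environment driven into an error state: test case found
              return parent, parent_fitness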
  26. Basic Ideas about the Fitness Function
      • Evaluates how "good" the simulator configurations are
      • Can only be decided after the execution of a test case
      • Decided based on heuristics: how close was the test case to …
        – Approach level: … reaching an error state?
        – Branch distance: … solving the guard on a branch leading to an error state? (defined search heuristics for OCL expressions*)
        – Time distance: … taking a time transition that leads to an error state?
      • A sketch of how such heuristics combine follows
      * S. Ali, M.Z. Iqbal, A. Arcuri, L. Briand, "Generating Test Data from OCL Constraints with Search Techniques", forthcoming in IEEE Transactions on Software Engineering
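      The three heuristics are typically collapsed into one scalar along these lines (a sketch of the standard search-based testing combination; the exact formulation and the approach_level, branch_distance, and time_distance helpers are assumptions, not the paper's definitions):

          def normalize(distance):
              # Map a raw distance into [0, 1) so that it can never
              # outweigh one level of the approach level.
              return distance / (distance + 1.0)

          def fitness(trace, error_state):
              # Minimized: first the number of transitions still separating
              # the trace from the error state, then how close the missed
              # guard or time transition was to being taken.
              a = approach_level(trace, error_state)   # hypothetical helpers
              b = branch_distance(trace, error_state)
              t = time_distance(trace, error_state)
              return a + normalize(b + t)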
  27. Constraint Optimization to Verify CPU Usage
      S. Nejati, S. Di Alesio, M. Sabetzadeh, L. Briand, 2012
  28. System: Fire/Gas Detection and Emergency Shutdown
      [Architecture: control modules ↔ drivers (software-hardware interface) ↔ alarm devices (hardware), running on a multicore architecture with a real-time operating system]
  29. Safety Drivers May Overload the CPU
      • Drivers need to bridge the timing gaps between SW and HW (SIL 3)
      • Drivers have a flexible design: parallel threads communicating asynchronously; periodic and watchdog threads
      • Drivers are subject to real-time constraints to make sure they do not overuse the CPU time, e.g., "the processor spare time should not be less than 80% at any time"
      • Determine how many driver instances to deploy on a CPU
  30. I/O Driver
      [Diagram: three threads — T1 (Pull Data, periodic), T2 (Dispatch, watchdog, with a configurable delay/offset), T3 (Push Data, periodic) — implementing a communication protocol with configurable parameters, processing control data and sending HW commands. T2 consumes a lot of CPU time and, depending on whether its delay is small or large, may block T1 and/or T3, causing deadline misses and CPU overload]
  31. Safety Standards
      • IEC 61508 is a safety standard including guidelines for performance testing (iec.ch)
      • To achieve SIL levels 3-4, stress testing is "highly recommended"
  32. Search Objective
      • Stress test cases: values for environment-dependent parameters of the embedded software, e.g., the size of time delays used by the software to synchronize with hardware devices or to receive feedback from them
      • Goal: select delays to maximize the use of the CPU while satisfying design constraints
  33. General Approach: Modeling and Optimization
      • Inputs: a system design and platform model (UML/MARTE) and performance requirements (objective functions)
      • Modeling: the inputs are translated into a constraint program, solved by a CP engine (COMET)
      • Output: stress test cases, i.e., delay values leading to worst-case CPU time usage
  34. Information Requirements
      [Conceptual model (class diagram): a Scheduler with a SchedulingPolicy schedules Threads (priority, period, min/max inter-arrival time) on a ProcessingUnit (number of cores) under a GlobalClock; Threads run Activities (preemptive flag, min/max duration, delay; Start/Finish/Wait/Sleep/Resume/Trigger) ordered by temporal precedence and data dependencies, communicating synchronously or asynchronously through Buffers (size, access)]
  35. MARTE: Augmented Sequence Diagrams
      [Sequence diagram: IOTask, MailBox, and Buffer lifelines with Pull Data, scan, and Push Data activities, annotated with delay, duration, and deadline]
      • Some abstractions are design choices: delays, priorities, …
      • Some others depend on the environment: arrival times, …
  36. COMET Input Language
      • Platform and design properties modeled in UML are provided as input to our constraint program; they are constants in the program
      • Design properties include: threads, priorities, activities, durations, …
      • Pre-emption occurs at regular time periods (quanta); context-switching time is assumed negligible compared to the time quantum

          // 1) Input: time and concurrency information
          int c = ...;                      // #cores
          int n = ...; range J = 0..n-1;    // #threads
          int priority[J] = ...;            // priorities
          // ...
          // 2) Output: scheduling variables
          dvar int arrival_time[a in A] in T;         // actual arrival times
          dvar int start[a in A] in est[a]..lst[a];   // actual start times
          dvar int end[a in A] in eet[a]..let[a];     // actual end times
          // ...
          // 3) Objective function: performance requirement
          maximize sum(a in A) (maxl(0, minl(1, deadline_miss[a])));  // deadline misses
          // 4) Constraints: scheduling policy
          subject to {
            forall(a in A) {
              wf4: start[a] <= end[a];  // threads should end after their start time
              // ...
  37. Thread properties that can be tuned during testing are the output of our constraint program
      • Tunable parameters may include design and real-time properties
      • Tunable parameters are variables in the constraint program (the dvar scheduling variables in the listing above)
      • Some tunable parameters are the basis for the definition of stress test cases (e.g., delays); others are results from scheduling
  38. The performance requirement is modeled as an objective function to maximize
      • We focus on objective functions for CPU usage here (section 3 of the listing above)
      • Each objective function models a specific performance requirement
      • Testing a different performance requirement only requires changing the objective function (and constraints)
  39. The platform scheduler and properties are modeled through a set of constraints
      • Constraints express relationships between constants and variables (section 4 of the listing above)
      • Constraints are independent and can be modified to fit different platforms, for example a different scheduling algorithm
  40. Case Study (Driver)
      • We ran an experiment with real data and two different objective functions (f_usage and f_makespan)
      • It took a significant amount of time for the search to terminate, but optimal solutions were found shortly after the search started
      [Plots: objective value over time up to the termination time, for both objective functions — max CPU usage reaching 50%, and makespan reaching 550 ms]
  41. Conclusions
      • We re-expressed test case generation for CPU usage requirements as a constraint optimization problem
      • Approach:
        – A conceptual model for time abstractions
        – Mapping to MARTE
        – A constraint optimization formulation of the problem
        – Application of the approach to a real case study (albeit small)
      • Using a constraint solver does not seem to scale to large numbers of threads
      • Currently continuing this work with Delphi using metaheuristic search: 430 tasks, powertrain systems, AUTOSAR
  42. Testing Closed-Loop Controllers
      R. Matinnejad, S. Nejati, L. Briand, T. Bruckmann, C. Poull, 2013
  43. The complexity and amount of software on vehicles' Electronic Control Units (ECUs) grow rapidly
      • More functions
      • Comfort and variety
      • Safety and reliability
      • Faster time-to-market
      • Less fuel consumption
      • Greenhouse gas emission laws
  44. Three major software development stages in the automotive domain
      (MiL, SiL, HiL: Model-, Software-, and Hardware-in-the-Loop)
  45. Major Challenges in MiL-SiL-HiL Testing
      • Manual test case generation
      • Complex functions at MiL; large, integrated software/embedded systems at HiL
      • Lack of precise requirements and testing objectives
      • Hard to interpret the testing results
  46. MiL Testing
      • The ultimate goal of MiL testing is to ensure that individual functions behave correctly and in a timely way on any hardware configuration
      [Diagram: requirements ↔ individual functions]
  47. Main differences between automotive function testing and general software testing
      • Continuous behaviour
      • Time matters a lot
      • Several configurations
        – Huge number of detailed physical measures/ranges/thresholds captured by calibration values
  48. A Taxonomy of Automotive Functions
      • Controlling: state-based (state machine controllers) vs. continuous (closed-loop/PID controllers)
      • Computation: transforming (unit convertors) vs. calculating (positions, duty cycles, etc.)
      • Different testing strategies are required for different types of functions
  49. Controller-Plant Model and Its Requirements
      [Block diagram: the controller (SUT) takes the error between the desired value and the actual value fed back by the plant model, and drives the system output]
      [Plots of desired vs. actual values over time illustrate the requirements: (a) liveness, (b) smoothness, (c) responsiveness]
  50. Types of Requirements
      The SBPC function shall guarantee that the flap will move to, and stabilize at, its desired position within xx ms (1: functional/liveness). Further, the flap shall reach within yy% of its desired position within zz ms (2: responsiveness/performance). In addition, after reaching vv% close to the desired position, the flap shall not jump to a position more than ww% away from its desired position (3: smoothness/safety).
  51. Search Elements
      • Search:
        – Inputs: initial and desired values, configuration parameters
        – (1+1) EA
      • Search objective:
        – Example requirement that we want to test: liveness, |Desired − Actual(final)| ≈ 0
        – For each set of inputs, we evaluate the objective function over the resulting simulation graphs (a sketch follows)
      • Result:
        – Worst-case scenarios, i.e., values of the input variables that are most likely to break the requirement at the MiL level
        – Stress test cases based on actual hardware (HiL)
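      A sketch of evaluating such objectives over one simulation (illustrative only: simulate_mil stands in for the Matlab/Simulink run, and the metric definitions and the t_response parameter are assumptions, not the paper's exact formulations):

          def objectives(initial, desired, config, t_response=0.1):
              # Run one MiL simulation and derive one value per requirement;
              # larger values mean the scenario is closer to breaking it.
              ts, actual = simulate_mil(initial, desired, config)  # hypothetical

              # Liveness: final deviation from the desired value.
              liveness = abs(desired - actual[-1])

              # Responsiveness: deviation still present after t_response seconds.
              after = [a for t, a in zip(ts, actual) if t >= t_response]
              responsiveness = abs(desired - after[0]) if after else float("inf")

              # Smoothness: worst overshoot past the desired value, in the
              # direction of motion (negative if there is no overshoot).
              sign = 1 if desired >= initial else -1
              smoothness = max((a - desired) * sign for a in actual)

              return liveness, smoothness, responsiveness

      The (1+1) EA then searches over (initial, desired, config) to maximize one of these values at a time.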
  52. MiL Testing of Continuous Controllers: Overview
      • Exploration: the controller-plant model and the objective functions produce an overview (HeatMap) diagram, from which a domain expert selects a list of regions
      • Local search over the selected regions yields test scenarios
      [Plot: the actual value tracking a step from an initial desired value to a final desired value over time]
  53. Generated HeatMap Diagrams
      (a) Liveness, (b) Smoothness, (c) Responsiveness
  54. Random Search vs. (1+1) EA
      • Example with responsiveness analysis
      [Plots: objective value over iterations for random search and (1+1) EA]
  55. Conclusions
      • We found much worse scenarios during MiL testing than our partner had found so far
      • They are being run at the HiL level, where testing is much more expensive: MiL results drive test selection for HiL
      • On average, the results of the single-state search showed significant improvements over the results of the exploration algorithm
      • Open question: configuration parameters?
      • Need more exploitative or explorative search algorithms in different subregions
  56. Minimizing CPU Time Shortage Risks in Integrated Embedded Software
      S. Nejati, M. Adedjouma, L. Briand, T. Bruckmann, C. Poull, 2013
  57. Today's cars rely on integrated systems
      • Modular and independent development
      • Huge opportunities for division of labor and outsourcing
      • Need for reliable and effective integration processes
  58. An overview of an integration process in the automotive domain
      [Diagram: AUTOSAR models and software runnables from multiple suppliers are integrated through generated glue code into one system]
  59. AUTOSAR captures the timing properties of the runnables and their data dependencies
      [Metamodel: a ComponentType has Ports, Interfaces, and an InternalBehaviour with Runnables (period, DataRead/WriteAccess, DataSend/ReceivePoint); Runnables are mapped to OS Tasks (priority, cycle)]
      • In AUTOSAR, runnables are the atomic units of computation
      • Simplified structure of the glue code:

          /* Declarations for variables, runnables, and dependencies */
          ...
          /* Initializations, constructors and local functions */
          ...
          /* Executing runnables */
          Task o1() {  /* cycle o1 = 5 ms */
            if ((timer mod r1.period) == 0) do  /* timer mod 10 == 0 */
              execute runnable r1
            if ((timer mod r2.period) == 0) do  /* timer mod 20 == 0 */
              execute runnable r2
            if ((timer mod r3.period) == 0) do  /* timer mod 100 == 0 */
              execute runnable r3
            ...
          }
          Task o2() {
            ...
  60. CPU Time Shortage in Integrated Embedded Software
      • Challenge: many OS tasks, each with many runnables, run within a limited available CPU time; the execution time of the runnables may exceed the OS cycles
      • Our goal: reduce the maximum CPU time used per time slot, in order to
        – minimize the hardware cost
        – enable incremental addition of new functions
        – reduce the possibility of overloading the CPU in practice
      [Figure: two possible CPU time usage simulations for an OS task with a 5 ms cycle: (a) usage with bursts (undesirable), (b) desirable usage]
  61. We minimize the maximum CPU usage using runnables' offsets (delay times)
      • Inserting runnables' offsets shifts their activations so that usage bursts are flattened
      • Offsets have to be chosen such that the maximum CPU usage per time slot is minimized and, further,
        – the runnables respect their periods
        – the runnables respect the OS cycles
        – the runnables satisfy their synchronization constraints
  62. Metaheuristic Search Algorithms
      • The objective function is the max CPU usage over a 2-second simulation of the runnables
      • The search modifies one offset at a time, and updates other offsets only if timing constraints are violated
      • Single-state search algorithms for discrete spaces (HC, Tabu), with a restart option to make them more explorative (see the sketch below)
      • Case study: an automotive software system with 430 runnables
        – Running the system without offsets: max CPU usage of 5.34 ms
        – Our optimized offset assignment (lowest max CPU usage found by HC): 2.13 ms
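      A hill-climbing sketch for this offset-assignment problem (illustrative, not the paper's algorithm: max_cpu_usage stands in for the 2-second scheduling simulation, and valid_offsets(i) for the set of offsets satisfying runnable i's period, OS-cycle, and synchronization constraints):

          import random

          def hill_climb_offsets(n_runnables, restarts=5, candidates_per_step=30):
              # Single-state search with restarts: tweak one runnable's
              # offset at a time, keep the move if it lowers the max CPU
              # usage per time slot, restart randomly to escape local optima.
              best, best_cost = None, float("inf")
              for r in range(restarts):
                  current = ([0] * n_runnables if r == 0 else
                             [random.choice(valid_offsets(i)) for i in range(n_runnables)])
                  cost = max_cpu_usage(current)  # hypothetical simulation
                  improved = True
                  while improved:
                      improved = False
                      for _ in range(candidates_per_step):
                          neighbor = current[:]
                          i = random.randrange(n_runnables)
                          neighbor[i] = random.choice(valid_offsets(i))  # hypothetical
                          c = max_cpu_usage(neighbor)
                          if c < cost:
                              current, cost, improved = neighbor, c, True
                              break
                  if cost < best_cost:
                      best, best_cost = current, cost
              return best, best_cost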
  63. Comparing Different Search Algorithms
      [Box plots per algorithm: best CPU usage found (ms) and time to find it (s)]
  64. Comparing Our Best Search Algorithm with Random Search
      • (a) Lowest max CPU usage values computed by HC within 70 ms, over 100 different runs
      • (b) Lowest max CPU usage values computed by Random within 70 ms, over 100 different runs
      • (c) Average behavior of Random and HC in computing the lowest max CPU usage values within 70 s, over 100 different runs
  65. Conclusions
      • We developed a number of search algorithms to compute offset values that reduce the max CPU time needed
      • Our evaluation shows that our approach is able to generate reasonably good results for a large automotive system in a small amount of time
      • Due to the large number of runnables and the orders-of-magnitude differences between runnables' periods and their execution times, we were not able to use constraint solvers
  66. MDE Projects Overview (< 5 years)
      Company             | Domain                                 | Objective                        | Notation                 | Automation
      --------------------|----------------------------------------|----------------------------------|--------------------------|--------------------------------------
      ABB                 | Robot controller                       | Testing                          | UML                      | Model traversal for coverage criteria
      Cisco               | Video conference                       | Testing (robustness)             | UML profile              | Metaheuristic
      Kongsberg Maritime  | Fire and gas safety control system     | Certification                    | SysML + req traceability | Slicing algorithm
      Kongsberg Maritime  | Oil & gas, safety-critical drivers     | CPU usage analysis               | UML + MARTE              | Constraint solver
      FMC                 | Subsea system                          | Automated configuration          | UML profile              | Constraint solver
      WesternGeco         | Marine seismic acquisition             | Testing                          | UML profile + MARTE      | Metaheuristic
      DNV                 | Marine and energy, certification body  | Compliance with safety standards | UML profile              | Constraint verification
      SES                 | Satellite operator                     | Testing                          | UML profile              | Metaheuristic
      Delphi              | Automotive systems                     | Testing (safety + performance)   | Matlab/Simulink          | Metaheuristic
      Lux. Tax department | Legal & financial                      | Legal req. QA & testing          | Under investigation      | Under investigation
  67. Conclusions I
      • Models
        – UML profiles, MARTE, SysML, Matlab/Simulink, AUTOSAR
        – Never UML only: some tailoring is always required
        – Always a specific modeling methodology, i.e., how to use the notation => semantics
        – Driven by
          • the information needed to guide automation, e.g., search, optimization
          • an appropriate level of abstraction: scalability, cost-effectiveness
          • test/verification objectives
          • current modeling practices and skills
          • reliance on standards
        – Modeling requirements must be "reasonable": achievable and cost-effective for engineers
  68. Conclusions II
      • Automation
        – Many testing and verification problems can be re-expressed as search/optimization problems
        – Search-based software engineering: enabling automation for hard problems
        – But limited, though growing, application to model-driven engineering
        – Models are needed to guide search and optimization
        – Many search techniques: search algorithm, fitness function, …
        – It is not easy to choose which one to use for a given problem => empirical studies
        – Promising results, scalability (incomplete search)
  69. Discussions
      • Constraint solvers (e.g., Comet, ILOG CPLEX, SICStus)
        – Is there an efficient constraint model for the problem at hand?
        – Can effective heuristics be found to order the search?
        – Better if the problem matches a known standard problem, e.g., job-shop scheduling
        – Tend to be strongly affected by small changes in the problem, e.g., allowing task pre-emption
      • Model checking
        – Detailed operational models (e.g., state models), involving temporal properties (e.g., CTL)
        – Enough detail to analyze statically or execute symbolically
        – These modeling requirements are usually not realistic in actual system development
        – Originally designed for checking temporal properties, as opposed to explicit timing properties
  70. Discussions II
      • Metaheuristic search
        – Tends to be more versatile and tailorable to a new problem
        – Entails lower modeling requirements
        – Can provide a response at any time, without systematically and deterministically searching the same part of the space on every run
        – "Best solution found within time constraints": not a proof, no certainty
        – But in practice (complex) models are not fully correct, so there is no certainty anyway
  71. Selected References
      • L. Briand, Y. Labiche, M. Shousha, "Using genetic algorithms for early schedulability analysis and stress testing in real-time systems", Genetic Programming and Evolvable Machines 7(2): 145-170, 2006
      • M. Shousha, L. Briand, Y. Labiche, "UML/MARTE Model Analysis Method for Uncovering Scenarios Leading to Starvation and Deadlocks in Concurrent Systems", IEEE Transactions on Software Engineering 38(2), 2012
      • Z. Iqbal, A. Arcuri, L. Briand, "Empirical Investigation of Search Algorithms for Environment Model-Based Testing of Real-Time Embedded Software", ACM ISSTA 2012
      • S. Nejati, S. Di Alesio, M. Sabetzadeh, L. Briand, "Modeling and Analysis of CPU Usage in Safety-Critical Embedded Systems to Support Stress Testing", ACM/IEEE MODELS 2012
      • S. Nejati, M. Sabetzadeh, D. Falessi, L. Briand, T. Coq, "A SysML-based approach to traceability management and design slicing in support of safety certification: Framework, tool support, and case studies", Information & Software Technology 54(6): 569-590, 2012
      • L. Briand et al., "Traceability and SysML Design Slices to Support Safety Inspections: A Controlled Experiment", forthcoming in ACM Transactions on Software Engineering and Methodology, 2013
  72. Selected References (cont.)
      • R. K. Panesar-Walawege, M. Sabetzadeh, L. Briand, "Supporting the verification of compliance to safety standards via model-driven engineering: Approach, tool-support and empirical validation", Information & Software Technology 55(5): 836-864, 2013
      • R. Behjati, T. Yue, L. Briand, B. Selic, "SimPL: A product-line modeling methodology for families of integrated control systems", Information & Software Technology 55(3): 607-629, 2013
      • H. Hemmati, A. Arcuri, L. Briand, "Achieving scalable model-based testing through test case diversity", ACM Transactions on Software Engineering and Methodology 22(1): 6, 2013
      • N. E. Holt, R. Torkar, L. Briand, K. Hansen, "State-Based Testing: Industrial Evaluation of the Cost-Effectiveness of Round-Trip Path and Sneak-Path Strategies", ISSRE 2012: 321-330
      • R. Behjati, T. Yue, L. Briand, "A Modeling Approach to Support the Similarity-Based Reuse of Configuration Data", MoDELS 2012: 497-513
      • S. Ali, L. Briand, A. Arcuri, S. Walawege, "An Industrial Application of Robustness Testing Using Aspect-Oriented Modeling, UML/MARTE, and Search Algorithms", MoDELS 2011: 108-122
