DFSS for Software-Intensive Systems
  • Historically, six sigma has been tied to applying statistics to the way we view and analyze processes for producing a product, and that thought process has also been tied to the way we design a product. A point to consider is whether six sigma should be tied to the process for designing a product, or to the analysis of the performance of the product. A common theme we see in development program post-mortems is that the integrity and stability of system requirements are the predominant factors in our ability to meet PD milestones and budgets. Statistical methods and tools now exist for the System Engineer that facilitate early requirements definition and assist in trade studies as far back as the Science and Technology phases. Likewise, statistical and other DFSS methods and tools are available for detail design. These capabilities, coupled with a well-defined integrated Product Development methodology, give a whole new meaning and great improvements to the concept of “transition to production”. The result: more robust requirements, more robust designs, reduced production costs, and reduced risk in meeting Product Development schedules.
  • The three areas of DFSS are:
    • Affordability – includes Design to Cost, Cost as an Independent Variable, and life cycle costs (not just acquisition costs); how much will the product cost the customer long-term?
    • Producibility – ensuring the design is producible and manufacturable (DFMA, PCAT).
    • Performance – ensuring the design meets the performance requirements that drive customer success; starts with the Voice of the Customer, translates it to key performance parameters, and then to the subassembly/component specifications that drive the KPPs. Includes robust prediction of performance “sigma”, i.e., Cp and Cpk.
  • Understanding the Voice of the Customer; Creating and Evaluating Architectural Alternatives; Modeling Performance, Affordability and Producibility; Exploring and Optimizing the Design Trade Space; Defect Containment and Test Optimization.
  • S1: startup state; S2: Software Under Test state; S3: operational state. This type of testing allows you to enter and exit multiple states and find “hidden bugs”. A memory-leak bug was found in the software after entering the S5 state three times; it could not have been found with traditional testing. This is more realistic from a user/Customer view than the traditional model, which simply shows whether a state can be entered and exited.
  • From the previous visual representation of the problem, we can now discuss how DFSS and statistics can be used in design. First, a functional representation of the system function is needed. This can be in the form of an equation, model, or computer simulation. This representation is no different from that typically used by engineering to predict a specific response for the system. The problem is that most of these representations provide a nominal or deterministic analysis of the response: one set of inputs gives one output, and variation is taken into account only by trying different parameter values, which can be inefficient. We need to transform this deterministic approach into a probabilistic approach.
  • This module deals with quantifying how well our processes meet customer specifications.
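    As a small illustration of that quantification, here is a sketch (in Python, with invented spec limits and sample data) of the Cp and Cpk capability indices referenced in these notes:

    import statistics

    def capability(samples, lsl, usl):
        """Cp compares spec width to process spread; Cpk also penalizes
        an off-center process (both assume approximate normality)."""
        mu = statistics.mean(samples)
        sigma = statistics.stdev(samples)
        cp = (usl - lsl) / (6 * sigma)
        cpk = min(usl - mu, mu - lsl) / (3 * sigma)
        return cp, cpk

    # Hypothetical timing measurements against an 18-22 ms requirement.
    data = [19.6, 20.1, 19.9, 20.4, 19.8, 20.2, 20.0, 19.7, 20.3, 19.9]
    cp, cpk = capability(data, lsl=18.0, usl=22.0)
    print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")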
  • What is obtained as a result of applying DFSS in product design? First, we must begin with a representation of the system response. This can be based on the system block diagram, a hierarchical representation, equations, a computer simulation, etc. What we want to obtain is a description of the response's statistical properties, the predicted probability of non-compliance, and an assessment of how each parameter contributes to the variation of the response. The Probability of Non-Compliance and Contribution to Variation are two of the most critical results of a DFSS analysis. From the probabilistic analysis, the response variation can be expressed in the form of statistical parameters, a histogram, or a PDF. If spec limits are defined for the response, they can be placed on the distribution and the PNC computed. Next, the variation of each system parameter can be correlated with the variation of the response to provide contribution information. This information is used in the allocation of tolerances and the definition of process capabilities.

Presentation Transcript

  • DFSS for Software-Intensive Systems Neal Mackertich & Trindy LeForge Raytheon Company
  • Software Project Challenges: Right Architecture? The Voice of the Customer, Resource Availability, Defect Management, Product Cost, What to Test?, Schedule Pressure, Technical Performance
  • Process Integration
    • IPDS provides an integrated set of best practices for the entire product development life cycle through a just-in-time tailoring process.
    • R6s is a business strategy for process improvement. It guides us to use CMMI and IPDS as tools to deliver value to customers and integrate industry best practices.
    • CMMI provides guidance for creating, measuring, managing, and improving specific processes.
    • Each plays an integral role in the success of programs, projects and organizations; programs integrate R6Sigma, IPDS, and CMMI into their plans.
  • Design for Six Sigma (DFSS): Design for Six Sigma is a methodology used within IPDP to predict, manage and improve Producibility, Affordability, and Robust Performance for the benefit of our customers. In essence, DFSS is the application of R6s in product optimization.
  • DFSS Enablement of Software Project Performance
    [IPDS gate diagram: Business Decision Gates 1–4 and Program Execution Gates 5–11, spanning Capture/Proposal Development; Project Planning, Management and Control; Requirements and Architecture Development; Product Design and Development; System Integration, Verification and Validation; Production and Deployment Planning; and Operations and Support, with internal reviews (Start-Up, System Functional, Preliminary Design, Critical Design, Test/Ship Readiness, Production Readiness) and the DFSS enablers mapped across the life cycle: VOC Modeling & Analysis, Architectural Evaluation, Performance Modeling, Defect Prediction & Prevention, Critical Chain Project Management, Test Optimization.] DFSS is the application of Six Sigma in product optimization.
  • Usage Based Modeling
    [state-transition diagram: states S1–S6 plus Exit, with transition probabilities of 10–100% on the arcs]
    • States - represent pertinent usage history, i.e., the state of the software from an external user’s perspective.
    • Arcs - represent state transitions caused by applying stimuli
    • Transition probabilities - simulate expected user behavior
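    A minimal sketch of how such a usage model drives test generation: random walks over the Markov chain yield statistically representative test cases. The state names follow the slide; the arc probabilities below are illustrative stand-ins, since the diagram's exact assignments did not survive transcription.

    import random

    usage_model = {
        "S1": [("S2", 0.70), ("S3", 0.30)],
        "S2": [("S4", 1.00)],
        "S3": [("S5", 0.25), ("S6", 0.75)],
        "S4": [("S5", 0.10), ("S2", 0.90)],
        "S5": [("S6", 1.00)],
        "S6": [("Exit", 1.00)],
    }

    def generate_test_case(start="S1", max_steps=50):
        """Walk the chain from startup toward Exit, recording the path.

        Repeated visits to the same state (e.g., entering S5 several
        times) emerge naturally, which is how usage-based testing can
        expose bugs such as the memory leak described in the notes."""
        path, state = [start], start
        for _ in range(max_steps):
            if state == "Exit":
                break
            states, probs = zip(*usage_model[state])
            state = random.choices(states, weights=probs)[0]
            path.append(state)
        return path

    for i in range(3):
        print(f"test case {i + 1}: {' -> '.join(generate_test_case())}")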
  • VOC Analysis using QFD
    • Customer Priorities
      • Reduce Cost & Schedule
      • Increase Interoperability & Performance
      • Increase System-of-Systems Interoperability
      • Increase Reuse
      • Reduce Proprietary Software
    • Baseline Design Alternatives
      • System-of-Systems Components
      • JLENS Spiral 2 Components
      • SLAMRAAM Components
    Findings: SLAMRAAM provides low-cost design alternatives; baseline elements with limited reuse and interoperability; baseline elements that improve interoperability; availability of SoS elements has significant impacts on schedule.
    [House of Quality diagram: Voice of the Customer priorities, direction of improvement, HOWs, relationship matrix, correlation matrix, HOW MUCHes, competitive evaluation, rankings/priorities]
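    The house-of-quality arithmetic behind such an analysis can be sketched in a few lines: customer priority weights are pushed through the relationship matrix to score each design alternative. The weights and relationship strengths below are illustrative placeholders, not the actual study values:

    priorities = {                 # WHATs: customer priorities, 1-5
        "Reduce cost & schedule": 5,
        "Increase interoperability & performance": 4,
        "Increase reuse": 3,
        "Reduce proprietary software": 2,
    }
    # Conventional QFD scale: 9 = strong, 3 = moderate, 1 = weak.
    relationship = {               # HOWs: baseline design alternatives
        "System-of-Systems components": {
            "Reduce cost & schedule": 1,
            "Increase interoperability & performance": 9,
            "Increase reuse": 3,
            "Reduce proprietary software": 3,
        },
        "JLENS Spiral 2 components": {
            "Reduce cost & schedule": 3,
            "Increase interoperability & performance": 3,
            "Increase reuse": 9,
            "Reduce proprietary software": 1,
        },
        "SLAMRAAM components": {
            "Reduce cost & schedule": 9,
            "Increase interoperability & performance": 1,
            "Increase reuse": 3,
            "Reduce proprietary software": 3,
        },
    }
    # Technical priority of each HOW = sum of (priority x strength).
    scores = {how: sum(priorities[w] * s for w, s in row.items())
              for how, row in relationship.items()}
    for how, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{how:30s} {score}")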
  • “New & Nifty” Pen – QFD Example: Design a Pen for Students Ages 5–18
  • Architectural Evaluation
    • “If the system is going to be built by more than one person - and these days, what system isn’t - it is the architecture that lets them communicate and negotiate work assignments. If the requirements include goals for performance, security, reliability, or maintainability, then architecture is the design artifact that first expresses how the system will be built to achieve those goals.”
    • “… it is possible to evaluate an architecture and its design decisions, to analyze decisions, in the context of the goals and requirements that are levied on systems that will be built from it.”
    • (Clements, Kazman & Klein, Evaluating Software Architectures: Methods & Case Studies, SEI Series in Software Engineering)
  • Architectural Tradeoff Analysis Method
    • ATAM is a structured process in which the right questions are asked early to:
      • Discover risks -- alternatives that might create future problems
      • Discover sensitivity points -- alternatives for which a slight change makes a significant difference
      • Discover tradeoffs -- decisions affecting more than one attribute
    • DD(X) customer observation: “The ATAM report represents a very good application of the ATAM, given the relative newness of the methodology and without assistance from ATAM expertise. Many new risks were identified and documented and many more existing risks in the risk register were corroborated as a result. A variety of scenarios and threads were generated during the ATAM and over 130 sensitivity points were identified and mapped to risks. As a result of the ATAM tradeoff and sensitivity analyses, (there were) nine identified significant findings.”
  • Architectural FMECA
    • Architectural FMECA is a systematic technique utilized to:
    • Identify architectural risks and opportunities
    • Prioritize according to (a scoring sketch follows this list):
      • Impact
      • Criticality
      • Probability of occurrence
      • Probability of early detection
    • Identify alternatives / hybrids to leverage opportunities and mitigate risks
    • Document and track actions taken to reduce risk
    • Improve the probability of success
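    A sketch of the prioritization step: each architectural failure mode is scored on the four criteria listed above and ranked by a risk priority number. The 1-10 scales and example items are assumptions; the slide does not define the exact scoring scheme.

    from dataclasses import dataclass

    @dataclass
    class FailureMode:
        name: str
        impact: int             # 1 (negligible) .. 10 (catastrophic)
        criticality: int        # 1 .. 10
        p_occurrence: int       # 1 (rare) .. 10 (frequent)
        p_early_detection: int  # 1 (certain to catch) .. 10 (unlikely)

        @property
        def rpn(self):
            # Higher detection score = harder to catch early = riskier.
            return (self.impact * self.criticality *
                    self.p_occurrence * self.p_early_detection)

    modes = [
        FailureMode("Middleware latency under peak load", 8, 7, 4, 6),
        FailureMode("Single point of failure in message bus", 9, 8, 2, 3),
        FailureMode("COTS license change breaks build", 4, 3, 5, 2),
    ]
    for m in sorted(modes, key=lambda m: -m.rpn):
        print(f"RPN {m.rpn:5d}  {m.name}")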
  • Critical Chain Project Management
    • Critical Chain Multi-Project Management was first developed by Dr. Eliyahu M. Goldratt, founder of the Theory of Constraints
    • An established, practical project management technique that companies have successfully used to reduce development cycle time
      • Based on statistical methodologies
    • Being used at Raytheon and across industry
    • Increasingly accepted by our customers
  • Multi-Tasking is an Insidious Generator of… Waste
    When (if) we look at our “Resource Allocation” we typically discover high degrees of multi-tasking. Why don’t we meet our schedule commitments?!?
    [diagram: Resource 1, Resource 2, Resource 3 timelines showing fragmented, interleaved tasks]
    • Multi-tasking: assigning more than one task to a resource
    Effect of Multi-Tasking. Expectations: Task A, Task B, Task C finish in sequence. What happens: A, B, C, A, B, C interleaving, with lost productivity due to context switching. What should happen: Task A, then Task B, then Task C, with priorities established and followed.
  • Protecting Our Customers
    • A single “project buffer” can be defined to cover the variation that occurs across all individual tasks (a project buffer is not the same thing as management reserve)
    • The project buffer PROTECTS OUR CUSTOMERS from the variability and uncertainty within each task
    • The PM can now manage the uncertainty as a whole, not as a compilation of the individual tasks
    [diagram: tasks 1–9 on the critical chain followed by a project buffer, protecting a 95%-confidence completion date]
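    One common way to size such a buffer (a generic CCPM heuristic, not necessarily the calculation behind the slide's 95% figure) is the square-root-of-sum-of-squares method: cut each task to its aggressive 50% estimate and aggregate the removed safety in quadrature. Task durations below are invented:

    from math import sqrt

    # (aggressive 50% estimate, padded "safe" estimate) in days, tasks 1-9
    tasks = [(4, 8), (6, 10), (3, 6), (5, 9), (2, 4),
             (7, 12), (4, 7), (3, 5), (6, 11)]

    chain_length = sum(p50 for p50, _ in tasks)
    # SSQ buffer: aggregate per-task safety (safe - p50) in quadrature,
    # which grows much more slowly than summing every task's padding.
    buffer = sqrt(sum((safe - p50) ** 2 for p50, safe in tasks))

    print(f"critical chain: {chain_length} days")
    print(f"project buffer: {buffer:.1f} days "
          f"(vs. {sum(s - p for p, s in tasks)} days of embedded padding)")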
  • It all starts with your model...
    • An Equation, Model, or Simulation of the system function is usually available:
    [diagram: Input → y = f(x) → Output, built from spreadsheets, equations, or simulations]
    We need to transform this deterministic approach into a probabilistic approach.
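    A minimal sketch of that transformation: instead of evaluating f at one nominal point, sample the inputs from distributions and propagate them through f. The transfer function and input distributions are invented for illustration:

    import random

    def f(gain, loss):                 # any equation, model, or simulation
        return gain - loss

    nominal = f(30.0, 8.0)             # deterministic: one input set, one output
    samples = [f(random.gauss(30.0, 1.0), random.gauss(8.0, 0.5))
               for _ in range(100_000)]
    mean = sum(samples) / len(samples)
    print(f"nominal y = {nominal}, simulated mean = {mean:.2f}, "
          f"min = {min(samples):.2f}, max = {max(samples):.2f}")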
  • DFSS Statistical Requirements Analysis
    [diagram: design variables A–M, each with its own distribution, feeding the response Y = f(A, B, C, D, E, F, …, M); the response distribution drives allocation / flow-down of requirements to the design variables]
  • DFSS Performance Analysis results in...
    • A prediction of the Response Statistical Properties
    • A prediction of the Probability of Non-Compliance
    • An assessment of the Contribution of Parameter Variation to the Response Variation
    Example results from Crystal Ball® Monte Carlo software for Y = f(A, B, C, D, E, F, …, M): the response PDF(Y), Prob(LL < Y < UL), and statistical properties μy, σy, β1y, β2y, plus each parameter's contribution measured by rank correlation (G-Sys. Losses −.45, A-Pavg .35, D-Ant. Eff. .35, F-Integ. Eff. .34, J-Rec. BW −.34, B-Ant. Gain .29, H-Tgt RCS .23, C-Ant. Aperture .21, K-Pulse Width −.19, M-Rec. Out SNR −.15, I-Noise Figure −.12, L-Rep. Freq. −.03).
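    A sketch of how the two headline outputs named above can be estimated with plain Monte Carlo: the probability of non-compliance from the sampled response, and each parameter's contribution via Spearman rank correlation (the measure used in the chart). The model, spec limits, and distributions are illustrative assumptions:

    import random

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5

    def ranks(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0.0] * len(xs)
        for pos, i in enumerate(order):
            r[i] = float(pos)
        return r

    def spearman(xs, ys):
        return pearson(ranks(xs), ranks(ys))   # Pearson on the ranks

    random.seed(1)
    N = 20_000
    A = [random.gauss(35.0, 1.0) for _ in range(N)]   # e.g., average power
    G = [random.gauss(6.0, 0.8) for _ in range(N)]    # e.g., system losses
    Y = [a - g for a, g in zip(A, G)]                 # response y = f(A, G)

    LL, UL = 27.0, 31.0                               # spec limits on Y
    pnc = sum(not (LL < y < UL) for y in Y) / N
    print(f"PNC = {pnc:.3%}")
    for name, xs in (("A", A), ("G", G)):
        print(f"contribution of {name}: rank correlation {spearman(xs, Y):+.2f}")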
  • Defect Density Predictive Model Motivation
    • Defect Containment is a lagging indicator.
      • Within a single development program it is difficult at best to:
        • Analyze the data
        • Determine root cause
        • Take action
        • Measure results
      • Data does not give the complete picture until the development is over.
    • An effective defect prediction model provides the ability to mix actual and predictive performance to show what the picture will look like before you get there.
      • This enables us to make better decisions and take more effective preventive action.
  • Defect Density Predictive Model Approach (a computational sketch follows this list)
    • Start from the defect model of a previous program/release: a standard defect containment chart with raw defect counts.
    • Prorate the raw defect counts by the ratio SLOC_New / SLOC_Model to obtain projected raw defect counts for the new program.
    • Track the new program's defect containment (current raw defect counts) by stage, each with a % complete: System Requirements Analysis (A), Software Requirements Analysis (B), Preliminary Design (C), Detailed Design (D), Code (E), Unit Test (F), Software Integration (G), Software Qualification Test (H), System Integration (I), System Test (J), Post System Test (K).
    • % complete is supplied on an as-needed basis for analysis (minimum monthly) and should match EVMS; the current SLOC estimate is required for model calibration and should match the tracking book.
    • The model assumption is that if a stage is XX% complete, then we have detected XX% of the defects “normally” found in that stage; (1 − % complete) is used as a multiplier on the raw defect counts for each stage to calculate counts for defects not yet detected.
    • The new program's projected defect containment is the sum of the current raw defect counts plus the raw defects not yet detected; model charts are normalized by size.
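    The prorating logic above, sketched computationally; the stage counts, SLOC figures, and % complete values are invented placeholders:

    model_defects = {      # raw defect counts from the previous program
        "SW Req Analysis": 120, "Prelim Design": 90, "Detailed Design": 150,
        "Code": 300, "Unit Test": 180, "SW Integration": 110,
    }
    sloc_model, sloc_new = 250_000, 400_000
    pct_complete = {       # from EVMS, per stage, for the new program
        "SW Req Analysis": 1.00, "Prelim Design": 1.00, "Detailed Design": 0.80,
        "Code": 0.40, "Unit Test": 0.10, "SW Integration": 0.00,
    }
    actual_so_far = {      # defects actually detected to date
        "SW Req Analysis": 160, "Prelim Design": 170, "Detailed Design": 190,
        "Code": 180, "Unit Test": 25, "SW Integration": 0,
    }

    scale = sloc_new / sloc_model
    for stage, model_count in model_defects.items():
        projected = model_count * scale                  # size-prorated model
        not_yet = projected * (1 - pct_complete[stage])  # undetected share
        total = actual_so_far[stage] + not_yet           # actual + predicted
        print(f"{stage:16s} projected total: {total:6.0f} "
              f"(density {total / (sloc_new / 1000):.2f}/KSLOC)")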
  • Defect Density By Stage Detected and Originated
    • Defect density by stage uses total defects and source lines of code counts to assess the efficiency of catching and containing software defects in the various lifecycle stages.
  • Defect Density – Code Inspection Analysis Tips
    • Tips: The first level of understanding in analyzing Defect Density at Code Inspection is:
      • Analysis of each data point resulting from code peer review using the Code Inspection Calculator.
    • When the number of defects is between the control limits, fix the findings and pass to the next stage.
    • When the number of defects is below the lower limit, the unit(s) may need more inspection.
    • When the number of defects is above the upper limit, the unit(s) may need rework and a repeated inspection.
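    The slide's Code Inspection Calculator is not described in detail, so the sketch below uses a textbook u-chart (defects per KSLOC) as one standard way to set such control limits; the historical mean and inspection data are invented:

    from math import sqrt

    ubar = 12.0   # historical mean defect density, defects per KSLOC

    def inspection_disposition(defects_found, ksloc_inspected):
        """Compare a unit's inspection result to 3-sigma u-chart limits."""
        ucl = ubar + 3 * sqrt(ubar / ksloc_inspected)
        lcl = max(0.0, ubar - 3 * sqrt(ubar / ksloc_inspected))
        u = defects_found / ksloc_inspected
        if u > ucl:
            return f"u={u:.1f} above UCL {ucl:.1f}: rework and re-inspect"
        if u < lcl:
            return f"u={u:.1f} below LCL {lcl:.1f}: may need more inspection"
        return f"u={u:.1f} within ({lcl:.1f}, {ucl:.1f}): fix findings, pass on"

    print(inspection_disposition(defects_found=15, ksloc_inspected=0.4))  # above
    print(inspection_disposition(defects_found=4, ksloc_inspected=0.5))   # within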
  • Test Optimization
    • Industry studies have shown test to represent between 30 and 50% of product development costs. If this is even close to accurate, test represents fertile ground for optimization.
    • Towards this end, the Department of Defense and the National Academy of Sciences have cited two industry best practices in the area of statistically based test optimization:
    • Usage-based Stochastic Modeling (based on VOC analysis)
    • Combinatorial Design Methods (CDM)
  • CDM Advantages / Results
    • N-way combinations provide reasonable, statistically balanced coverage to be augmented with domain expertise.
    • CDM is more realistic than full/fractional experimental designs:
      • Compatible with constraints
      • Compatible with factors at different levels
      • Can account for previous tests
    • Drastically reduces the total number of test cases when compared to all combinations.
    • Since generating test cases is very quick and simple, there are no major barriers to using CDM as part of the testing process.
    • Can be used in almost all phases of systems & subsystem testing.
    • IIS Satellite Ground Systems CDM application resulted in significant schedule (68%) and cost savings (67%).
  • A Testing Scenario
    • Definition: black-box testing geared to the functional requirements of an application.
    • Example application: a graphics manipulation function that converts an image from one format to another.
    • Inputs: Source Format (GIF, JPG, TIFF, PNG); Destination Format (GIF, JPG, TIFF, PNG); Size (Small, Med, Large); # colors (4 bit, 8 bit, 16 bit, 24 bit); Destination (local drive, network drive); Windows Version (95, 98, NT, 2000, Me).
    • Outputs: correct conversion (True or False).
    • Constraints: if Destination Format is GIF, then # colors cannot be 16 bit or 24 bit.
    • All combinations: 1,920 test cases. 2-way combinations: 22 test cases. (A pairwise generation sketch follows.)
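    A runnable sketch of 2-way (pairwise) CDM generation for this scenario, using a simple greedy algorithm; production CDM tools search more cleverly, so the exact suite size may differ slightly from the slide's 22 cases:

    from itertools import combinations, product

    factors = {
        "source":  ["GIF", "JPG", "TIFF", "PNG"],
        "dest":    ["GIF", "JPG", "TIFF", "PNG"],
        "size":    ["Small", "Med", "Large"],
        "colors":  ["4bit", "8bit", "16bit", "24bit"],
        "drive":   ["local", "network"],
        "windows": ["95", "98", "NT", "2000", "Me"],
    }
    names = list(factors)

    def valid(case):
        # Slide constraint: GIF output cannot use 16- or 24-bit color.
        return not (case.get("dest") == "GIF"
                    and case.get("colors") in ("16bit", "24bit"))

    def pairs_of(case):
        return {((f1, case[f1]), (f2, case[f2]))
                for f1, f2 in combinations(names, 2)}

    # Every valid (factor, level) pair that the suite must cover.
    uncovered = {((f1, v1), (f2, v2))
                 for f1, f2 in combinations(names, 2)
                 for v1, v2 in product(factors[f1], factors[f2])
                 if valid({f1: v1, f2: v2})}

    suite = []
    while uncovered:
        # Greedy: scan all 1,920 candidate cases and keep the one that
        # covers the most still-uncovered pairs.
        best_case, best_cov = None, set()
        for levels in product(*factors.values()):
            case = dict(zip(names, levels))
            if not valid(case):
                continue
            cov = pairs_of(case) & uncovered
            if len(cov) > len(best_cov):
                best_case, best_cov = case, cov
        suite.append(best_case)
        uncovered -= best_cov

    print(f"{len(suite)} pairwise cases cover all valid 2-way combinations "
          f"(vs. 1,920 exhaustive)")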
  • World-Class Software DFSS
      • Voice of the Customer modeling and analysis as an integral part of the Requirements analysis process.
      • Up-front Software Architectural trade space evaluation (vs. validation).
      • Statistically managing our software development cycle time using critical chain concepts.
      • Statistical modeling & optimization of the performance / cost software design trade space.
      • Enabled decision-making through predictive defect modeling.
      • Stochastically modeled System Integration, Verification & Validation.