Historically, Six Sigma has been tied to applying statistics to the way we view and analyze the processes used to produce a product. The same thinking has also been applied to the way we design a product. A point to consider is whether Six Sigma should be tied to the process of designing a product, or to the analysis of the product's performance. A common theme we see in development program post-mortems is that the integrity and stability of system requirements are the predominant factors in our ability to meet Product Development milestones and budgets. Statistical methods and tools now exist for the System Engineer that facilitate early requirements definition and assist in trade studies as far back as the Science and Technology phases. Likewise, statistical and other DFSS methods and tools are available for detailed design. These capabilities, coupled with a well-defined integrated Product Development methodology, give a whole new meaning, and bring great improvements, to the concept of "transition to production". The result: more robust requirements, more robust designs, reduced production costs, and reduced risk in meeting Product Development schedules.
The three areas of DFSS are:
Affordability – includes Design to Cost, Cost as an Independent Variable, and life cycle costs (not just acquisition costs); how much will the product cost the customer long-term?
Producibility – ensures the design is producible and manufacturable (DFMA, PCAT).
Performance – ensures the design meets the performance requirements that drive customer success; starts with the Voice of the Customer, translates it to key performance parameters (KPPs), and then to the subassembly/component specifications that drive the KPPs. Provides robust prediction of performance "sigma", i.e. Cp, Cpk.
Understanding the Voice of the Customer
Creating and Evaluating Architectural Alternatives
Modeling Performance, Affordability and Producibility
Exploring and Optimizing the Design Trade Space
Defect Containment and Test Optimization
S1: startup state
S2: Software Under Test state
S3: Operational state
This type of testing allows you to enter and exit multiple states and find "hidden bugs". For example, a memory-leak bug was found in the software after entering the S5 state three times, a defect that could not have been found with traditional testing. This is more realistic from a user/Customer view than the traditional model, which simply shows whether a state can be entered and exited.
From the previous visual representation of the problem, we can now discuss how DFSS and statistics can be used in design. First, a functional representation of the system function is needed. This can be in the form of an equation, model, or computer simulation. This representation is no different from that typically used by engineering to predict a specific response for the system. The problem is that most of these representations provide a nominal or deterministic analysis of the response: one set of inputs gives one output. Variation is taken into account only by trying different parameter values, which can be inefficient. We need to transform this deterministic approach into a probabilistic one.
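As a minimal sketch of this shift (the parameter names, nominal values, and normal distributions here are illustrative assumptions, not from the slides), a deterministic response function can be wrapped in a Monte Carlo loop so that input variation propagates into a response distribution:

```python
import random
import statistics

def response(gain, loss):
    """Deterministic engineering model: one set of inputs -> one output."""
    return gain - loss

# Deterministic analysis: nominal parameter values only.
nominal = response(30.0, 4.5)

# Probabilistic analysis: sample each parameter from an assumed distribution.
random.seed(1)
samples = [response(random.gauss(30.0, 0.5), random.gauss(4.5, 0.3))
           for _ in range(10_000)]

print(f"nominal = {nominal:.2f}")
print(f"mean = {statistics.mean(samples):.2f}, "
      f"std dev = {statistics.stdev(samples):.2f}")
```

The same deterministic model is reused unchanged; only the treatment of its inputs differs, which is why this approach fits on top of existing engineering representations.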
This module deals with quantifying how well our processes meet customer specifications.
What is obtained as a result of applying DFSS in product design? First, we must begin with a representation of the system response. This can be based on the system block diagram, a hierarchical representation, equations, a computer simulation, etc. What we want to obtain is a description of the response's statistical properties, the predicted probability of non-compliance, and an assessment of how each parameter contributes to the variation of the response. The Probability of Non-Compliance (PNC) and the Contribution to Variation are two of the most critical results of a DFSS analysis. From the probabilistic analysis, the response variation can be expressed in the form of statistical parameters, a histogram, or a PDF. If spec limits are defined for the response, they can be placed on the distribution and the PNC computed. Next, the variation of each system parameter can be correlated with the variation of the response to provide contribution information. This information is used in the allocation of tolerances and the definition of process capabilities.
DFSS for Software-Intensive Systems Neal Mackertich & Trindy LeForge Raytheon Company
Software Project Challenges
Right Architecture?
The Voice of the Customer
Resource Availability
Defect Management
Product Cost
What to Test?
Schedule Pressure
Technical Performance
Process Integration
IPDS provides an integrated set of best practices for the entire product development life cycle through a just-in-time tailoring process.
R6s is a business strategy for process improvement. It guides us to use CMMI and IPDS as tools to deliver value to customers and to integrate industry best practices.
CMMI provides guidance for creating, measuring, managing, and improving specific processes.
Each plays an integral role in the success of programs, projects, and organizations. Programs integrate R6Sigma, IPDS, and CMMI into their plans.
Design for Six Sigma (DFSS)
Design for Six Sigma is a methodology used within IPDP to predict, manage, and improve Producibility, Affordability, and Robust Performance for the benefit of our customers. In essence, DFSS is the application of R6s in product optimization.
Affordability | Producibility | Robust Performance
DFSS Enablement of Software Project Performance
DFSS is the application of Six Sigma in product optimization.
[Figure: IPDS life-cycle diagram showing Business Decision Gates 1-4 and Program Execution Gates 5-11, annotated with the DFSS methods applied at each stage.]
IPDS phases: 1 - Capture/Proposal Development; 2 - Project Planning, Management and Control; 3 - Requirements and Architecture Development; 4 - Product Design and Development; 5 - System Integration, Verification and Validation; 6 - Production and Deployment Planning; 7 - Operations and Support.
Gate reviews: Bid/Proposal Review; Start-Up Review; Internal System Functional Review; Internal Preliminary Design Review; Internal Critical Design Review; Internal Test/Ship Readiness Review; Internal Production Readiness Review.
DFSS methods: VOC; Architectural Evaluation; Modeling & Analysis; Performance Modeling; Defect Prediction & Prevention; Critical Chain Project Management; Test Optimization.
SLAMRAAM provides low-cost design alternatives:
Baseline elements with limited reuse and interoperability
Baseline elements that improve interoperability
Availability of SoS elements has significant impacts on schedule
[Figure: QFD House of Quality elements – Voice of the Customer, Priorities, Competitive Evaluation, HOWs, Direction of Improvement, Relationship Matrix, Correlation Matrix, HOW MUCHes, Rankings/Priorities.]
“New & Nifty” Pen – QFD Example: Design a Pen for Students Ages 5-18
“If the system is going to be built by more than one person - and these days, what system isn’t - it is the architecture that lets them communicate and negotiate work assignments. If the requirements include goals for performance, security, reliability, or maintainability, then architecture is the design artifact that first expresses how the system will be built to achieve those goals.”
“Evaluating Software Architectures: Methods & Case Studies”, SEI Series in Software Engineering (Clements, Kazman & Klein)
“… it is possible to evaluate an architecture and its design decisions, to analyze decisions, in the context of the goals and requirements that are levied on systems that will be built from it.”
ATAM is a structured process in which the right questions are asked early to:
Discover risks -- alternatives that might create future problems
Discover sensitivity points -- alternatives for which a slight change makes a significant difference
Discover tradeoffs -- decisions affecting more than one attribute
DD(X) customer observation: “The ATAM report represents a very good application of the ATAM, given the relative newness of the methodology and without assistance from ATAM expertise. Many new risks were identified and documented and many more existing risks in the risk register were corroborated as a result. A variety of scenarios and threads were generated during the ATAM and over 130 sensitivity points were identified and mapped to risks. As a result of the ATAM tradeoff and sensitivity analyses, (there were) nine identified significant findings.”
Critical Chain Multi-Project Management was first developed by Dr. Eliyahu M. Goldratt, founder of the Theory of Constraints
An established and practical project management technique that companies have successfully used to reduce development cycle time
Based on statistical methodologies
Being used at Raytheon and across industry
Increasingly accepted by our customers
Multi-Tasking is an Insidious Generator of… Waste
Why don’t we meet our schedule commitments?!?
When (if) we look at our “Resource Allocation”, we typically discover high degrees of multi-tasking.
[Figure: timelines for Resource 1, Resource 2, and Resource 3 showing interleaved task fragments over time.]
Effect of Multi-Tasking
Expectation: complete Task A, then Task B, then Task C.
What happens: fragments of A, B, and C are interleaved (A, B, C, A, B, C), and productivity is lost to context switching.
What should happen: priorities are established and followed, so Task A, Task B, and Task C are each completed in turn.
A prediction of the Response Statistical Properties
A prediction of the Probability of Non-Compliance
An assessment of the Contribution of Parameter Variation to the Response Variation
Results from Crystal Ball Monte Carlo Software
Response: Y = f(A, B, C, D, E, F, …, M)
Outputs: PDF(Y), statistical parameters of Y, and Prob(LL < Y < UL).
Contribution to variation, measured by rank correlation:
G - Sys. Losses: -.45
A - Pavg: .35
D - Ant. Eff.: .35
F - Integ. Eff.: .34
J - Rec. BW: -.34
B - Ant. Gain: .29
H - Tgt RCS: .23
C - Ant. Aperture: .21
K - Pulse Width: -.19
M - Rec. Out SNR: -.15
I - Noise Figure: -.12
L - Rep. Freq.: -.03
Within a single development program it is difficult at best to:
Analyze the data
Determine root cause
The data does not give the complete picture until the development is over.
An effective defect prediction model provides the ability to mix actual and predictive performance to show what the picture will look like before you get there.
This enables us to make better decisions and take more effective preventive action.
Defect Density Predictive Model Approach

Model (previous program/release):
Start from the previous program/release standard defect containment chart (model raw defect counts).
Prorate the model raw defect counts by the ratio SLOC_New / SLOC_Model to obtain projected raw defect counts for the new program.

New program defect containment:
Current raw defect counts are tracked by stage detected, with percent complete per stage:
A - System Requirements Analysis
B - Software Requirements Analysis
C - Preliminary Design
D - Detailed Design
E - Code
F - Unit Test
G - Software Integration
H - Software Qualification Test
I - System Integration
J - System Test
K - Post System Test
Current SLOC estimate = XXXXXXX (required for model calibration; should match the tracking book).
Percent complete is supplied as needed for analysis (at minimum monthly) and should match EVMS.

The model assumption is that if a stage is XX% complete, then we have detected XX% of the defects “normally” found in that stage. Percent complete is used as a multiplier, (1 - %complete), on the raw defect counts detected in each stage to estimate the defect counts not yet detected.

New program projected defect containment: the sum of current raw defect counts plus raw defects not yet detected. Model charts are normalized by size.
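The two core calculations above can be sketched in a few lines. This is an illustrative reading of the model, not the actual tool: the stage names, counts, and SLOC figures are made up, and the projection uses the model assumption that a stage p% complete has surfaced p% of the defects normally found there.

```python
def prorate(model_counts, sloc_model, sloc_new):
    """Scale the previous program's per-stage defect counts by relative size."""
    ratio = sloc_new / sloc_model
    return {stage: count * ratio for stage, count in model_counts.items()}

def project_totals(detected, pct_complete):
    """If a stage is p complete (0 < p <= 1), it has surfaced p of its
    defects, so the projected stage total is detected / p."""
    return {stage: detected[stage] / pct_complete[stage]
            for stage in detected if pct_complete[stage] > 0}

# Illustrative numbers only (not from the slides).
model = {"Code": 120, "Unit Test": 80}   # previous release, 100 KSLOC
prorated = prorate(model, sloc_model=100_000, sloc_new=50_000)
projected = project_totals(detected={"Code": 45, "Unit Test": 10},
                           pct_complete={"Code": 0.5, "Unit Test": 0.25})
print(prorated)   # {'Code': 60.0, 'Unit Test': 40.0}
print(projected)  # {'Code': 90.0, 'Unit Test': 40.0}
```

Mixing the prorated model counts with projections from actual counts is what lets the chart show the likely end state before development is over.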
Defect Density By Stage Detected and Originated
Defect density by stage uses total defects and source lines of code counts to assess the efficiency of catching and containing software defects in the various lifecycle stages.
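The metric itself is simple: defects in a stage divided by size in KSLOC. A minimal sketch, with invented counts for a hypothetical 50 KSLOC build:

```python
def defect_density(defects_by_stage, sloc):
    """Defects per thousand source lines of code (KSLOC), by stage."""
    ksloc = sloc / 1000
    return {stage: n / ksloc for stage, n in defects_by_stage.items()}

density = defect_density({"Code": 150, "Unit Test": 75}, sloc=50_000)
print(density)  # {'Code': 3.0, 'Unit Test': 1.5}
```

Normalizing by size is what makes densities comparable across programs and releases of different scale.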
Defect Density – Code Inspection Analysis Tips
Tips: The first level of understanding in analyzing Defect Density at Code Inspection is:
Analysis of each data point resulting from code peer review using the Code Inspection Calculator.
When the number of defects is between the control limits, fix the findings and pass to the next stage. When the number of defects is below the lower limit, the unit(s) may need more inspection. When the number of defects is above the upper limit, the unit(s) may need rework and a repeated inspection.
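These three decision rules can be sketched as a simple disposition function; the control limits here are placeholder values, whereas in practice they would come from the Code Inspection Calculator:

```python
def inspection_disposition(defects, lcl, ucl):
    """Map a code-inspection defect count to a recommended action
    given lower/upper control limits."""
    if defects < lcl:
        return "below LCL: unit may need more inspection"
    if defects > ucl:
        return "above UCL: unit may need rework and re-inspection"
    return "within limits: fix findings and pass to next stage"

# Placeholder limits for illustration.
print(inspection_disposition(2, lcl=4, ucl=12))
print(inspection_disposition(7, lcl=4, ucl=12))
print(inspection_disposition(15, lcl=4, ucl=12))
```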
N-way combinations provide reasonable, statistically balanced coverage that can be augmented with domain expertise.
CDM is more realistic than full/fractional experimental designs:
Compatible with constraints
Compatible with factors at different levels
Can account for previous tests
Drastically reduces the total number of test cases when compared to all combinations.
Since generating test cases is very quick and simple, there are no major barriers to using CDM as part of the testing process.
Can be used in almost all phases of systems & subsystem testing.
IIS Satellite Ground Systems CDM application resulted in significant schedule (68%) and cost savings (67%).
A Testing Scenario
Definition: Black-box testing geared to the functional requirements of an application.
Example Application: A graphics manipulation function that converts from one format to another.
Inputs:
Source Format (GIF, JPG, TIFF, PNG)
Destination Format (GIF, JPG, TIFF, PNG)
Size (Small, Med, Large)
# Colors (4 bit, 8 bit, 16 bit, 24 bit)
Destination (local drive, network drive)
Windows Version (95, 98, NT, 2000, Me)
Outputs: Correct conversion (True or False)
Constraints: If the Destination Format is GIF, then # Colors cannot be 16 bit or 24 bit.
Test case counts: all combinations = 1,920 test cases; 2-way combinations = 22 test cases.
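The combinatorial explosion in this scenario is easy to verify directly. The sketch below enumerates the exhaustive test set for the factors above and applies the GIF/color constraint (a pairwise generator, as CDM tools provide, would then reduce the valid space to the ~22 cases cited):

```python
from itertools import product

# Input factors from the scenario above.
factors = {
    "source":  ["GIF", "JPG", "TIFF", "PNG"],
    "dest":    ["GIF", "JPG", "TIFF", "PNG"],
    "size":    ["Small", "Med", "Large"],
    "colors":  ["4 bit", "8 bit", "16 bit", "24 bit"],
    "drive":   ["Local", "Network"],
    "windows": ["95", "98", "NT", "2000", "Me"],
}

def valid(case):
    """Constraint: GIF output cannot use 16-bit or 24-bit color."""
    return not (case["dest"] == "GIF" and case["colors"] in ("16 bit", "24 bit"))

all_cases = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(len(all_cases))                    # 1920 exhaustive test cases
print(sum(valid(c) for c in all_cases))  # 1680 satisfy the constraint
```

Even after the constraint removes 240 infeasible cases, exhaustive testing remains impractical, which is the gap pairwise selection closes.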