NASA Systems Engineering Handbook Rev1
This handbook consists of six core chapters:

(1) systems engineering fundamentals discussion,
(2) the NASA program/project life cycles,
(3) systems engineering processes to get from a concept to a design,
(4) systems engineering processes to get from a design to a final product,
(5) crosscutting management processes in systems engineering, and
(6) special topics relative to systems engineering.

Transcript

  • 1. NASA/SP-2007-6105 Rev1: NASA Systems Engineering Handbook
  • 2. NASA STI Program … in Profile
    Since its founding, the National Aeronautics and Space Administration (NASA) has been dedicated to the advancement of aeronautics and space science. The NASA Scientific and Technical Information (STI) program plays a key part in helping NASA maintain this important role.
    The NASA STI program operates under the auspices of the Agency Chief Information Officer. It collects, organizes, provides for archiving, and disseminates NASA's STI. The NASA STI program provides access to the NASA Aeronautics and Space Database and its public interface, the NASA technical report server, thus providing one of the largest collections of aeronautical and space science STI in the world. Results are published in both non-NASA channels and by NASA in the NASA STI report series, which include the following report types:
    - Technical Publication: Reports of completed research or a major significant phase of research that present the results of NASA programs and include extensive data or theoretical analysis. Includes compilations of significant scientific and technical data and information deemed to be of continuing reference value. NASA counterpart of peer-reviewed formal professional papers but has less stringent limitations on manuscript length and extent of graphic presentations.
    - Technical Memorandum: Scientific and technical findings that are preliminary or of specialized interest, e.g., quick release reports, working papers, and bibliographies that contain minimal annotation. Does not contain extensive analysis.
    - Contractor Report: Scientific and technical findings by NASA-sponsored contractors and grantees.
    - Conference Publication: Collected papers from scientific and technical conferences, symposia, seminars, or other meetings sponsored or co-sponsored by NASA.
    - Special Publication: Scientific, technical, or historical information from NASA programs, projects, and missions, often concerned with subjects having substantial public interest.
    - Technical Translation: English-language translations of foreign scientific and technical material pertinent to NASA's mission.
    Specialized services also include creating custom thesauri, building customized databases, and organizing and publishing research results.
    For more information about the NASA STI program, see the following:
    - Access the NASA STI program home page at www.sti.nasa.gov
    - E-mail your question via the Internet to help@sti.nasa.gov
    - Fax your question to the NASA STI help desk at 301-621-0134
    - Phone the NASA STI help desk at 301-621-0390
    - Write to: NASA STI Help Desk, NASA Center for AeroSpace Information, 7115 Standard Drive, Hanover, MD 21076-1320
  • 3. NASA/SP-2007-6105 Rev1, Systems Engineering Handbook. National Aeronautics and Space Administration, NASA Headquarters, Washington, D.C. 20546, December 2007
  • 4. To request print or electronic copies or provide comments, contact the Office of the Chief Engineer via SP6105rev1SEHandbook@nasa.gov. Electronic copies are also available from the NASA Center for AeroSpace Information, 7115 Standard Drive, Hanover, MD 21076-1320, at http://ntrs.nasa.gov/
  • 5. Table of Contents
    Preface
    Acknowledgments
    1.0 Introduction
      1.1 Purpose
      1.2 Scope and Depth
    2.0 Fundamentals of Systems Engineering
      2.1 The Common Technical Processes and the SE Engine
      2.2 An Overview of the SE Engine by Project Phase
      2.3 Example of Using the SE Engine
        2.3.1 Detailed Example
        2.3.2 Example Premise
          2.3.2.1 Example Phase A System Design Passes
          2.3.2.2 Example Product Realization Passes
          2.3.2.3 Example Use of the SE Engine in Phases B Through D
          2.3.2.4 Example Use of the SE Engine in Phases E and F
      2.4 Distinctions Between Product Verification and Product Validation
      2.5 Cost Aspect of Systems Engineering
    3.0 NASA Program/Project Life Cycle
      3.1 Program Formulation
      3.2 Program Implementation
      3.3 Project Pre-Phase A: Concept Studies
      3.4 Project Phase A: Concept and Technology Development
      3.5 Project Phase B: Preliminary Design and Technology Completion
      3.6 Project Phase C: Final Design and Fabrication
      3.7 Project Phase D: System Assembly, Integration and Test, Launch
      3.8 Project Phase E: Operations and Sustainment
      3.9 Project Phase F: Closeout
      3.10 Funding: The Budget Cycle
    4.0 System Design
      4.1 Stakeholder Expectations Definition
        4.1.1 Process Description
          4.1.1.1 Inputs
          4.1.1.2 Process Activities
          4.1.1.3 Outputs
        4.1.2 Stakeholder Expectations Definition Guidance
          4.1.2.1 Concept of Operations
      4.2 Technical Requirements Definition
        4.2.1 Process Description
          4.2.1.1 Inputs
          4.2.1.2 Process Activities
          4.2.1.3 Outputs
        4.2.2 Technical Requirements Definition Guidance
          4.2.2.1 Types of Requirements
          4.2.2.2 Human Factors Engineering Requirements
          4.2.2.3 Requirements Decomposition, Allocation, and Validation
          4.2.2.4 Capturing Requirements and the Requirements Database
          4.2.2.5 Technical Standards
      4.3 Logical Decomposition
        4.3.1 Process Description
          4.3.1.1 Inputs
          4.3.1.2 Process Activities
          4.3.1.3 Outputs
        4.3.2 Logical Decomposition Guidance
          4.3.2.1 Product Breakdown Structure
          4.3.2.2 Functional Analysis Techniques
      4.4 Design Solution Definition
        4.4.1 Process Description
          4.4.1.1 Inputs
          4.4.1.2 Process Activities
          4.4.1.3 Outputs
        4.4.2 Design Solution Definition Guidance
          4.4.2.1 Technology Assessment
          4.4.2.2 Integrating Engineering Specialties into the Systems Engineering Process
    5.0 Product Realization
      5.1 Product Implementation
        5.1.1 Process Description
          5.1.1.1 Inputs
          5.1.1.2 Process Activities
          5.1.1.3 Outputs
        5.1.2 Product Implementation Guidance
          5.1.2.1 Buying Off-the-Shelf Products
          5.1.2.2 Heritage
      5.2 Product Integration
        5.2.1 Process Description
          5.2.1.1 Inputs
          5.2.1.2 Process Activities
          5.2.1.3 Outputs
        5.2.2 Product Integration Guidance
          5.2.2.1 Integration Strategy
          5.2.2.2 Relationship to Product Implementation
          5.2.2.3 Product/Interface Integration Support
          5.2.2.4 Product Integration of the Design Solution
          5.2.2.5 Interface Management
          5.2.2.6 Compatibility Analysis
          5.2.2.7 Interface Management Tasks
      5.3 Product Verification
        5.3.1 Process Description
          5.3.1.1 Inputs
          5.3.1.2 Process Activities
          5.3.1.3 Outputs
        5.3.2 Product Verification Guidance
          5.3.2.1 Verification Program
          5.3.2.2 Verification in the Life Cycle
          5.3.2.3 Verification Procedures
          5.3.2.4 Verification Reports
          5.3.2.5 End-to-End System Testing
          5.3.2.6 Modeling and Simulation
          5.3.2.7 Hardware-in-the-Loop
      5.4 Product Validation
        5.4.1 Process Description
          5.4.1.1 Inputs
          5.4.1.2 Process Activities
          5.4.1.3 Outputs
        5.4.2 Product Validation Guidance
          5.4.2.1 Modeling and Simulation
          5.4.2.2 Software
      5.5 Product Transition
        5.5.1 Process Description
          5.5.1.1 Inputs
          5.5.1.2 Process Activities
          5.5.1.3 Outputs
        5.5.2 Product Transition Guidance
          5.5.2.1 Additional Product Transition Input Considerations
          5.5.2.2 After Product Transition to the End User—What Next?
    6.0 Crosscutting Technical Management
      6.1 Technical Planning
        6.1.1 Process Description
          6.1.1.1 Inputs
          6.1.1.2 Process Activities
          6.1.1.3 Outputs
        6.1.2 Technical Planning Guidance
          6.1.2.1 Work Breakdown Structure
          6.1.2.2 Cost Definition and Modeling
          6.1.2.3 Lessons Learned
      6.2 Requirements Management
        6.2.1 Process Description
          6.2.1.1 Inputs
          6.2.1.2 Process Activities
          6.2.1.3 Outputs
        6.2.2 Requirements Management Guidance
          6.2.2.1 Requirements Management Plan
          6.2.2.2 Requirements Management Tools
      6.3 Interface Management
        6.3.1 Process Description
          6.3.1.1 Inputs
          6.3.1.2 Process Activities
          6.3.1.3 Outputs
        6.3.2 Interface Management Guidance
          6.3.2.1 Interface Requirements Document
          6.3.2.2 Interface Control Document or Interface Control Drawing
          6.3.2.3 Interface Definition Document
          6.3.2.4 Interface Control Plan
      6.4 Technical Risk Management
        6.4.1 Process Description
          6.4.1.1 Inputs
          6.4.1.2 Process Activities
          6.4.1.3 Outputs
        6.4.2 Technical Risk Management Guidance
          6.4.2.1 Role of Continuous Risk Management in Technical Risk Management
          6.4.2.2 The Interface Between CRM and Risk-Informed Decision Analysis
          6.4.2.3 Selection and Application of Appropriate Risk Methods
      6.5 Configuration Management
        6.5.1 Process Description
          6.5.1.1 Inputs
          6.5.1.2 Process Activities
          6.5.1.3 Outputs
        6.5.2 CM Guidance
          6.5.2.1 What Is the Impact of Not Doing CM?
          6.5.2.2 When Is It Acceptable to Use Redline Drawings?
      6.6 Technical Data Management
        6.6.1 Process Description
          6.6.1.1 Inputs
          6.6.1.2 Process Activities
          6.6.1.3 Outputs
        6.6.2 Technical Data Management Guidance
          6.6.2.1 Data Security and ITAR
      6.7 Technical Assessment
        6.7.1 Process Description
          6.7.1.1 Inputs
          6.7.1.2 Process Activities
          6.7.1.3 Outputs
        6.7.2 Technical Assessment Guidance
          6.7.2.1 Reviews, Audits, and Key Decision Points
          6.7.2.2 Status Reporting and Assessment
      6.8 Decision Analysis
        6.8.1 Process Description
          6.8.1.1 Inputs
          6.8.1.2 Process Activities
          6.8.1.3 Outputs
        6.8.2 Decision Analysis Guidance
          6.8.2.1 Systems Analysis, Simulation, and Performance
          6.8.2.2 Trade Studies
          6.8.2.3 Cost-Benefit Analysis
          6.8.2.4 Influence Diagrams
          6.8.2.5 Decision Trees
          6.8.2.6 Multi-Criteria Decision Analysis
          6.8.2.7 Utility Analysis
          6.8.2.8 Risk-Informed Decision Analysis Process Example
    7.0 Special Topics
      7.1 Engineering with Contracts
        7.1.1 Introduction, Purpose, and Scope
        7.1.2 Acquisition Strategy
          7.1.2.1 Develop an Acquisition Strategy
          7.1.2.2 Acquisition Life Cycle
          7.1.2.3 NASA Responsibility for Systems Engineering
        7.1.3 Prior to Contract Award
          7.1.3.1 Acquisition Planning
          7.1.3.2 Develop the Statement of Work
          7.1.3.3 Task Order Contracts
          7.1.3.4 Surveillance Plan
          7.1.3.5 Writing Proposal Instructions and Evaluation Criteria
          7.1.3.6 Selection of COTS Products
          7.1.3.7 Acquisition-Unique Risks
        7.1.4 During Contract Performance
          7.1.4.1 Performing Technical Surveillance
          7.1.4.2 Evaluating Work Products
          7.1.4.3 Issues with Contract-Subcontract Arrangements
        7.1.5 Contract Completion
          7.1.5.1 Acceptance of Final Deliverables
          7.1.5.2 Transition Management
          7.1.5.3 Transition to Operations and Support
          7.1.5.4 Decommissioning and Disposal
          7.1.5.5 Final Evaluation of Contractor Performance
      7.2 Integrated Design Facilities
        7.2.1 Introduction
        7.2.2 CACE Overview and Importance
        7.2.3 CACE Purpose and Benefits
        7.2.4 CACE Staffing
        7.2.5 CACE Process
          7.2.5.1 Planning and Preparation
          7.2.5.2 Activity Execution Phase
          7.2.5.3 Activity Wrap-Up
        7.2.6 CACE Engineering Tools and Techniques
        7.2.7 CACE Facility, Information Infrastructure, and Staffing
          7.2.7.1 Facility
          7.2.7.2 Information Infrastructure
          7.2.7.3 Facility Support Staff Responsibilities
        7.2.8 CACE Products
        7.2.9 CACE Best Practices
          7.2.9.1 People
          7.2.9.2 Process and Tools
          7.2.9.3 Facility
      7.3 Selecting Engineering Design Tools
        7.3.1 Program and Project Considerations
        7.3.2 Policy and Processes
        7.3.3 Collaboration
        7.3.4 Design Standards
        7.3.5 Existing IT Architecture
        7.3.6 Tool Interfaces
        7.3.7 Interoperability and Data Formats
        7.3.8 Backward Compatibility
        7.3.9 Platform
        7.3.10 Tool Configuration Control
        7.3.11 Security/Access Control
        7.3.12 Training
        7.3.13 Licenses
        7.3.14 Stability of Vendor and Customer Support
      7.4 Human Factors Engineering
        7.4.1 Basic HF Model
        7.4.2 HF Analysis and Evaluation Techniques
      7.5 Environmental, Nuclear Safety, Planetary Protection, and Asset Protection Policy Compliance
        7.5.1 NEPA and EO 12114
          7.5.1.1 National Environmental Policy Act
          7.5.1.2 EO 12114 Environmental Effects Abroad of Major Federal Actions
        7.5.2 PD/NSC-25
        7.5.3 Planetary Protection
        7.5.4 Space Asset Protection
          7.5.4.1 Protection Policy
          7.5.4.2 Goal
          7.5.4.3 Scoping
          7.5.4.4 Protection Planning
      7.6 Use of Metric System
    Appendix A: Acronyms
    Appendix B: Glossary
    Appendix C: How to Write a Good Requirement
    Appendix D: Requirements Verification Matrix
    Appendix E: Creating the Validation Plan (Including Validation Requirements Matrix)
    Appendix F: Functional, Timing, and State Analysis
    Appendix G: Technology Assessment/Insertion
    Appendix H: Integration Plan Outline
    Appendix I: Verification and Validation Plan Sample Outline
    Appendix J: SEMP Content Outline
    Appendix K: Plans
    Appendix L: Interface Requirements Document Outline
    Appendix M: CM Plan Outline
    Appendix N: Guidance on Technical Peer Reviews/Inspections
    Appendix O: Tradeoff Examples
    Appendix P: SOW Review Checklist
    Appendix Q: Project Protection Plan Outline
    References
    Bibliography
    Index
  • 11. Figures
    2.0-1 SE in context of overall project management
    2.1-1 The systems engineering engine
    2.2-1 A miniaturized conceptualization of the poster-size NASA project life-cycle process flow for flight and ground systems accompanying this handbook
    2.3-1 SE engine tracking icon
    2.3-2 Product hierarchy, tier 1: first pass through the SE engine
    2.3-3 Product hierarchy, tier 2: external tank
    2.3-4 Product hierarchy, tier 2: orbiter
    2.3-5 Product hierarchy, tier 3: avionics system
    2.3-6 Product hierarchy: complete pass through system design processes side of the SE engine
    2.3-7 Model of typical activities during operational phase (Phase E) of a product
    2.3-8 New products or upgrades reentering the SE engine
    2.5-1 The enveloping surface of nondominated designs
    2.5-2 Estimates of outcomes to be obtained from several design concepts including uncertainty
    3.0-1 NASA program life cycle
    3.0-2 NASA project life cycle
    3.10-1 Typical NASA budget cycle
    4.0-1 Interrelationships among the system design processes
    4.1-1 Stakeholder Expectations Definition Process
    4.1-2 Product flow for stakeholder expectations
    4.1-3 Typical ConOps development for a science mission
    4.1-4 Example of an associated end-to-end operational architecture
    4.1-5a Example of a lunar sortie timeline developed early in the life cycle
    4.1-5b Example of a lunar sortie DRM early in the life cycle
    4.1-6 Example of a more detailed, integrated timeline later in the life cycle for a science mission
    4.2-1 Technical Requirements Definition Process
    4.2-2 Characteristics of functional, operational, reliability, safety, and specialty requirements
    4.2-3 The flowdown of requirements
    4.2-4 Allocation and flowdown of science pointing requirements
    4.3-1 Logical Decomposition Process
    4.3-2 Example of a PBS
    4.3-3 Example of a functional flow block diagram
    4.3-4 Example of an N2 diagram
    4.4-1 Design Solution Definition Process
    4.4-2 The doctrine of successive refinement
    4.4-3 A quantitative objective function, dependent on life-cycle cost and all aspects of effectiveness
    5.0-1 Product realization
    5.1-1 Product Implementation Process
    5.2-1 Product Integration Process
    5.3-1 Product Verification Process
    5.3-2 Bottom-up realization process
    5.3-3 Example of end-to-end data flow for a scientific satellite mission
    5.4-1 Product Validation Process
    5.5-1 Product Transition Process
    6.1-1 Technical Planning Process
    6.1-2 Activity-on-arrow and precedence diagrams for network schedules
    6.1-3 Gantt chart
    6.1-4 Relationship between a system, a PBS, and a WBS
    6.1-5 Examples of WBS development errors
    6.2-1 Requirements Management Process
    6.3-1 Interface Management Process
    6.4-1 Technical Risk Management Process
    6.4-2 Scenario-based modeling of hazards
    6.4-3 Risk as a set of triplets
    6.4-4 Continuous risk management
    6.4-5 The interface between CRM and risk-informed decision analysis
    6.4-6 Risk analysis of decision alternatives
    6.4-7 Risk matrix
    6.4-8 Example of a fault tree
    6.4-9 Deliberation
    6.4-10 Performance monitoring and control of deviations
    6.4-11 Margin management method
    6.5-1 CM Process
    6.5-2 Five elements of configuration management
    6.5-3 Evolution of technical baseline
    6.5-4 Typical change control process
    6.6-1 Technical Data Management Process
    6.7-1 Technical Assessment Process
    6.7-2 Planning and status reporting feedback loop
    6.7-3 Cost and schedule variances
    6.7-4 Relationships of MOEs, MOPs, and TPMs
    6.7-5 Use of the planned profile method for the weight TPM with rebaseline in Chandra Project
    6.7-6 Use of the margin management method for the mass TPM in Sojourner
    6.8-1 Decision Analysis Process
    6.8-2 Example of a decision matrix
    6.8-3 Systems analysis across the life cycle
    6.8-4 Simulation model analysis techniques
    6.8-5 Trade study process
    6.8-6 Influence diagrams
    6.8-7 Decision tree
    6.8-8 Utility function for a “volume” performance measure
    6.8-9 Risk-informed Decision Analysis Process
    6.8-10 Example of an objectives hierarchy
    7.1-1 Acquisition life cycle
    7.1-2 Contract requirements development process
    7.2-1 CACE people/process/tools/facility paradigm
    7.4-1 Human factors interaction model
    7.4-2 HF engineering process and its links to the NASA program/project life cycle
    F-1 FFBD flowdown
    F-2 FFBD: example 1
    F-3 FFBD showing additional control constructs: example 2
    F-4 Enhanced FFBD: example 3
    F-5 Requirements allocation sheet
    F-6 N2 diagram for orbital equipment
    F-7 Timing diagram example
    F-8 Slew command status state diagram
    G-1 PBS example
    G-2 Technology assessment process
    G-3 Architectural studies and technology development
    G-4 Technology readiness levels
    G-5 The TMA thought process
    G-6 TRL assessment matrix
    N-1 The peer review/inspection process
  • 13. Table of Contents N-2 Peer reviews/inspections quick reference guide ............................................................................................... 315Tables 2.3-1 Project Life-Cycle Phases ....................................................................................................................................... 7 4.1-1 Typical Operational Phases for a NASA Mission ............................................................................................. 39 4.2-1 Benefits of Well-Written Requirements ............................................................................................................. 42 4.2-2 Requirements Metadata ....................................................................................................................................... 48 4.4-1 ILS Technical Disciplines ...................................................................................................................................... 66 6.6-1 Technical Data Tasks .......................................................................................................................................... 163 6.7-1 Program Technical Reviews ............................................................................................................................... 170 6.7-2 P/SRR Entrance and Success Criteria ............................................................................................................... 171 6.7-3 P/SDR Entrance and Success Criteria .............................................................................................................. 172 6.7-4 MCR Entrance and Success Criteria ................................................................................................................. 173 6.7-5 SRR Entrance and Success Criteria ................................................................................................................... 174 6.7-6 MDR Entrance and Success Criteria ................................................................................................................ 175 6.7-7 SDR Entrance and Success Criteria .................................................................................................................. 176 6.7-8 PDR Entrance and Success Criteria .................................................................................................................. 177 6.7-9 CDR Entrance and Success Criteria ................................................................................................................. 178 6.7-10 PRR Entrance and Success Criteria ................................................................................................................ 179 6.7-11 SIR Entrance and Success Criteria .................................................................................................................. 180 6.7-12 TRR Entrance and Success Criteria ................................................................................................................ 181 6.7-13 SAR Entrance and Success Criteria ................................................................................................................ 182 6.7-14 ORR Entrance and Success Criteria .............................................................................................................. 183 6.7-15 FRR Entrance and Success Criteria ............................................................................................................... 
184 6.7-16 PLAR Entrance and Success Criteria ............................................................................................................ 185 6.7-17 CERR Entrance and Success Criteria ............................................................................................................. 186 6.7-18 PFAR Entrance and Success Criteria .............................................................................................................. 186 6.7-19 DR Entrance and Success Criteria .................................................................................................................. 187 6.7-20 Functional and Physical Configuration Audits ............................................................................................ 189 6.7-21 Systems Engineering Process Metrics ............................................................................................................ 196 6.8-1 Consequence Table ............................................................................................................................................. 199 6.8-2 Typical Information to Capture in a Decision Report .................................................................................. 202 7.1-1 Applying the Technical Processes on Contract ............................................................................................... 220 7.1-2 Steps in the Requirements Development Process .......................................................................................... 224 7.1-3 Proposal Evaluation Criteria ............................................................................................................................. 227 7.1-4 Risks in Acquisition ............................................................................................................................................ 228 7.1-5 Typical Work Product Documents ................................................................................................................... 230 7.1-6 Contract-Subcontract Issues.............................................................................................................................. 231 7.4-1 Human and Organizational Analysis Techniques ......................................................................................... 249 7.5-1 Planetary Protection Mission Categories ......................................................................................................... 259 7.5-2 Summarized Planetary Protection Requirements .......................................................................................... 259 D-1 Requirements Verification Matrix ...................................................................................................................... 283 E-1 Validation Requirements Matrix ......................................................................................................................... 284 G-1 Products Provided by the TA as a Function of Program/Project Phase ........................................................ 294 H-1 Integration Plan Contents .................................................................................................................................... 300 M-1 CM Plan Outline .................................................................................................................................................. 
311 O-1 Typical Tradeoffs for Space Systems ................................................................................................................... 316 O-2 Typical Tradeoffs in the Acquisition Process..................................................................................................... 316 O-3 Typical Tradeoffs Throughout the Project Life Cycle ....................................................................................... 316 NASA Systems Engineering Handbook  xi
  • 14. Table of ContentsBoxes System Cost, Effectiveness, and Cost-Effectiveness..................................................................................................... 16 The Systems Engineer’s Dilemma .................................................................................................................................. 17 Program Formulation ...................................................................................................................................................... 21 Program Implementation ................................................................................................................................................ 21 Pre-Phase A: Concept Studies ........................................................................................................................................ 22 Phase A: Concept and Technology Development........................................................................................................ 23 Phase B: Preliminary Design and Technology Completion ....................................................................................... 24 Phase C: Final Design and Fabrication ......................................................................................................................... 26 Phase D: System Assembly, Integration and Test, Launch.......................................................................................... 27 Phase E: Operations and Sustainment........................................................................................................................... 28 Phase F: Closeout ............................................................................................................................................................. 28 System Design Keys ......................................................................................................................................................... 32 Example of Functional and Performance Requirements ............................................................................................ 43 Rationale ............................................................................................................................................................................ 48 DOD Architecture Framework ...................................................................................................................................... 51 Prototypes ......................................................................................................................................................................... 67 Product Realization Keys ................................................................................................................................................ 72 Differences Between Verification and Validation Testing........................................................................................... 83 Types of Testing ................................................................................................................................................................ 85 Types of Verification ........................................................................................................................................................ 86 Differences Between Verification and Validation Testing........................................................................................... 
98 Types of Validation......................................................................................................................................................... 100 Examples of Enabling Products and Support Resources for Preparing to Conduct Validation .......................... 102 Model Verification and Validation ............................................................................................................................... 104 Crosscutting Technical Management Keys ................................................................................................................. 111 Gantt Chart Features...................................................................................................................................................... 117 WBS Hierarchies for Systems ....................................................................................................................................... 126 Definitions ....................................................................................................................................................................... 132 Typical Interface Management Checklist .................................................................................................................... 138 Key Concepts in Technical Risk Management .......................................................................................................... 139 Example Sources of Risk ............................................................................................................................................... 145 Limitations of Risk Matrices ......................................................................................................................................... 145 Types of Configuration Change Management Changes ........................................................................................... 154 Warning Signs/Red Flags (How Do You Know When You’re in Trouble?) ............................................................ 156 Redlines Were identified as One of the Major Causes of the NOAA N-Prime Mishap ....................................... 157 Inappropriate Uses of Technical Data.......................................................................................................................... 160 Data Collection Checklist ............................................................................................................................................. 162 Termination Review ....................................................................................................................................................... 169 Analyzing the Estimate at Completion........................................................................................................................ 191 Examples of Technical Performance Measures ......................................................................................................... 193 An Example of a Trade Tree for a Mars Rover ........................................................................................................... 207 Trade Study Reports....................................................................................................................................................... 208 Solicitations ..................................................................................................................................................................... 
219 Source Evaluation Board ............................................................................................................................................... 226 Context Diagrams .......................................................................................................................................................... 292xii  NASA Systems Engineering Handbook
Preface

Since the writing of NASA/SP-6105 in 1995, systems engineering at the National Aeronautics and Space Administration (NASA), within national and international standard bodies, and as a discipline has undergone rapid evolution. Changes include implementing standards in the International Organization for Standardization (ISO) 9000, the use of Carnegie Mellon Software Engineering Institute’s Capability Maturity Model® Integration (CMMI®) to improve development and delivery of products, and the impacts of mission failures. Lessons learned on systems engineering were documented in reports such as those by the NASA Integrated Action Team (NIAT), the Columbia Accident Investigation Board (CAIB), and the follow-on Diaz Report. Out of these efforts came the NASA Office of the Chief Engineer (OCE) initiative to improve the overall Agency systems engineering infrastructure and capability for the efficient and effective engineering of NASA systems, to produce quality products, and to achieve mission success. In addition, Agency policy and requirements for systems engineering have been established. This handbook update is a part of the OCE-sponsored Agencywide systems engineering initiative.

In 1995, SP-6105 was initially published to bring the fundamental concepts and techniques of systems engineering to NASA personnel in a way that recognizes the nature of NASA systems and the NASA environment. This revision of SP-6105 maintains that original philosophy while updating the Agency’s systems engineering body of knowledge, providing guidance for insight into current best Agency practices, and aligning the handbook with the new Agency systems engineering policy.

The update of this handbook was twofold: a top-down compatibility with higher level Agency policy and a bottom-up infusion of guidance from the NASA practitioners in the field. The approach provided the opportunity to obtain best practices from across NASA and bridge the information to the established NASA systems engineering process. The attempt is to communicate principles of good practice as well as alternative approaches rather than specify a particular way to accomplish a task. The result embodied in this handbook is a top-level implementation approach on the practice of systems engineering unique to NASA. The material for updating this handbook was drawn from many different sources, including NASA procedural requirements, field center systems engineering handbooks and processes, as well as non-NASA systems engineering textbooks and guides.

This handbook consists of six core chapters: (1) systems engineering fundamentals discussion, (2) the NASA program/project life cycles, (3) systems engineering processes to get from a concept to a design, (4) systems engineering processes to get from a design to a final product, (5) crosscutting management processes in systems engineering, and (6) special topics relative to systems engineering. These core chapters are supplemented by appendices that provide outlines, examples, and further information to illustrate topics in the core chapters. The handbook makes extensive use of boxes and figures to define, refine, illustrate, and extend concepts in the core chapters without diverting the reader from the main information.

The handbook provides top-level guidelines for good systems engineering practices; it is not intended in any way to be a directive.

NASA/SP-2007-6105 Rev1 supersedes SP-6105, dated June 1995.
  • 16. AcknowledgmentsPrimary points of contact: Stephen J. Kapurch, Office Amy Epps, NASA/Marshall Space Flight Center of the Chief Engineer, NASA Headquarters, and Neil E. Chester Everline, NASA/Jet Propulsion Laboratory Rainwater, Marshall Space Flight Center. Karen Fashimpaur, Arctic Slope Regional Corporation  ◆The following individuals are recognized as contributing Orlando Figueroa, NASA/Goddard Space Flight Center ■practitioners to the content of this handbook revision: Stanley Fishkind, NASA/Headquarters ■■ Core Team Member (or Representative) from Center, Brad Flick, NASA/Dryden Flight Research Center ■ Directorate, or Office Marton Forkosh, NASA/Glenn Research Center ■◆ Integration Team Member Dan Freund, NASA/Johnson Space Center • Subject Matter Expert Team Champion Greg Galbreath, NASA/Johnson Space Center  Subject Matter Expert Louie Galland, NASA/Langley Research Center Arden Acord, NASA/Jet Propulsion Laboratory  Yuri Gawdiak, NASA/Headquarters ■• Danette Allen, NASA/Langley Research Center  Theresa Gibson, NASA/Glenn Research Center Deborah Amato, NASA/Goddard Space Flight Center  • Ronnie Gillian, NASA/Langley Research Center Jim Andary, NASA/Goddard Space Flight Center  ◆ Julius Giriunas, NASA/Glenn Research Center Tim Beard, NASA/Ames Research Center  Ed Gollop, NASA/Marshall Space Flight Center Jim Bilbro, NASA/Marshall Space Flight Center  Lee Graham, NASA/Johnson Space Center Mike Blythe, NASA/Headquarters ■ Larry Green, NASA/Langley Research Center Linda Bromley, NASA/Johnson Space Center ◆• ■  Owen Greulich, NASA/Headquarters ■Dave Brown, Defense Acquisition University  Ben Hanel, NASA/Ames Research Center John Brunson, NASA/Marshall Space Flight Center  • Gena Henderson, NASA/Kennedy Space Center  •Joe Burt, NASA/Goddard Space Flight Center  Amy Hemken, NASA/Marshall Space Flight Center Glenn Campbell, NASA/Headquarters  Bob Hennessy, NASA/NASA Engineering and SafetyJoyce Carpenter, NASA/Johnson Space Center  • Center Keith Chamberlin, NASA/Goddard Space Flight Center  Ellen Herring, NASA/Goddard Space Flight Center  •Peggy Chun, NASA/NASA Engineering and Safety Renee Hugger, NASA/Johnson Space Center  Center ◆• ■  Brian Hughitt, NASA/Headquarters Cindy Coker, NASA/Marshall Space Flight Center  Eric Isaac, NASA/Goddard Space Flight Center ■Nita Congress, Graphic Designer ◆ Tom Jacks, NASA/Stennis Space Center Catharine Conley, NASA/Headquarters  Ken Johnson, NASA/NASA Engineering and SafetyShelley Delay, NASA/Marshall Space Flight Center  Center Rebecca Deschamp, NASA/Stennis Space Center  Ross Jones, NASA/Jet Propulsion Laboratory ■Homayoon Dezfuli, NASA/Headquarters  • John Juhasz, NASA/Johnson Space Center Olga Dominguez, NASA/Headquarters  Stephen Kapurch, NASA/Headquarters ■◆•Rajiv Doreswamy, NASA/Headquarters ■ Jason Kastner, NASA/Jet Propulsion Laboratory Larry Dyer, NASA/Johnson Space Center  Kristen Kehrer, NASA/Kennedy Space Center Nelson Eng, NASA/Johnson Space Center  John Kelly, NASA/Headquarters Patricia Eng, NASA/Headquarters  Kriss Kennedy, NASA/Johnson Space Center  NASA Systems Engineering Handbook  xv
  • 17. AcknowledgmentsSteven Kennedy, NASA/Kennedy Space Center Tracey Kickbusch, NASA/Kennedy Space Center ■ Steve Robbins, NASA/Marshall Space Flight Center  • Dennis Rohn, NASA/Glenn Research Center  ◆Casey Kirchner, NASA/Stennis Space Center  Jim Rose, NASA/Jet Propulsion Laboratory Kenneth Kumor, NASA/Headquarters Janne Lady, SAITECH/CSC  Arnie Ruskin,* NASA/Jet Propulsion Laboratory  • Harry Ryan, NASA/Stennis Space Center Jerry Lake, Systems Management international Kenneth W. Ledbetter, NASA/Headquarters ■ George Salazar, NASA/Johnson Space Center Steve Leete, NASA/Goddard Space Flight Center  Nina Scheller, NASA/Ames Research Center ■William Lincoln, NASA/Jet Propulsion Laboratory  Pat Schuler, NASA/Langley Research Center  •Dave Littman, NASA/Goddard Space Flight Center  Randy Seftas, NASA/Goddard Space Flight Center John Lucero, NASA/Glenn Research Center  Joey Shelton, NASA/Marshall Space Flight Center  •Paul Luz, NASA/Marshall Space Flight Center  Robert Shishko, NASA/Jet Propulsion Laboratory  ◆Todd MacLeod, NASA/Marshall Space Flight Center  Burton Sigal, NASA/Jet Propulsion Laboratory Roger Mathews, NASA/Kennedy Space Center  • Sandra Smalley, NASA/Headquarters Bryon Maynard, NASA/Stennis Space Center  Richard Smith, NASA/Kennedy Space Center Patrick McDuffee, NASA/Marshall Space Flight Center  John Snoderly, Defense Acquisition University Mark McElyea, NASA/Marshall Space Flight Center  Richard Sorge, NASA/Glenn Research Center William McGovern, Defense Acquisition University ◆ Michael Stamatelatos, NASA/Headquarters ■Colleen McGraw, NASA/Goddard Space Flight Tom Sutliff, NASA/Glenn Research Center  • Center  ◆• Todd Tofil, NASA/Glenn Research Center Melissa McGuire, NASA/Glenn Research Center  John Tinsley, NASA/Headquarters Don Mendoza, NASA/Ames Research Center  Rob Traister, Graphic Designer ◆Leila Meshkat, NASA/Jet Propulsion Laboratory  Clayton Turner, NASA/Langley Research Center ■Elizabeth Messer, NASA/Stennis Space Center  • Paul VanDamme, NASA/Jet Propulsion Laboratory Chuck Miller, NASA/Headquarters  Karen Vaner, NASA/Stennis Space Center Scott Mimbs, NASA/Kennedy Space Center  Lynn Vernon, NASA/Johnson Space Center Steve Newton, NASA/Marshall Space Flight Center Tri Nguyen, NASA/Johnson Space Center  Linda Voss, Technical Writer ◆Chuck Niles, NASA/Langley Research Center  • Britt Walters, NASA/Johnson Space Center ■ Tommy Watts, NASA/Marshall Space Flight Center Cynthia Null, NASA/NASA Engineering and Safety Center  Richard Weinstein, NASA/Headquarters John Olson, NASA/Headquarters  Katie Weiss, NASA/Jet Propulsion Laboratory  •Tim Olson, QIC, Inc.  Martha Wetherholt, NASA/Headquarters Sam Padgett, NASA/Johnson Space Center  Becky Wheeler, NASA/Jet Propulsion Laboratory Christine Powell, NASA/Stennis Space Center ◆• ■ Cathy White, NASA/Marshall Space Flight Center Steve Prahst, NASA/Glenn Research Center  Reed Wilcox, NASA/Jet Propulsion Laboratory Pete Prassinos, NASA/Headquarters ■ Barbara Woolford, NASA/Johnson Space Center  •Mark Prill, NASA/Marshall Space Flight Center  Felicia Wright, NASA/Langley Research Center Neil Rainwater, NASA/Marshall Space Flight Center ■◆ Robert Youngblood, ISL Inc. Ron Ray, NASA/Dryden Flight Research Center  Tom Zang, NASA/Langley Research Center Gary Rawitscher, NASA/Headquarters Joshua Reinert, ISL Inc. Norman Rioux, NASA/Goddard Space Flight Center  *In memory of.xvi  NASA Systems Engineering Handbook
1.0 Introduction

1.1 Purpose

This handbook is intended to provide general guidance and information on systems engineering that will be useful to the NASA community. It provides a generic description of Systems Engineering (SE) as it should be applied throughout NASA. A goal of the handbook is to increase awareness and consistency across the Agency and advance the practice of SE. This handbook provides perspectives relevant to NASA and data particular to NASA.

This handbook should be used as a companion for implementing NPR 7123.1, Systems Engineering Processes and Requirements, as well as the Center-specific handbooks and directives developed for implementing systems engineering at NASA. It provides a companion reference book for the various systems engineering related courses being offered under NASA’s auspices.

1.2 Scope and Depth

The coverage in this handbook is limited to general concepts and generic descriptions of processes, tools, and techniques. It provides information on systems engineering best practices and pitfalls to avoid. There are many Center-specific handbooks and directives as well as textbooks that can be consulted for in-depth tutorials.

This handbook describes systems engineering as it should be applied to the development and implementation of large and small NASA programs and projects. NASA has defined different life cycles that specifically address the major project categories, or product lines, which are: Flight Systems and Ground Support (FS&GS), Research and Technology (R&T), Construction of Facilities (CoF), and Environmental Compliance and Restoration (ECR). The technical content of the handbook provides systems engineering best practices that should be incorporated into all NASA product lines. (Check the NASA On-Line Directives Information System (NODIS) electronic document library for applicable NASA directives on topics such as product lines.) For simplicity, this handbook uses the FS&GS product line as an example. The specifics of FS&GS can be seen in the description of the life cycle and the details of the milestone reviews. Each product line will vary in these two areas; therefore, the reader should refer to the applicable NASA procedural requirements for the specific requirements for their life cycle and reviews. The engineering of NASA systems requires a systematic and disciplined set of processes that are applied recursively and iteratively for the design, development, operation, maintenance, and closeout of systems throughout the life cycle of the programs and projects.

The handbook’s scope properly includes systems engineering functions regardless of whether they are performed by a manager or an engineer, in-house, or by a contractor.
2.0 Fundamentals of Systems Engineering

Systems engineering is a methodical, disciplined approach for the design, realization, technical management, operations, and retirement of a system. A “system” is a construct or collection of different elements that together produce results not obtainable by the elements alone. The elements, or parts, can include people, hardware, software, facilities, policies, and documents; that is, all things required to produce system-level results. The results include system-level qualities, properties, characteristics, functions, behavior, and performance. The value added by the system as a whole, beyond that contributed independently by the parts, is primarily created by the relationship among the parts; that is, how they are interconnected.¹ It is a way of looking at the “big picture” when making technical decisions. It is a way of achieving stakeholder functional, physical, and operational performance requirements in the intended use environment over the planned life of the systems. In other words, systems engineering is a logical way of thinking.

Systems engineering is the art and science of developing an operable system capable of meeting requirements within often opposed constraints. Systems engineering is a holistic, integrative discipline, wherein the contributions of structural engineers, electrical engineers, mechanism designers, power engineers, human factors engineers, and many more disciplines are evaluated and balanced, one against another, to produce a coherent whole that is not dominated by the perspective of a single discipline.²

Systems engineering seeks a safe and balanced design in the face of opposing interests and multiple, sometimes conflicting constraints. The systems engineer must develop the skill and instinct for identifying and focusing efforts on assessments to optimize the overall design and not favor one system/subsystem at the expense of another. The art is in knowing when and where to probe. Personnel with these skills are usually tagged as “systems engineers.” They may have other titles (lead systems engineer, technical manager, chief engineer), but for this document, we will use the term systems engineer.

The exact role and responsibility of the systems engineer may change from project to project depending on the size and complexity of the project and from phase to phase of the life cycle. For large projects, there may be one or more systems engineers. For small projects, sometimes the project manager may perform these practices. But, whoever assumes those responsibilities, the systems engineering functions must be performed. The actual assignment of the roles and responsibilities of the named systems engineer may also therefore vary. The lead systems engineer ensures that the system technically fulfills the defined needs and requirements and that a proper systems engineering approach is being followed. The systems engineer oversees the project’s systems engineering activities as performed by the technical team and directs, communicates, monitors, and coordinates tasks. The systems engineer reviews and evaluates the technical aspects of the project to ensure that the systems/subsystems engineering processes are functioning properly and evolves the system from concept to product. The entire technical team is involved in the systems engineering process.

The systems engineer will usually play the key role in leading the development of the system architecture, defining and allocating requirements, evaluating design tradeoffs, balancing technical risk between systems, defining and assessing interfaces, providing oversight of verification and validation activities, as well as many other tasks. The systems engineer will usually have the prime responsibility in developing many of the project documents, including the Systems Engineering Management Plan (SEMP), requirements/specification documents, verification and validation documents, certification packages, and other technical documentation.

¹ Rechtin, Systems Architecting of Organizations: Why Eagles Can’t Swim.
² Comments on systems engineering throughout Chapter 2.0 are extracted from the speech “System Engineering and the Two Cultures of Engineering” by Michael D. Griffin, NASA Administrator.
In summary, the systems engineer is skilled in the art and science of balancing organizational and technical interactions in complex systems. However, since the entire team is involved in the systems engineering approach, in some ways everyone is a systems engineer. Systems engineering is about tradeoffs and compromises, about generalists rather than specialists. Systems engineering is about looking at the “big picture” and not only ensuring that they get the design right (meet requirements) but that they get the right design.

To explore this further, put SE in the context of project management. As discussed in NPR 7120.5, NASA Space Flight Program and Project Management Requirements, project management is the function of planning, overseeing, and directing the numerous activities required to achieve the requirements, goals, and objectives of the customer and other stakeholders within specified cost, quality, and schedule constraints. Project management can be thought of as having two major areas of emphasis, both of equal weight and importance. These areas are systems engineering and project control. Figure 2.0-1 is a notional graphic depicting this concept. Note that there are areas where the two cornerstones of project management overlap. In these areas, SE provides the technical aspects or inputs, whereas project control provides the programmatic, cost, and schedule inputs.

Figure 2.0-1 SE in context of overall project management (two overlapping cornerstones: systems engineering and project control)

This document will focus on the SE side of the diagram. These practices/processes are taken from NPR 7123.1, NASA Systems Engineering Processes and Requirements. Each will be described in much greater detail in subsequent chapters of this document, but an overview is given below.

2.1 The Common Technical Processes and the SE Engine

There are three sets of common technical processes in NPR 7123.1, NASA Systems Engineering Processes and Requirements: system design, product realization, and technical management. The processes in each set and their interactions and flows are illustrated by the NPR systems engineering “engine” shown in Figure 2.1-1. The processes of the SE engine are used to develop and realize the end products. This chapter provides the application context of the 17 common technical processes required in NPR 7123.1. The system design processes, the product realization processes, and the technical management processes are discussed in more detail in Chapters 4.0, 5.0, and 6.0, respectively. Steps 1 through 9 indicated in Figure 2.1-1 represent the tasks in execution of a project. Steps 10 through 17 are crosscutting tools for carrying out the processes.

• System Design Processes: The four system design processes shown in Figure 2.1-1 are used to define and baseline stakeholder expectations, generate and baseline technical requirements, and convert the technical requirements into a design solution that will satisfy the baselined stakeholder expectations. These processes are applied to each product of the system structure from the top of the structure to the bottom until the lowest products in any system structure branch are defined to the point where they can be built, bought, or reused. All other products in the system structure are realized by integration. Designers not only develop the design solutions to the products intended to perform the operational functions of the system, but also
establish requirements for the products and services that enable each operational/mission product in the system structure.

• Product Realization Processes: The product realization processes are applied to each operational/mission product in the system structure starting from the lowest level product and working up to higher level integrated products. These processes are used to create the design solution for each product (e.g., by the Product Implementation or Product Integration Process) and to verify, validate, and transition up to the next hierarchical level products that satisfy their design solutions and meet stakeholder expectations as a function of the applicable life-cycle phase.

• Technical Management Processes: The technical management processes are used to establish and evolve technical plans for the project, to manage communication across interfaces, to assess progress against the plans and requirements for the system products or services, to control technical execution of the project through to completion, and to aid in the decisionmaking process.

Figure 2.1-1 The systems engineering engine (system design processes: 1. Stakeholder Expectations Definition, 2. Technical Requirements Definition, 3. Logical Decomposition, 4. Design Solution Definition; product realization processes: 5. Product Implementation, 6. Product Integration, 7. Product Verification, 8. Product Validation, 9. Product Transition; technical management processes: 10. Technical Planning, 11. Requirements Management, 12. Interface Management, 13. Technical Risk Management, 14. Configuration Management, 15. Technical Data Management, 16. Technical Assessment, 17. Decision Analysis. Requirements flow down from the level above and realized products flow up to the level above; the system design processes are applied down and across the system structure, and the product realization processes are applied up and across the system structure.)

The processes within the SE engine are used both iteratively and recursively. As defined in NPR 7123.1, “iterative” is the “application of a process to the same product or set of products to correct a discovered discrepancy or other variation from requirements,” whereas “recursive” is defined as adding value to the system “by the repeated application of processes to design next lower layer system products or to realize next upper layer end products within the system structure. This also applies to repeating application of the same processes to the system structure in the next life-cycle phase to mature the system definition and satisfy phase success criteria.” The example used in Section 2.3 will further explain these concepts. The technical processes are applied recursively and iteratively to break down the initializing concepts of the system to a level of detail concrete enough that the technical team can implement a product from the information. Then the processes are applied recursively and
iteratively to integrate the smallest product into greater and larger systems until the whole of the system has been assembled, verified, validated, and transitioned.

2.2 An Overview of the SE Engine by Project Phase

Figure 2.2-1 conceptually illustrates how the SE engine is used during each of the seven phases of a project. Figure 2.2-1 is a conceptual diagram. For all of the details, refer to the poster version of this figure, which accompanies this handbook.

The uppermost horizontal portion of this chart is used as a reference to project system maturity, as the project progresses from a feasible concept to an as-deployed system; phase activities; Key Decision Points (KDPs); and major project reviews.

The next major horizontal band shows the technical development processes (steps 1 through 9) in each project phase. The systems engineering engine cycles five times from Pre-Phase A through Phase D. Please note that NASA’s management has structured Phases C and D to “split” the technical development processes in half to ensure closer management control. The engine is bound by a dashed line in Phases C and D.

Once a project enters into its operational state (Phase E) and closes with a closeout phase (Phase F), the technical work shifts to activities commensurate with these last two project phases.

The next major horizontal band shows the eight technical management processes (steps 10 through 17) in each project phase. The SE engine cycles the technical management processes seven times from Pre-Phase A through Phase F.

Each of the engine entries is given a 6105 paragraph label that is keyed to Chapters 4.0, 5.0, and 6.0 in this handbook. For example, in the technical development processes, “Get Stakeholder Expectations” discussions and details are in Section 4.1.

Figure 2.2-1 A miniaturized conceptualization of the poster-size NASA project life-cycle process flow for flight and ground systems accompanying this handbook (life-cycle phases Pre-Phase A through Phase F, with Key Decision Points, major reviews, and maturing baselines from feasible concept to as-deployed baseline)
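The structure that Figures 2.1-1 and 2.2-1 convey can be captured in a few lines of pseudo-data. The sketch below is purely illustrative and is not part of NPR 7123.1 or of this handbook’s required products: the process names and numbers are those of Figure 2.1-1, the phase list and the five-cycle/seven-cycle coverage follow the description above, and the names SE_ENGINE, PHASES, and processes_cycled are invented for this sketch. Python is used only as a convenient notation.

```python
# Illustrative sketch only -- not part of NPR 7123.1 or this handbook.
# Process names/numbers follow Figure 2.1-1; phase coverage follows Section 2.2.
# (The Phase C/D "split" of the development processes noted above is ignored here.)

SE_ENGINE = {
    "System Design Processes": {
        1: "Stakeholder Expectations Definition",
        2: "Technical Requirements Definition",
        3: "Logical Decomposition",
        4: "Design Solution Definition",
    },
    "Product Realization Processes": {
        5: "Product Implementation",
        6: "Product Integration",
        7: "Product Verification",
        8: "Product Validation",
        9: "Product Transition",
    },
    "Technical Management Processes": {
        10: "Technical Planning",
        11: "Requirements Management",
        12: "Interface Management",
        13: "Technical Risk Management",
        14: "Configuration Management",
        15: "Technical Data Management",
        16: "Technical Assessment",
        17: "Decision Analysis",
    },
}

PHASES = ["Pre-Phase A", "Phase A", "Phase B", "Phase C", "Phase D", "Phase E", "Phase F"]

def processes_cycled(phase: str) -> list[str]:
    """Return the SE engine processes exercised in a given life-cycle phase.

    Steps 1-9 (technical development) cycle from Pre-Phase A through Phase D;
    steps 10-17 (technical management) cycle in every phase, Pre-Phase A through F.
    """
    development_phases = PHASES[:PHASES.index("Phase D") + 1]
    names = []
    for group, steps in SE_ENGINE.items():
        if group == "Technical Management Processes" or phase in development_phases:
            names.extend(f"{n}. {title}" for n, title in steps.items())
    return names

if __name__ == "__main__":
    for phase in PHASES:
        print(f"{phase}: {len(processes_cycled(phase))} processes in use")
```

Running the sketch prints 17 processes in use for Pre-Phase A through Phase D and 8 (the technical management processes) for Phases E and F, mirroring the bands of Figure 2.2-1.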
2.3 Example of Using the SE Engine

To help in understanding how the SE engine is applied, an example will be posed and walked through the processes. Pertinent to this discussion are the phases of the program and project life cycles, which will be discussed in greater depth in Chapter 3.0 of this document. As described in Chapter 3.0, NPR 7120.5 defines the life cycle used for NASA programs and projects. The life-cycle phases are described in Table 2.3-1.

Use of the different phases of a life cycle allows the various products of a project to be gradually developed and matured from initial concepts through the fielding of the product and to its final retirement. The SE engine shown in Figure 2.1-1 is used throughout all phases.

In Pre-Phase A, the SE engine is used to develop the initial concepts; develop a preliminary/draft set of key high-level requirements; realize these concepts through modeling, mockups, simulation, or other means; and verify and validate that these concepts and products would be able to meet the key high-level requirements. Note that this is not the formal verification and validation program that will be performed on the final product but is a methodical runthrough ensuring that the concepts that are being developed in this Pre-Phase A would be able to meet the likely requirements and expectations of the stakeholders. Concepts would be developed to the lowest level necessary to ensure that the concepts are feasible and to a level that will reduce the risk low enough to satisfy the project. Academically, this process could proceed down to the circuit board level for every system.

Table 2.3-1 Project Life-Cycle Phases

Formulation

Pre-Phase A: Concept Studies
Purpose: To produce a broad spectrum of ideas and alternatives for missions from which new programs/projects can be selected. Determine feasibility of desired system, develop mission concepts, draft system-level requirements, identify potential technology needs.
Typical Output: Feasible system concepts in the form of simulations, analysis, study reports, models, and mockups.

Phase A: Concept and Technology Development
Purpose: To determine the feasibility and desirability of a suggested new major system and establish an initial baseline compatibility with NASA’s strategic plans. Develop final mission concept, system-level requirements, and needed system structure technology developments.
Typical Output: System concept definition in the form of simulations, analysis, engineering models, and mockups, and trade study definition.

Phase B: Preliminary Design and Technology Completion
Purpose: To define the project in enough detail to establish an initial baseline capable of meeting mission needs. Develop system structure end product (and enabling product) requirements and generate a preliminary design for each system structure end product.
Typical Output: End products in the form of mockups, trade study results, specification and interface documents, and prototypes.

Implementation

Phase C: Final Design and Fabrication
Purpose: To complete the detailed design of the system (and its associated subsystems, including its operations systems), fabricate hardware, and code software. Generate final designs for each system structure end product.
Typical Output: End product detailed designs, end product component fabrication, and software development.

Phase D: System Assembly, Integration and Test, Launch
Purpose: To assemble and integrate the products to create the system, meanwhile developing confidence that it will be able to meet the system requirements. Launch and prepare for operations. Perform system end product implementation, assembly, integration and test, and transition to use.
Typical Output: Operations-ready system end product with supporting related enabling products.

Phase E: Operations and Sustainment
Purpose: To conduct the mission and meet the initially identified need and maintain support for that need. Implement the mission operations plan.
Typical Output: Desired system.

Phase F: Closeout
Purpose: To implement the systems decommissioning/disposal plan developed in Phase E and perform analyses of the returned data and any returned samples.
Typical Output: Product closeout.
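For readers who prefer a compact, machine-readable view, the same life-cycle information can be restated as a small lookup structure. The sketch below is only an illustrative restatement of Table 2.3-1 (with the Typical Output column abbreviated) and is not an official NASA data model; the LifeCyclePhase class and phases_in helper are names introduced solely for this sketch.

```python
# Illustrative restatement of Table 2.3-1 -- not an official NASA data model.
from dataclasses import dataclass

@dataclass(frozen=True)
class LifeCyclePhase:
    phase: str           # e.g., "Phase B"
    title: str           # phase name from Table 2.3-1
    grouping: str        # "Formulation" or "Implementation"
    typical_output: str  # abbreviated from the "Typical Output" column

LIFE_CYCLE = [
    LifeCyclePhase("Pre-Phase A", "Concept Studies", "Formulation",
                   "Feasible system concepts (simulations, analysis, study reports, models, mockups)"),
    LifeCyclePhase("Phase A", "Concept and Technology Development", "Formulation",
                   "System concept definition and trade study definition"),
    LifeCyclePhase("Phase B", "Preliminary Design and Technology Completion", "Formulation",
                   "Mockups, trade study results, specification and interface documents, prototypes"),
    LifeCyclePhase("Phase C", "Final Design and Fabrication", "Implementation",
                   "Detailed designs, component fabrication, software development"),
    LifeCyclePhase("Phase D", "System Assembly, Integration and Test, Launch", "Implementation",
                   "Operations-ready system end product with enabling products"),
    LifeCyclePhase("Phase E", "Operations and Sustainment", "Implementation",
                   "Desired system"),
    LifeCyclePhase("Phase F", "Closeout", "Implementation",
                   "Product closeout"),
]

def phases_in(grouping: str) -> list[str]:
    """List the phases that fall under Formulation or Implementation."""
    return [p.phase for p in LIFE_CYCLE if p.grouping == grouping]

if __name__ == "__main__":
    print("Formulation:", ", ".join(phases_in("Formulation")))
    print("Implementation:", ", ".join(phases_in("Implementation")))
```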
However, that would involve a great deal of time and money. There may be a higher level or tier of product than circuit board level that would enable designers to accurately determine the feasibility of accomplishing the project (purpose of Pre-Phase A).

During Phase A, the recursive use of the SE engine is continued, this time taking the concepts and draft key requirements that were developed and validated during Pre-Phase A and fleshing them out to become the set of baseline system requirements and Concept of Operations (ConOps). During this phase, key areas of high risk might be simulated or prototyped to ensure that the concepts and requirements being developed are good ones and to identify verification and validation tools and techniques that will be needed in later phases.

During Phase B, the SE engine is applied recursively to further mature requirements for all products in the developing product tree, develop ConOps preliminary designs, and perform feasibility analysis of the verification and validation concepts to ensure the designs will likely be able to meet their requirements.

Phase C again uses the left side of the SE engine to finalize all requirement updates, finalize ConOps, develop the final designs to the lowest level of the product tree, and begin fabrication. Phase D uses the right side of the SE engine to recursively perform the final implementation, integration, verification, and validation of the end product, and at the final pass, transition the end product to the user. The technical management processes of the SE engine are used in Phases E and F to monitor performance; control configuration; and make decisions associated with the operations, sustaining engineering, and closeout of the system. Any new capabilities or upgrades of the existing system would reenter the SE engine as new developments.

2.3.1 Detailed Example

Since it is already well known, the NASA Space Transportation System (STS) will be used as an example to look at how the SE engine would be used in Phase A. This example will be simplified to illustrate the application of the SE processes in the engine, but will in no way be as detailed as necessary to actually build the highly complex vehicle. The SE engine is used recursively to drive out more and more detail with each pass. The icon shown in Figure 2.3-1 will be used to keep track of the applicable place in the SE engine. The numbers in the icon correspond to the numbered processes within the SE engine as shown in Figure 2.1-1. The various layers of the product hierarchy will be called “tiers.” Tiers are also called “layers” or “levels.” But basically, the higher the number of the tier or level, the lower in the product hierarchy the product is going and the more detailed the product is becoming (e.g., going from boxes, to circuit boards, to components).

Figure 2.3-1 SE engine tracking icon

2.3.2 Example Premise

NASA decides that there is a need for a transportation system that will act like a “truck” to carry large pieces of equipment and crew into Low Earth Orbit (LEO). Referring back to the project life cycle, the project first enters Pre-Phase A. During this phase, several concept studies are performed, and it is determined that it is feasible to develop such a “space truck.” This is determined through combinations of simulations, mockups, analyses, or other like means. For simplicity, assume feasibility will be proven through concept models. The processes and framework of the SE engine will be used to design and implement these models. The project would then enter the Phase A activities to take the Pre-Phase A concepts and refine them and define the system requirements for the end product. The detailed example will begin in Phase A and show how the SE engine is used. As described in the overview, a similar process is used for the other project phases.

2.3.2.1 Example Phase A System Design Passes

First Pass

Taking the preliminary concepts and draft key system requirements developed during the Pre-Phase A activities, the SE engine is entered at the first process and used to determine who the product (i.e., the STS) stakeholders are and what they want. During Pre-Phase A these needs and expectations were pretty general ideas, probably just saying the Agency needs a “space truck” that will carry X tons of payload into LEO, accommodate a payload of so-and-so size,
carry a crew of seven, etc. During this Phase A pass, these general concepts are detailed out and agreed to. The ConOps (sometimes referred to as operational concept) generated in Pre-Phase A is also detailed out and agreed to in order to ensure all stakeholders are in agreement as to what is really expected of the product, in this case the transportation system. The detailed expectations are then converted into good requirement statements. (For more information on what constitutes a good requirement, see Appendix C.) Subsequent passes and subsequent phases will refine these requirements into specifications that can actually be built. Also note that all of the technical management processes (SE engine processes numbered 10 through 17) are also used during this and all subsequent passes and activities. These ensure that all the proper planning, control, assessment, and decisions are used and maintained. Although for simplification they will not be mentioned in the rest of this example, they will always be in effect.

Next, using the requirements and the ConOps previously developed, logical decomposition models/diagrams are built up to help bring the requirements into perspective and to show their relationship. Finally, these diagrams, requirements, and ConOps documents are used to develop one or more feasible design solutions. Note that at this point, since this is only the first pass through the SE engine, these design solutions are not detailed enough to actually build anything. Consequently, the design solutions might be summarized as, “To accomplish this transportation system, the best option in our trade studies is a three-part system: a reusable orbiter for the crew and cargo, a large external tank to hold the propellants, and two solid rocket boosters to give extra power for liftoff that can be recovered, refurbished, and reused.” (Of course, the actual design solution would be much more descriptive and detailed.) So, for this first pass, the first tier of the product hierarchy might look like Figure 2.3-2. There would also be other enabling products that might appear in the product tree, but for simplicity only the main products are shown in this example.

Figure 2.3-2 Product hierarchy, tier 1: first pass through the SE engine (Tier 0: Space Transportation System; Tier 1: External Tank, Orbiter, Solid Rocket Booster)

Now, obviously the design solution is not yet at a detailed enough level to actually build the prototypes or models of any of these products. The requirements, ConOps, functional diagrams, and design solutions are still at a pretty high, general level. Note that the SE processes on the right side (i.e., the product realization processes) of the SE engine have yet to be addressed. The design must first be at a level that something can actually be built, coded, or reused before that side of the SE engine can be used. So, a second pass of the left side of the SE engine will be started.

Second Pass

The SE engine is completely recursive. That is, each of the three elements shown in the tier 1 diagram can now be considered a product of its own, and the SE engine is therefore applied to each of the three elements separately. For example, the external tank is considered an end product and the SE engine resets back to the first processes. So now, just focusing on the external tank, who are the stakeholders and what they expect of the external tank is determined. Of course, one of the main stakeholders will be the owners of the tier 1 requirements and the STS as an end product, but there will also be other new stakeholders. A new ConOps for how the external tank would operate is generated. The tier 1 requirements that are applicable (allocated) to the external tank would be “flowed down” and validated. Usually, some of these will be too general to implement into a design, so the requirements will have to be detailed out. To these derived requirements, there will also be added new requirements that are generated from the stakeholder expectations, and other applicable standards for workmanship, safety, quality, etc.

Next, the external tank requirements and the external tank ConOps are established, and functional diagrams are developed as was done in the first pass with the STS product. Finally, these diagrams, requirements, and ConOps documents are used to develop some feasible design solutions for the external tank. At this pass, there
The design solution might be summarized as, "To build this external tank, since our trade studies showed the best option was to use cryogenic propellants, a tank for the liquid hydrogen will be needed as will another tank for the liquid oxygen, instrumentation, and an outer structure of aluminum coated with foam." Thus, the tier 2 product tree for the external tank might look like Figure 2.3-3.

Figure 2.3‑3 Product hierarchy, tier 2: external tank (Tier 2 under the External Tank: Hydrogen Tank, Oxygen Tank, External Structure, Instrumentation)

In a similar manner, the orbiter would also take another pass through the SE engine starting with identifying the stakeholders and their expectations, and generating a ConOps for the orbiter element. The tier 1 requirements that are applicable (allocated) to the orbiter would be "flowed down" and validated; new requirements derived from them and any additional requirements (including interfaces with the other elements) would be added.

Next, the orbiter requirements and the ConOps are taken, functional diagrams are developed, and one or more feasible design solutions for the orbiter are generated. As with the external tank, at this pass, there will not be enough detail to actually build or do a complex model of the orbiter. The orbiter design solution might be summarized as, "To build this orbiter will require a winged vehicle with a thermal protection system; an avionics system; a guidance, navigation, and control system; a propulsion system; an environmental control system; etc." So the tier 2 product tree for the orbiter element might look like Figure 2.3-4.

Figure 2.3‑4 Product hierarchy, tier 2: orbiter (Tier 2 under the Orbiter: External Structure, Thermal Protection System, Avionics System, Environmental Control System, etc.)

Likewise, the solid rocket booster would also be considered an end product, and a pass through the SE engine would generate a tier 2 design concept, just as was done with the external tank and the orbiter.

Third Pass
Each of the tier 2 elements is also considered an end product, and each undergoes another pass through the SE engine, defining stakeholders, generating ConOps, flowing down allocated requirements, generating new and derived requirements, and developing functional diagrams and design solution concepts. As an example of just the avionics system element, the tier 3 product hierarchy tree might look like Figure 2.3-5.

Figure 2.3‑5 Product hierarchy, tier 3: avionics system (Tier 3 under the Avionics System: Communication System, Instrumentation System, Command & Data Handling System, Displays & Controls, etc.)

Passes 4 Through n
For this Phase A set of passes, this recursive process is continued for each product (model) on each tier down to the lowest level in the product tree. Note that in some projects it may not be feasible, given an estimated project cost and schedule, to perform this recursive process completely down to the smallest component during Phase A. In these cases, engineering judgment must be used to determine what level of the product tier is feasible. Note that the lowest feasible level may occur at different tiers depending on the product-line complexity. For example, for one product line it may occur at tier 2; whereas, for a more complex product, it could occur at tier 8. This also means that it will take different amounts of time to reach the bottom. Thus, for any given program or project, products will be at various stages of development. For this Phase A example, Figure 2.3-6 depicts the STS product hierarchy after completely passing through the system design processes side of the SE engine. At the end of this set of passes, system requirements, ConOps, and high-level conceptual functional and physical architectures for each product in the tree would exist. Note that these would not yet be the detailed or even preliminary designs for the end products. These will come later in the life cycle. At this point, enough conceptual design work has been done to ensure that at least the high-risk requirements are achievable, as will be shown in the following passes.

Figure 2.3‑6 Product hierarchy: complete pass through system design processes side of the SE engine (Note: The unshaded boxes represent bottom-level phase products.)
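Viewed as an algorithm, these Phase A passes amount to a depth-first walk of the product hierarchy: the same system design processes are applied to a product and then, recursively, to each of its children until the lowest feasible tier is reached. The short Python sketch below is only an illustration of that recursion under simplified assumptions; the Product class, the process wording, and the abbreviated STS tree are invented for the sketch and are not products or terminology required by NPR 7123.1.

    # Illustrative sketch only: recursive application of the system design
    # processes down a simplified product hierarchy tree.

    from dataclasses import dataclass, field
    from typing import List


    @dataclass
    class Product:
        name: str
        children: List["Product"] = field(default_factory=list)


    def system_design_pass(product: Product, tier: int = 0) -> None:
        """One Phase A pass of the left side of the SE engine for one product."""
        indent = "  " * tier
        print(f"{indent}Tier {tier}: {product.name}")
        print(f"{indent}  define stakeholder expectations and ConOps")
        print(f"{indent}  flow down and derive technical requirements")
        print(f"{indent}  build logical decomposition models")
        print(f"{indent}  develop candidate design solutions")
        # The engine is recursive: each child is treated as an end product
        # of its own and receives the same processes on the next pass.
        for child in product.children:
            system_design_pass(child, tier + 1)


    # Abbreviated STS product tree, loosely following Figures 2.3-2 through 2.3-5.
    sts = Product("Space Transportation System", [
        Product("External Tank", [
            Product("Hydrogen Tank"), Product("Oxygen Tank"),
            Product("External Structure"), Product("Instrumentation"),
        ]),
        Product("Orbiter", [
            Product("Thermal Protection System"), Product("Avionics System"),
        ]),
        Product("Solid Rocket Booster"),
    ])

    system_design_pass(sts)

Running the sketch simply prints the order in which products would be visited; the point is that each pass drives one more tier of detail, exactly as the narrative above describes.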
2.3.2.2 Example Product Realization Passes
So now that the requirements and conceptual designs for the principal Phase A products have been developed, they need to be checked to ensure they are achievable. Note that there are two types of products. The first product is the "end product"—the one that will actually be delivered to the final user. The second type of product will be called a "phase product." A phase product is generated within a particular life-cycle phase that helps move the project toward delivering a final product. For example, while in Pre-Phase A, a foam-core mockup might be built to help visualize some of the concepts. Those mockups would not be the final "end product," but would be the "phase product." For this Phase A example, assume some computer models will be created and simulations performed of these key concepts to show that they are achievable. These will be the phase products for our example.

Now the focus shifts to the right side (i.e., product realization processes) of the SE engine, which will be applied recursively, starting at the bottom of the product hierarchy and moving upwards.

First Pass
Each of the phase products (i.e., our computer models) for the bottom-level product tier (the ones that are unshaded in Figure 2.3-6) is taken individually and realized—that is, it is either bought, built, coded, or reused. For our example, assume the external tank product model Aa is a standard Commercial-Off-the-Shelf (COTS) product that is bought. Aba is a model that can be reused from another project, and product Abb is a model that will have to be developed with an in-house design that is to be built. Note that these models are parts of a larger model product that will be assembled or integrated on a subsequent run through the SE engine. That is, to realize the model for product Ab of the external tank, models for products Aba and Abb must be first implemented and then later integrated together. This pass of the SE engine will be the realizing part. Likewise, each of the unshaded bottom-level model products is realized in this first pass. The models will help us understand and plan the method to implement the final end product and will ensure the feasibility of the implemented method.

Next, each of the realized models (phase products) is used to verify that the end product would likely meet the requirements as defined in the Technical Requirements Definition Process during the system design pass for this product. This shows the product would likely meet the "shall" statements that were allocated, derived, or generated for it by method of test, analysis, inspection, or demonstration—that it was "built right." Verification is performed for each of the unshaded bottom-level model products. Note that during this Phase A pass, this process is not the formal verification of the final end product. However, using analysis, simulation, models, or other means shows that the requirements are good (verifiable) and that the concepts will most likely satisfy them. This also allows draft verification procedures of key areas to be developed. What can be formally verified, however, is that the phase product (the model) meets the requirements for the model.

After the phase products (models) have been verified and used for planning the end product verification, the models are then used for validation. That is, additional tests, analyses, inspections, or demonstrations are conducted to ensure that the proposed conceptual designs will likely meet the expectations of the stakeholders for this phase product and for the end product. This will track back to the ConOps that was mutually developed with the stakeholders during the Stakeholder Expectations Definition Process of the system design pass for this product. This will help ensure that the project has "built the right" product at this level.

After verification and validation of the phase products (models) and using them for planning the verification and validation of the end product, it is time to prepare the models for transition to the next level up.
Depending on complexity, where the model will be transitioned, security requirements, etc., transition may involve crating and shipment, transmitting over a network, or hand carrying over to the next lab. Whatever is appropriate, each model for the bottom-level product is prepared and handed to the next level up for further integration.

Second Pass
Now that all the models (phase products) for the bottom-level end products are realized, verified, validated, and transitioned, it is time to start integrating them into the next higher level product. For example, for the external tank, realized tier 4 models for products Aba and Abb are integrated to form the model for the tier 3 product Ab. Note that the Product Implementation Process only occurs at the bottommost product. All subsequent passes of the SE engine will employ the Product Integration Process since already realized products will be integrated to form the new higher level products. Integrating the lower tier phase products will result in the next-higher-tier phase product. This integration process can also be used for planning the integration of the final end products.

After the new integrated phase product (model) has been formed (tier 3 product Ab, for example), it must now be proven that it meets its requirements. These will be the allocated, derived, or generated requirements developed during the Technical Requirements Definition Process during the system design pass for the model for this integrated product. This ensures that the integrated product was built (assembled) right. Note that just verifying the component parts (i.e., the individual models) that were used in the integration is not sufficient to assume that the integrated product will work right. There are many sources of problems that could occur—incomplete requirements at the interfaces, wrong assumptions during design, etc. The only sure way of knowing if an integrated product is good is to perform verification and validation at each stage. The knowledge gained from verifying this integrated phase product can also be used for planning the verification of the final end products.

Likewise, after the integrated phase product is verified, it needs to be validated to show that it meets the expectations as documented in the ConOps for the model of the product at this level. Even though the component parts making up the integrated product will have been validated at this point, the only way to know that the project has built the "right" integrated product is to perform validation on the integrated product itself. Again, this information will help in the planning for the validation of the end products.

The model for the integrated phase product at this level (tier 3 product Ab, for example) is now ready to be transitioned to the next higher level (tier 2 for the example). As with the products in the first pass, the integrated phase product is prepared according to its needs/requirements and shipped or handed over. In the example, the model for the external tank tier 3 integrated product Ab is transitioned to the owners of the model for the tier 2 product A. This effort with the phase products will be useful in planning for the transition of the end products.

Passes 3 Through n
In a similar manner as the second pass, the tier 3 models for the products are integrated together, realized, verified, validated, and transitioned to the next higher tier. For the example, the realized model for external tank tier 3 integrated phase product Ab is integrated with the model for tier 3 realized phase product Aa to form the tier 2 phase product A. Note that tier 3 product Aa is a bottom-tier product that has yet to go through the integration process. It may also have been realized some time ago and has been waiting for the Ab product line to become realized. Part of its transition might have been to place it in secure storage until the Ab product line became available. Or it could be that Aa was the long-lead item and product Ab had been completed some time ago and was waiting for the Aa purchase to arrive before they could be integrated together. The length of the branch of the product tree does not necessarily translate to a corresponding length of time. This is why good planning in the first part of a project is so critical.
Final Pass
At some point, all the models for the tier 1 phase products will each have been used to ensure the system requirements and concepts developed during this Phase A cycle can be implemented, integrated, verified, validated, and transitioned. The elements are now defined as the external tank, the orbiter, and the solid rocket boosters. One final pass through the SE engine will show that they will likely be successfully implemented, integrated, verified, and validated. The final versions of these products—in the form of the baselined system requirements, ConOps, conceptual functional and physical designs—are made to provide inputs into the next life-cycle phase (B), where they will be further matured. In later phases, the products will actually be built into physical form. At this stage of the project, the key characteristics of each product are passed downstream in key SE documentation, as noted.

2.3.2.3 Example Use of the SE Engine in Phases B Through D
Phase B begins the preliminary design of the final end product. The recursive passes through the SE engine are repeated in a similar manner to that discussed in the detailed Phase A example. At this phase, the phase product might be a prototype of the product(s). Prototypes could be developed and then put through the planned verification and validation processes to ensure the design will likely meet all the requirements and expectations prior to the build of the final flight units. Any mistakes found on prototypes are much easier and less costly to correct than if not found until the flight units are built and undergoing the certification process.

Whereas the previous phases dealt with the final product in the form of analysis, concepts, or prototypes, Phases C and D work with the final end product itself. During Phase C, we recursively use the left side of the SE engine to develop the final design. In Phase D, we recursively use the right side of the SE engine to realize the final product and conduct the formal verification and validation of the final product. As we come out of the last pass of the SE engine in Phase D, we have the final fully realized end product, the STS, ready to be delivered for launch.

2.3.2.4 Example Use of the SE Engine in Phases E and F
Even in Phase E (Operations and Sustainment) and Phase F (Closeout) of the life cycle, the technical management processes in the SE engine are still being used. During the operations phase of a project, a number of activities are still going on. In addition to the day-to-day use of the product, there is a need to monitor or manage various aspects of the system. This is where the key Technical Performance Measures (TPMs) that were defined in the early stages of development continue to play a part. (TPMs are described in Subsection 6.7.2.) These are key measures to monitor to ensure the product continues to perform as designed and expected. Configurations are still under control, still executing the Configuration Management Process. Decisions are still being made using the Decision Analysis Process. Indeed, all of the technical management processes still apply. For this discussion, the term "systems management" will be used for this aspect of operations. In addition to systems management and systems operation, there may also be a need for periodic refurbishment, repairing broken parts, cleaning, sparing, logistics, or other activities. Although other terms are used, for the purposes of this discussion the term "sustaining engineering" will be used for these activities. Again, all of the technical management processes still apply to these activities.
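As one concrete illustration of the systems management activity, operators might routinely compare each key TPM against the limits carried over from development. The measure names, limits, and telemetry values in the sketch below are invented purely for illustration; real TPM sets and thresholds are project specific (see Subsection 6.7.2).

    # Illustrative only: flag Technical Performance Measures (TPMs) that have
    # drifted outside the limits established during development.

    tpm_limits = {
        # measure name: (lower limit, upper limit) -- hypothetical values
        "battery_state_of_charge_pct": (60.0, 100.0),
        "downlink_data_rate_mbps": (1.5, None),
        "thruster_propellant_margin_kg": (25.0, None),
    }

    latest_telemetry = {
        "battery_state_of_charge_pct": 72.4,
        "downlink_data_rate_mbps": 1.2,
        "thruster_propellant_margin_kg": 31.0,
    }


    def out_of_limits(value, limits):
        lower, upper = limits
        return (lower is not None and value < lower) or (upper is not None and value > upper)


    for measure, limits in tpm_limits.items():
        value = latest_telemetry[measure]
        if out_of_limits(value, limits):
            print(f"TPM alert: {measure} = {value} outside limits {limits}")
        else:
            print(f"TPM ok:    {measure} = {value}")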
Figure 2.3-7 represents these three activities occurring simultaneously and continuously throughout the operational lifetime of the final product. Some portions of the SE processes need to continue even after the system becomes nonoperational to handle retirement, decommissioning, and disposal. This is consistent with the basic SE principle of handling the full system life cycle from "cradle to grave."

Figure 2.3‑7 Model of typical activities during operational phase (Phase E) of a product (Operation, Systems Management, and Sustaining Engineering occurring together)

However, if at any point in this phase a new product, a change that affects the design or certification of a product, or an upgrade to an existing product is needed, the development processes of the SE engine are reentered at the top. That is, the first thing that is done for an upgrade is to determine who the stakeholders are and what they expect. The entire SE engine is used just as for a newly developed product. This might be pictorially portrayed as in Figure 2.3-8. Note that in the figure, although the SE engine is shown only once, it is used recursively down through the product hierarchy for upgraded products, just as described in our detailed example for the initial product.

Figure 2.3‑8 New products or upgrades reentering the SE engine (upgrades and changes reenter at the Stakeholder Expectations Definition process)

2.4 Distinctions Between Product Verification and Product Validation
From a process perspective, the Product Verification and Product Validation Processes may be similar in nature, but the objectives are fundamentally different. Verification of a product shows proof of compliance with requirements—that the product can meet each "shall" statement as proven through performance of a test, analysis, inspection, or demonstration. Validation of a product shows that the product accomplishes the intended purpose in the intended environment—that it meets the expectations of the customer and other stakeholders as shown through performance of a test, analysis, inspection, or demonstration.

Verification testing relates back to the approved requirements set and can be performed at different stages in the product life cycle. The approved specifications, drawings, parts lists, and other configuration documentation establish the configuration baseline of that product, which may have to be modified at a later time. Without a verified baseline and appropriate configuration controls, later modifications could be costly or cause major performance problems.

Validation relates back to the ConOps document. Validation testing is conducted under realistic conditions (or simulated conditions) on end products for the purpose of determining the effectiveness and suitability of the product for use in mission operations by typical users. The selection of the verification or validation method is based on engineering judgment as to which is the most effective way to reliably show the product's conformance to requirements or that it will operate as intended and described in the ConOps.
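The distinction can also be seen in the evidence each process traces back to. A hypothetical pair of records is sketched below: the verification item points at a specific "shall" statement from the approved requirements baseline and names one of the four methods, while the validation item points at a ConOps scenario and the stakeholder expectation behind it. The identifiers and wording are made up for illustration and are not drawn from any actual STS documentation.

    # Hypothetical entries only, to contrast the two kinds of evidence.

    verification_item = {
        "requirement_id": "ET-042",
        "shall_statement": "The external tank shall withstand a proof pressure of X psi.",
        "method": "test",        # test, analysis, inspection, or demonstration
        "traces_to": "approved requirements baseline",
    }

    validation_item = {
        "conops_scenario": "Ascent with a full propellant load on a nominal trajectory",
        "expectation": "Tank delivers propellant to the orbiter main engines throughout ascent",
        "method": "analysis",    # same four methods, but a different question is being answered
        "traces_to": "ConOps agreed to with the stakeholders",
    }

    print("Verification asks: was the product built right?  ->", verification_item["traces_to"])
    print("Validation asks:   was the right product built?  ->", validation_item["traces_to"])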
2.5 Cost Aspect of Systems Engineering
The objective of systems engineering is to see that the system is designed, built, and operated so that it accomplishes its purpose safely in the most cost-effective way possible considering performance, cost, schedule, and risk.

A cost-effective and safe system must provide a particular kind of balance between effectiveness and cost: the system must provide the most effectiveness for the resources expended or, equivalently, it must be the least expensive for the effectiveness it provides. This condition is a weak one because there are usually many designs that meet the condition. Think of each possible design as a point in the tradeoff space between effectiveness and cost. A graph plotting the maximum achievable effectiveness of designs available with current technology as a function of cost would, in general, yield a curved line such as the one shown in Figure 2.5-1. (In the figure, all the dimensions of effectiveness are represented by the ordinate (y axis) and all the dimensions of cost by the abscissa (x axis).) In other words, the curved line represents the envelope of the currently available technology in terms of cost-effectiveness.

Points above the line cannot be achieved with currently available technology; that is, they do not represent feasible designs. (Some of those points may be feasible in the future when further technological advances have been made.) Points inside the envelope are feasible, but are said to be dominated by designs whose combined cost and effectiveness lie on the envelope line. Designs represented by points on the envelope line are called cost-effective (or efficient or nondominated) solutions.

System Cost, Effectiveness, and Cost‑Effectiveness
• Cost: The cost of a system is the value of the resources needed to design, build, operate, and dispose of it. Because resources come in many forms—work performed by NASA personnel and contractors; materials; energy; and the use of facilities and equipment such as wind tunnels, factories, offices, and computers—it is convenient to express these values in common terms by using monetary units (such as dollars of a specified year).
• Effectiveness: The effectiveness of a system is a quantitative measure of the degree to which the system's purpose is achieved. Effectiveness measures are usually very dependent upon system performance. For example, launch vehicle effectiveness depends on the probability of successfully injecting a payload onto a usable trajectory. The associated system performance attributes include the mass that can be put into a specified nominal orbit, the trade between injected mass and launch velocity, and launch availability.
• Cost‑Effectiveness: The cost-effectiveness of a system combines both the cost and the effectiveness of the system in the context of its objectives. While it may be necessary to measure either or both of those in terms of several numbers, it is sometimes possible to combine the components into a meaningful, single-valued objective function for use in design optimization. Even without knowing how to trade effectiveness for cost, designs that have lower cost and higher effectiveness are always preferred.

Design trade studies, an important part of the systems engineering process, often attempt to find designs that provide a better combination of the various dimensions of cost and effectiveness. When the starting point for a design trade study is inside the envelope, there are alternatives that either reduce costs without change to the overall effectiveness or alternatives that improve effectiveness without a cost increase (i.e., moving closer to the envelope curve). Then, the systems engineer's decision is easy. Other than in the sizing of subsystems, such "win-win" design trades are uncommon, but by no means rare. When the alternatives in a design trade study require trading cost for effectiveness, or even one dimension of effectiveness for another at the same cost (i.e., moving parallel to the envelope curve), the decisions become harder.

Figure 2.5‑1 The enveloping surface of nondominated designs (no feasible designs produce results above the envelope; all designs achievable with currently known technology produce results on or below it)
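When candidate designs can be reduced to (cost, effectiveness) pairs, the nondominated set that forms the envelope can be computed with a simple dominance test, as in the sketch below. The candidate names and numbers are fabricated purely to illustrate the test; real trade studies involve many more dimensions and far more careful effectiveness modeling.

    # Illustrative only: identify nondominated (cost-effective) designs from
    # a set of candidate (cost, effectiveness) points.

    candidates = {
        # design: (cost in $M, effectiveness score) -- hypothetical values
        "A": (400.0, 0.70),
        "B": (550.0, 0.82),
        "C": (600.0, 0.78),   # dominated by B: costs more, achieves less
        "D": (750.0, 0.90),
    }


    def dominates(p, q):
        """True if design p is at least as good as q on both axes and better on one."""
        cost_p, eff_p = p
        cost_q, eff_q = q
        return cost_p <= cost_q and eff_p >= eff_q and (cost_p < cost_q or eff_p > eff_q)


    nondominated = [
        name for name, point in candidates.items()
        if not any(dominates(other, point)
                   for other_name, other in candidates.items() if other_name != name)
    ]

    print("Cost-effective (nondominated) designs:", nondominated)  # ['A', 'B', 'D']

Design C is the kind of interior point the text describes: another candidate delivers more effectiveness for less cost, so C can be set aside without any value judgment about how to trade cost against effectiveness.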
The process of finding the most cost-effective design is further complicated by uncertainty, which is shown in Figure 2.5-2. Exactly what outcomes will be realized by a particular system design cannot be known in advance with certainty, so the projected cost and effectiveness of a design are better described by a probability distribution than by a point. This distribution can be thought of as a cloud that is thickest at the most likely value and thinnest farthest away from the most likely point, as is shown for design concept A in the figure. Distributions resulting from designs that have little uncertainty are dense and highly compact, as is shown for concept B. Distributions associated with risky designs may have significant probabilities of producing highly undesirable outcomes, as is suggested by the presence of an additional low-effectiveness/high-cost cloud for concept C. (Of course, the envelope of such clouds cannot be a sharp line such as is shown in the figure, but must itself be rather fuzzy. The line can now be thought of as representing the envelope at some fixed confidence level, that is, a specific, numerical probability of achieving that effectiveness.)

Figure 2.5‑2 Estimates of outcomes to be obtained from several design concepts, including uncertainty (Note: A, B, and C are design concepts with different risk patterns.)

The Systems Engineer's Dilemma
At each cost-effective solution:
• To reduce cost at constant risk, performance must be reduced.
• To reduce risk at constant cost, performance must be reduced.
• To reduce cost at constant performance, higher risks must be accepted.
• To reduce risk at constant performance, higher costs must be accepted.
In this context, time in the schedule is often a critical resource, so that schedule behaves like a kind of cost.

Both effectiveness and cost may require several descriptors. Even the Echo balloons (circa 1960), in addition to their primary mission as communications satellites, obtained scientific data on the electromagnetic environment and atmospheric drag. Furthermore, Echo was the first satellite visible to the naked eye, an unquantifiable—but not unrecognized at the beginning of the space race—aspect of its effectiveness. Sputnik (circa 1957), for example, drew much of its effectiveness from the fact that it was a "first." Costs, the expenditure of limited resources, may be measured in the several dimensions of funding, personnel, use of facilities, and so on. Schedule may appear as an attribute of effectiveness or cost, or as a constraint. A mission to Mars that misses its launch window has to wait about two years for another opportunity—a clear schedule constraint.

In some contexts, it is appropriate to seek the most effectiveness possible within a fixed budget and with a fixed risk; in other contexts, it is more appropriate to seek the least cost possible with specified effectiveness and risk. In these cases, there is the question of what level of effectiveness to specify or what level of costs to fix. In practice, these may be mandated in the form of performance or cost requirements. It then becomes appropriate to ask whether a slight relaxation of requirements could produce a significantly cheaper system or whether a few more resources could produce a significantly more effective system.

The technical team must choose among designs that differ in terms of numerous attributes. A variety of methods have been developed that can be used to help uncover preferences between attributes and to quantify subjective assessments of relative value. When this can be done, trades between attributes can be assessed quantitatively. Often, however, the attributes seem to be truly incommensurate: decisions need to be made in spite of this multiplicity.
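The uncertainty "clouds" of Figure 2.5-2 can be explored with elementary Monte Carlo sampling: treat each concept's cost and effectiveness as distributions rather than points and estimate, for example, how often a concept stays under a cost cap while meeting an effectiveness floor. The distributions, thresholds, and concept labels below are invented solely to illustrate the idea and do not represent any NASA estimating practice.

    # Illustrative only: Monte Carlo view of the cost/effectiveness "clouds".

    import random

    random.seed(1)

    concepts = {
        # concept: (mean cost, cost sigma, mean effectiveness, effectiveness sigma)
        "A": (500.0, 80.0, 0.75, 0.10),   # broad, uncertain cloud
        "B": (520.0, 20.0, 0.72, 0.03),   # compact, low-uncertainty cloud
    }

    COST_CAP = 560.0
    EFFECTIVENESS_FLOOR = 0.70
    N = 10_000

    for name, (mu_c, sd_c, mu_e, sd_e) in concepts.items():
        hits = 0
        for _ in range(N):
            cost = random.gauss(mu_c, sd_c)
            effectiveness = random.gauss(mu_e, sd_e)
            if cost <= COST_CAP and effectiveness >= EFFECTIVENESS_FLOOR:
                hits += 1
        print(f"Concept {name}: P(cost within cap and effectiveness above floor) ~ {hits / N:.2f}")

Reading the envelope "at some fixed confidence level," as the parenthetical note above suggests, is essentially this calculation repeated across candidate designs.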
3.0 NASA Program/Project Life Cycle

One of the fundamental concepts used within NASA for the management of major systems is the program/project life cycle, which consists of a categorization of everything that should be done to accomplish a program or project into distinct phases, separated by Key Decision Points (KDPs). KDPs are the events at which the decision authority determines the readiness of a program/project to progress to the next phase of the life cycle (or to the next KDP). Phase boundaries are defined so that they provide more or less natural points for Go or No-Go decisions. Decisions to proceed may be qualified by liens that must be removed within an agreed-to time period. A program or project that fails to pass a KDP may be allowed to "go back to the drawing board" to try again later—or it may be terminated.

All systems start with the recognition of a need or the discovery of an opportunity and proceed through various stages of development to a final disposition. While the most dramatic impacts of the analysis and optimization activities associated with systems engineering are obtained in the early stages, decisions that affect millions of dollars of value or cost continue to be amenable to the systems approach even as the end of the system lifetime approaches.

Decomposing the program/project life cycle into phases organizes the entire process into more manageable pieces. The program/project life cycle should provide managers with incremental visibility into the progress being made at points in time that fit with the management and budgetary environments.

NPR 7120.5, NASA Space Flight Program and Project Management Requirements, defines the major NASA life-cycle phases as Formulation and Implementation. For Flight Systems and Ground Support (FS&GS) projects, the NASA life-cycle phases of Formulation and Implementation divide into the following seven incremental pieces. The phases of the project life cycle are:
• Pre-Phase A: Concept Studies (i.e., identify feasible alternatives)
• Phase A: Concept and Technology Development (i.e., define the project and identify and initiate necessary technology)
• Phase B: Preliminary Design and Technology Completion (i.e., establish a preliminary design and develop necessary technology)
• Phase C: Final Design and Fabrication (i.e., complete the system design and build/code the components)
• Phase D: System Assembly, Integration and Test, Launch (i.e., integrate components, verify the system, prepare for operations, and launch)
• Phase E: Operations and Sustainment (i.e., operate and maintain the system)
• Phase F: Closeout (i.e., disposal of systems and analysis of data)

Figure 3.0-1 (NASA program life cycle) and Figure 3.0-2 (NASA project life cycle) identify the KDPs and reviews that characterize the phases. Sections 3.1 and 3.2 contain narrative descriptions of the purposes, major activities, products, and KDPs of the NASA program life-cycle phases. Sections 3.3 to 3.9 contain narrative descriptions of the purposes, major activities, products, and KDPs of the NASA project life-cycle phases. Section 3.10 describes the NASA budget cycle within which program/project managers and systems engineers must operate.

3.1 Program Formulation
The program Formulation phase establishes a cost-effective program that is demonstrably capable of meeting Agency and mission directorate goals and objectives. The program Formulation Authorization Document (FAD) authorizes a Program Manager (PM) to initiate the planning of a new program and to perform the analyses required to formulate a sound program plan. Major reviews leading to approval at KDP I are the P/SRR, P/SDR, PAR, and governing Program Management Council (PMC) review. (See the full list of reviews in the program and project life cycle figures, Figures 3.0-1 and 3.0-2.)
Figure 3.0‑1 NASA program life cycle (Formulation and Implementation phases separated by KDPs; uncoupled and loosely coupled programs conduct PSRs, PIRs, and KDPs approximately every two years, while single-project and tightly coupled programs follow the major program reviews from PDR through CERR and PSR)

Figure 3.0‑2 NASA project life cycle (Pre-Phase A through Phase F with KDPs A through F; human space flight and robotic mission review sequences run from MCR through DR, supported by peer reviews, subsystem reviews, and system reviews)

Acronyms used in the life-cycle figures:
CDR Critical Design Review
CERR Critical Events Readiness Review
DR Decommissioning Review
FRR Flight Readiness Review
KDP Key Decision Point
MCR Mission Concept Review
MDR Mission Definition Review
ORR Operational Readiness Review
PDR Preliminary Design Review
PFAR Post-Flight Assessment Review
PIR Program Implementation Review
PLAR Post-Launch Assessment Review
PRR Production Readiness Review
P/SDR Program/System Definition Review
P/SRR Program/System Requirements Review
PSR Program Status Review
SAR System Acceptance Review
SDR System Definition Review
SIR System Integration Review
SRR System Requirements Review
TRR Test Readiness Review
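As a reading aid for the figures above, the major reviews can be summarized as a simple lookup from project phase to the reviews listed in each phase's summary box later in this chapter (human space flight variant, omitting safety and other supporting reviews). This is only a convenience restatement, not a normative requirement.

    # Reading aid: major project reviews by life-cycle phase (human space flight).

    major_reviews_by_phase = {
        "Pre-Phase A": ["MCR"],
        "Phase A": ["SRR", "SDR"],
        "Phase B": ["PDR"],
        "Phase C": ["CDR/PRR", "SIR"],
        "Phase D": ["TRR", "SAR", "ORR", "FRR"],
        "Phase E": ["PLAR", "CERR", "PFAR"],
        "Phase F": ["DR"],
    }

    for phase, reviews in major_reviews_by_phase.items():
        print(f"{phase}: {', '.join(reviews)}")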
A summary of the required gate products for the program Formulation phase can be found in NPR 7120.5. Formulation for all program types is the same, involving one or more program reviews followed by KDP I, where a decision is made approving a program to begin implementation. Typically, there is no incentive to move a program into implementation until its first project is ready for implementation.

Program Formulation
Purpose
To establish a cost-effective program that is demonstrably capable of meeting Agency and mission directorate goals and objectives
Typical Activities and Their Products
• Develop program requirements and allocate them to initial projects
• Define and approve program acquisition strategies
• Develop interfaces to other programs
• Start development of technologies that cut across multiple projects within the program
• Derive initial cost estimates and approve a program budget
• Perform required program Formulation technical activities defined in NPR 7120.5
• Satisfy program Formulation reviews' entrance/success criteria detailed in NPR 7123.1
Reviews
• P/SRR
• P/SDR

3.2 Program Implementation
During the program Implementation phase, the PM works with the Mission Directorate Associate Administrator (MDAA) and the constituent project managers to execute the program plan cost effectively. Program reviews ensure that the program continues to contribute to Agency and mission directorate goals and objectives within funding constraints. A summary of the required gate products for the program Implementation phase can be found in NPR 7120.5.

The program life cycle has two different implementation paths, depending on program type. Each implementation path has different types of major reviews. For uncoupled and loosely coupled programs, the Implementation phase only requires PSRs and PIRs to assess the program's performance and make a recommendation on its authorization at KDPs approximately every two years. Single-project and tightly coupled programs are more complex. For single-project programs, the Implementation phase program reviews shown in Figure 3.0-1 are synonymous (not duplicative) with the project reviews in the project life cycle (see Figure 3.0-2) through Phase D. Once in operations, these programs usually have biennial KDPs preceded by attendant PSRs/PIRs. Tightly coupled programs during implementation have program reviews tied to the project reviews to ensure the proper integration of projects into the larger system. Once in operations, tightly coupled programs also have biennial PSRs/PIRs/KDPs to assess the program's performance and authorize its continuation.

Program Implementation
Purpose
To execute the program and constituent projects and ensure the program continues to contribute to Agency goals and objectives within funding constraints
Typical Activities and Their Products
• Initiate projects through direct assignment or competitive process (e.g., Request for Proposal (RFP), Announcement of Opportunity (AO))
• Monitor each project's formulation, approval, implementation, integration, operation, and ultimate decommissioning
• Adjust program as resources and requirements change
• Perform required program Implementation technical activities from NPR 7120.5
• Satisfy program Implementation reviews' entrance/success criteria from NPR 7123.1
Reviews
• PSR/PIR (uncoupled and loosely coupled programs only)
• Reviews synonymous (not duplicative) with the project reviews in the project life cycle (see Figure 3.0-2) through Phase D (single-project and tightly coupled programs only)
3.3 Project Pre-Phase A: Concept Studies
The purpose of this phase, which is usually performed more or less continually by concept study groups, is to devise various feasible concepts from which new projects (programs) can be selected. Typically, this activity consists of loosely structured examinations of new ideas, usually without central control and mostly oriented toward small studies. Its major product is a list of suggested projects, based on the identification of needs and the discovery of opportunities that are potentially consistent with NASA's mission, capabilities, priorities, and resources.

Advanced studies may extend for several years and may be a sequence of papers that are only loosely connected. These studies typically focus on establishing mission goals and formulating top-level system requirements and ConOps. Conceptual designs are often offered to demonstrate feasibility and support programmatic estimates. The emphasis is on establishing feasibility and desirability rather than optimality. Analyses and designs are accordingly limited in both depth and number of options.

Pre-Phase A: Concept Studies
Purpose
To produce a broad spectrum of ideas and alternatives for missions from which new programs/projects can be selected
Typical Activities and Products (Note: AO projects will have defined the deliverable products.)
• Identify missions and architecture consistent with charter
• Identify and involve users and other stakeholders
• Identify and perform tradeoffs and analyses
• Identify requirements, which include:
  ▶ Mission,
  ▶ Science, and
  ▶ Top-level system.
• Define measures of effectiveness and measures of performance
• Identify top-level technical performance measures
• Perform preliminary evaluations of possible missions
• Prepare program/project proposals, which may include:
  ▶ Mission justification and objectives;
  ▶ Possible ConOps;
  ▶ High-level WBSs;
  ▶ Cost, schedule, and risk estimates; and
  ▶ Technology assessment and maturation strategies.
• Prepare preliminary mission concept report
• Perform required Pre-Phase A technical activities from NPR 7120.5
• Satisfy MCR entrance/success criteria from NPR 7123.1
Reviews
• MCR
• Informal proposal review

3.4 Project Phase A: Concept and Technology Development
During Phase A, activities are performed to fully develop a baseline mission concept and begin or assume responsibility for the development of needed technologies. This work, along with interactions with stakeholders, helps establish a mission concept and the program requirements on the project.

In Phase A, a team—often associated with a program or informal project office—readdresses the mission concept to ensure that the project justification and practicality are sufficient to warrant a place in NASA's budget. The team's effort focuses on analyzing mission requirements and establishing a mission architecture. Activities become formal, and the emphasis shifts toward establishing optimality rather than feasibility. The effort addresses more depth and considers many alternatives. Goals and objectives are solidified, and the project develops more definition in the system requirements, top-level system architecture, and ConOps. Conceptual designs are developed and exhibit more engineering detail than in advanced studies. Technical risks are identified in more detail, and technology development needs become focused.

In Phase A, the effort focuses on allocating functions to particular items of hardware, software, personnel, etc. System functional and performance requirements, along with architectures and designs, become firm as system tradeoffs and subsystem tradeoffs iterate back and forth in the effort to seek out more cost-effective designs.
Phase A: Concept and Technology Development
Purpose
To determine the feasibility and desirability of a suggested new major system and establish an initial baseline compatibility with NASA's strategic plans
Typical Activities and Their Products
• Prepare and initiate a project plan
• Develop top-level requirements and constraints
• Define and document system requirements (hardware and software)
• Allocate preliminary system requirements to next lower level
• Define system software functionality description and requirements
• Define and document internal and external interface requirements
• Identify integrated logistics support requirements
• Develop corresponding evaluation criteria and metrics
• Document the ConOps
• Baseline the mission concept report
• Demonstrate that credible, feasible design(s) exist
• Perform and archive trade studies
• Develop mission architecture
• Initiate environmental evaluation/National Environmental Policy Act process
• Develop initial orbital debris assessment (NASA Safety Standard 1740.14)
• Establish technical resource estimates
• Define life-cycle cost estimates and develop system-level cost-effectiveness model
• Define the WBS
• Develop SOWs
• Acquire systems engineering tools and models
• Baseline the SEMP
• Develop system risk analyses
• Prepare and initiate a risk management plan
• Prepare and initiate a configuration management plan
• Prepare and initiate a data management plan
• Prepare engineering specialty plans (e.g., contamination control plan, electromagnetic interference/electromagnetic compatibility control plan, reliability plan, quality control plan, parts management plan)
• Prepare a safety and mission assurance plan
• Prepare a software development or management plan (see NPR 7150.2)
• Prepare a technology development plan and initiate advanced technology development
• Establish human rating plan
• Define verification and validation approach and document it in verification and validation plans
• Perform required Phase A technical activities from NPR 7120.5
• Satisfy Phase A reviews' entrance/success criteria from NPR 7123.1
Reviews
• SRR
• MDR (robotic mission only)
• SDR (human space flight only)
(Trade studies should precede—rather than follow—system design decisions.) Major products to this point include an accepted functional baseline for the system and its major end items. The effort also produces various engineering and management plans to prepare for managing the project's downstream processes, such as verification and operations, and for implementing engineering specialty programs.

3.5 Project Phase B: Preliminary Design and Technology Completion
During Phase B, activities are performed to establish an initial project baseline, which (according to NPR 7120.5 and NPR 7123.1) includes "a formal flow down of the project-level performance requirements to a complete set of system and subsystem design specifications for both flight and ground elements" and "corresponding preliminary designs." The technical requirements should be sufficiently detailed to establish firm schedule and cost estimates for the project. It also should be noted, especially for AO-driven projects, that Phase B is where the top-level requirements and the requirements flowed down to the next level are finalized and placed under configuration control. While the requirements should be baselined in Phase A, there are just enough changes resulting from the trade studies and analyses in late Phase A and early Phase B that changes are inevitable. However, by mid-Phase B, the top-level requirements should be finalized.

Actually, the Phase B baseline consists of a collection of evolving baselines covering technical and business aspects of the project: system (and subsystem) requirements and specifications, designs, verification and operations plans, and so on in the technical portion of the baseline, and schedules, cost projections, and management plans in the business portion. Establishment of baselines implies the implementation of configuration management procedures. (See Section 6.5.)

In Phase B, the effort shifts to establishing a functionally complete preliminary design solution (i.e., a functional baseline) that meets mission goals and objectives. Trade studies continue.

Phase B: Preliminary Design and Technology Completion
Purpose
To define the project in enough detail to establish an initial baseline capable of meeting mission needs
Typical Activities and Their Products
• Baseline the project plan
• Review and update documents developed and baselined in Phase A
• Develop science/exploration operations plan based on matured ConOps
• Update engineering specialty plans (e.g., contamination control plan, electromagnetic interference/electromagnetic compatibility control plan, reliability plan, quality control plan, parts management plan)
• Update technology maturation planning
• Report technology development results
• Update risk management plan
• Update cost and schedule data
• Finalize and approve top-level requirements and flowdown to the next level of requirements
• Establish and baseline design-to specifications (hardware and software) and drawings, verification and validation plans, and interface documents at lower levels
• Perform and archive trade studies' results
• Perform design analyses and report results
• Conduct engineering development tests and report results
• Select a baseline design solution
• Baseline a preliminary design report
• Define internal and external interface design solutions (e.g., interface control documents)
• Define system operations as well as PI/contract proposal management, review, and access and contingency planning
• Develop appropriate level safety data package
• Develop preliminary orbital debris assessment
• Perform required Phase B technical activities from NPR 7120.5
• Satisfy Phase B reviews' entrance/success criteria from NPR 7123.1
Reviews
• PDR
• Safety review
Interfaces among the major end items are defined. Engineering test items may be developed and used to derive data for further design work, and project risks are reduced by successful technology developments and demonstrations. Phase B culminates in a series of PDRs, containing the system-level PDR and PDRs for lower level end items as appropriate. The PDRs reflect the successive refinement of requirements into designs. (See the doctrine of successive refinement in Subsection 4.4.1.2 and Figure 4.4-2.) Design issues uncovered in the PDRs should be resolved so that final design can begin with unambiguous design-to specifications. From this point on, almost all changes to the baseline are expected to represent successive refinements, not fundamental changes. Prior to baselining, the system architecture, preliminary design, and ConOps must have been validated by enough technical analysis and design work to establish a credible, feasible design in greater detail than was sufficient for Phase A.

3.6 Project Phase C: Final Design and Fabrication
During Phase C, activities are performed to establish a complete design (allocated baseline), fabricate or produce hardware, and code software in preparation for integration. Trade studies continue. Engineering test units more closely resembling actual hardware are built and tested to establish confidence that the design will function in the expected environments. Engineering specialty analysis results are integrated into the design, and the manufacturing process and controls are defined and validated. All the planning initiated back in Phase A for the testing and operational equipment, processes and analysis, integration of the engineering specialty analysis, and manufacturing processes and controls is implemented. Configuration management continues to track and control design changes as detailed interfaces are defined. At each step in the successive refinement of the final design, corresponding integration and verification activities are planned in greater detail. During this phase, technical parameters, schedules, and budgets are closely tracked to ensure that undesirable trends (such as an unexpected growth in spacecraft mass or increase in its cost) are recognized early enough to take corrective action. These activities focus on preparing for the CDR, PRR (if required), and the SIR.

Phase C contains a series of CDRs, containing the system-level CDR and CDRs corresponding to the different levels of the system hierarchy. A CDR for each end item should be held prior to the start of fabrication/production for hardware and prior to the start of coding of deliverable software products. Typically, the sequence of CDRs reflects the integration process that will occur in the next phase—that is, from lower level CDRs to the system-level CDR. Projects, however, should tailor the sequencing of the reviews to meet the needs of the project. If there is a production run of products, a PRR will be performed to ensure the production plans, facilities, and personnel are ready to begin production. Phase C culminates with an SIR. The final product of this phase is a product ready for integration.

3.7 Project Phase D: System Assembly, Integration and Test, Launch
During Phase D, activities are performed to assemble, integrate, test, and launch the system. These activities focus on preparing for the FRR. Activities include assembly, integration, verification, and validation of the system, including testing the flight system to expected environments within margin. Other activities include the initial training of operating personnel and implementation of the logistics and spares planning. For flight projects, the focus of activities then shifts to prelaunch integration and launch. Although all these activities are conducted in this phase of a project, the planning for these activities was initiated in Phase A. The planning for the activities cannot be delayed until Phase D begins because the design of the project is too advanced to incorporate requirements for testing and operations. Phase D concludes with a system that has been shown to be capable of accomplishing the purpose for which it was created.
Phase C: Final Design and Fabrication
Purpose
To complete the detailed design of the system (and its associated subsystems, including its operations systems), fabricate hardware, and code software
Typical Activities and Their Products
• Update documents developed and baselined in Phase B
• Update interface documents
• Update mission operations plan based on matured ConOps
• Update engineering specialty plans (e.g., contamination control plan, electromagnetic interference/electromagnetic compatibility control plan, reliability plan, quality control plan, parts management plan)
• Augment baselined documents to reflect the growing maturity of the system, including the system architecture, WBS, and project plans
• Update and baseline production plans
• Refine integration procedures
• Baseline logistics support plan
• Add remaining lower level design specifications to the system architecture
• Complete manufacturing and assembly plans and procedures
• Establish and baseline build-to specifications (hardware and software) and drawings, verification and validation plans, and interface documents at all levels
• Baseline detailed design report
• Maintain requirements documents
• Maintain verification and validation plans
• Monitor project progress against project plans
• Develop verification and validation procedures
• Develop hardware and software detailed designs
• Develop the system integration plan and the system operation plan
• Develop the end-to-end information system design
• Develop spares planning
• Develop command and telemetry list
• Prepare launch site checkout and operations plans
• Prepare operations and activation plan
• Prepare system decommissioning/disposal plan, including human capital transition, for use in Phase F
• Finalize appropriate level safety data package
• Develop preliminary operations handbook
• Perform and archive trade studies
• Fabricate (or code) the product
• Perform testing at the component or subsystem level
• Identify opportunities for preplanned product improvement
• Baseline orbital debris assessment
• Perform required Phase C technical activities from NPR 7120.5
• Satisfy Phase C reviews' entrance/success criteria from NPR 7123.1
Reviews
• CDR
• PRR
• SIR
• Safety review
Phase D: System Assembly, Integration and Test, Launch
Purpose
To assemble and integrate the products and create the system, meanwhile developing confidence that it will be able to meet the system requirements; conduct launch and prepare for operations
Typical Activities and Their Products
• Integrate and verify items according to the integration and verification plans, yielding verified components and (sub)systems
• Monitor project progress against project plans
• Refine verification and validation procedures at all levels
• Perform system qualification verifications
• Perform system acceptance verifications and validation(s) (e.g., end-to-end tests encompassing all elements (i.e., space element, ground system, data processing system))
• Perform system environmental testing
• Assess and approve verification and validation results
• Resolve verification and validation discrepancies
• Archive documentation for verifications and validations performed
• Baseline verification and validation report
• Baseline "as-built" hardware and software documentation
• Update logistics support plan
• Document lessons learned
• Prepare and baseline operator's manuals
• Prepare and baseline maintenance manuals
• Approve and baseline operations handbook
• Train initial system operators and maintainers
• Train on contingency planning
• Finalize and implement spares planning
• Confirm telemetry validation and ground data processing
• Confirm system and support elements are ready for flight
• Integrate with launch vehicle(s) and launch, perform orbit insertion, etc., to achieve a deployed system
• Perform initial operational verification(s) and validation(s)
• Perform required Phase D technical activities from NPR 7120.5
• Satisfy Phase D reviews' entrance/success criteria from NPR 7123.1
Reviews
• TRR (at all levels)
• SAR (human space flight only)
• ORR
• FRR
• System functional and physical configuration audits
• Safety review
3.8 Project Phase E: Operations and Sustainment

During Phase E, activities are performed to conduct the prime mission and meet the initially identified need and maintain support for that need. The products of the phase are the results of the mission. This phase encompasses the evolution of the system only insofar as that evolution does not involve major changes to the system architecture. Changes of that scope constitute new “needs,” and the project life cycle starts over. For large flight projects, there may be an extended period of cruise, orbit insertion, on-orbit assembly, and initial shakedown operations. Near the end of the prime mission, the project may apply for a mission extension to continue mission activities or attempt to perform additional mission objectives.

Phase E: Operations and Sustainment

Purpose
To conduct the mission and meet the initially identified need and maintain support for that need.

Typical Activities and Their Products
 Conduct launch vehicle performance assessment
 Conduct in-orbit spacecraft checkout
 Commission and activate science instruments
 Conduct the intended prime mission(s)
 Collect engineering and science data
 Train replacement operators and maintainers
 Train the flight team for future mission phases (e.g., planetary landed operations)
 Maintain and approve operations and maintenance logs
 Maintain and upgrade the system
 Address problem/failure reports
 Process and analyze mission data
 Apply for mission extensions, if warranted, and conduct mission activities if awarded
 Prepare for deactivation, disassembly, decommissioning as planned (subject to mission extension)
 Complete post-flight evaluation reports
 Complete final mission report
 Perform required Phase E technical activities from NPR 7120.5
 Satisfy Phase E reviews’ entrance/success criteria from NPR 7123.1

Reviews
 PLAR
 CERR
 PFAR (human space flight only)
 System upgrade review
 Safety review

3.9 Project Phase F: Closeout

During Phase F, activities are performed to implement the systems decommissioning/disposal plan and analyze any returned data and samples. The products of the phase are the results of the mission.

Phase F deals with the final closeout of the system when it has completed its mission; the time at which this occurs depends on many factors. For a flight system that returns to Earth with a short mission duration, closeout may require little more than deintegration of the hardware and its return to its owner. On flight projects of long duration, closeout may proceed according to established plans or may begin as a result of unplanned events, such as failures. Refer to NPD 8010.3, Notification of Intent to Decommission or Terminate Operating Space Systems and Terminate Missions, for terminating an operating mission. Alternatively, technological advances may make it uneconomical to continue operating the system either in its current configuration or an improved one.

Phase F: Closeout

Purpose
To implement the systems decommissioning/disposal plan developed in Phase C and analyze any returned data and samples.

Typical Activities and Their Products
 Dispose of the system and supporting processes
 Document lessons learned
 Baseline mission final report
 Archive data
 Begin transition of human capital (if applicable)
 Perform required Phase F technical activities from NPR 7120.5
 Satisfy Phase F reviews’ entrance/success criteria from NPR 7123.1

Reviews
 DR
To limit space debris, NPR 8715.6, NASA Procedural Requirements for Limiting Orbital Debris, provides guidelines for removing Earth-orbiting robotic satellites from their operational orbits at the end of their useful life. For Low Earth Orbiting (LEO) missions, the satellite is usually deorbited. For small satellites, this is accomplished by allowing the orbit to slowly decay until the satellite eventually burns up in the Earth’s atmosphere. Larger, more massive satellites and observatories must be designed to demise or be deorbited in a controlled manner so that they can be safely targeted for impact in a remote area of the ocean. Geostationary (GEO) satellites at 35,790 km above the Earth cannot be practically deorbited, so they are boosted to a higher orbit well beyond the crowded operational GEO orbit.

In addition to uncertainty as to when this part of the phase begins, the activities associated with safe closeout of a system may be long and complex and may affect the system design. Consequently, different options and strategies should be considered during the project’s earlier phases, along with the costs and risks associated with the different options.

3.10 Funding: The Budget Cycle

NASA operates with annual funding from Congress. This funding results, however, from a continuous rolling process of budget formulation, budget enactment, and finally, budget execution. NASA’s Financial Management Requirements (FMR) Volume 4 provides the concepts, the goals, and an overview of NASA’s budget system of resource alignment, referred to as Planning, Programming, Budgeting, and Execution (PPBE), and establishes guidance on the programming and budgeting phases of the PPBE process, which are critical to budget formulation for NASA. Volume 4 includes strategic budget planning and resources guidance, program review, budget development, budget presentation, and justification of estimates to the Office of Management and Budget (OMB) and to Congress. It also provides detailed descriptions of the roles and responsibilities for key players in each step of the process. It consolidates current legal, regulatory, and administrative policies and procedures applicable to NASA. A highly simplified representation of the typical NASA budget cycle is shown in Figure 3.10-1.

Figure 3.10-1 Typical NASA budget cycle (planning, programming, budgeting, and execution activities, from internal/external studies, strategic planning guidance, and program analysis and alignment, through the program decision memorandum, OMB budget, President’s budget, and appropriation, to operating plans, monthly phasing plans, reprogramming, and the performance and accountability report)
NASA typically starts developing its budget each February with economic forecasts and general guidelines as identified in the most recent President’s budget. By late August, NASA has completed the planning, programming, and budgeting phases of the PPBE process and prepares for submittal of a preliminary NASA budget to the OMB. A final NASA budget is submitted to the OMB in September for incorporation into the President’s budget transmittal to Congress, which generally occurs in January. This proposed budget is then subjected to congressional review and approval, culminating in the passage of bills authorizing NASA to obligate funds in accordance with congressional stipulations and appropriating those funds. The congressional process generally lasts through the summer. In recent years, however, final bills have often been delayed past the start of the fiscal year on October 1. In those years, NASA has operated on continuing resolution by Congress.

With annual funding, there is an implicit funding control gate at the beginning of every fiscal year. While these gates place planning requirements on the project and can make significant replanning necessary, they are not part of an orderly systems engineering process. Rather, they constitute one of the sources of uncertainty that affect project risks, and they are essential to consider in project planning.
4.0 System Design

This chapter describes the activities in the system design processes listed in Figure 2.1-1. The chapter is separated into sections corresponding to steps 1 to 4 listed in Figure 2.1-1. The processes within each step are discussed in terms of inputs, activities, and outputs. Additional guidance is provided using examples that are relevant to NASA projects. The system design processes are four interdependent, highly iterative and recursive processes, resulting in a validated set of requirements and a validated design solution that satisfies a set of stakeholder expectations. The four system design processes are to develop stakeholder expectations, technical requirements, logical decompositions, and design solutions.

Figure 4.0-1 illustrates the recursive relationship among the four system design processes. These processes start with a study team collecting and clarifying the stakeholder expectations, including the mission objectives, constraints, design drivers, operational objectives, and criteria for defining mission success. This set of stakeholder expectations and high-level requirements is used to drive an iterative design loop where a strawman architecture/design, the concept of operations, and derived requirements are developed. These three products must be consistent with each other and will require iterations and design decisions to achieve this consistency. Once consistency is achieved, analyses allow the project team to validate the design against the stakeholder expectations. A simplified validation asks the questions: Does the system work? Is the system safe and reliable? Is the system achievable within budget and schedule constraints? If the answer to any of these questions is no,
then changes to the design or stakeholder expectations will be required, and the process started again. This process continues until the system (architecture, ConOps, and requirements) meets the stakeholder expectations.

Figure 4.0-1 Interrelationships among the system design processes (stakeholder expectations and mission objectives feed a trade-study and iterative design loop of high-level requirements, ConOps, functional and logical decomposition, and the design and product breakdown structure, with decision analysis applied to the questions: Does it work? Is it safe and reliable? Is it affordable?)

The depth of the design effort must be sufficient to allow analytical verification of the design to the requirements. The design must be feasible and credible when judged by a knowledgeable independent review team and must have sufficient depth to support cost modeling.

Once the system meets the stakeholder expectations, the study team baselines the products and prepares for the next phase. Often, intermediate levels of decomposition are validated as part of the process. In the next level of decomposition, the baselined derived (and allocated) requirements become the set of high-level requirements for the decomposed elements and the process begins again. These system design processes are primarily applied in Pre-Phase A and continue through Phase C.

The system design processes during Pre-Phase A focus on producing a feasible design that will lead to Formulation approval. During Phase A, alternative designs and additional analytical maturity are pursued to optimize the design architecture. Phase B results in a preliminary design that satisfies the approval criteria. During Phase C, detailed, build-to designs are completed.

This has been a simplified description intended to demonstrate the recursive relationship among the system design processes. These processes should be used as guidance and tailored for each study team depending on the size of the project and the hierarchical level of the study team. The next sections describe each of the four system design processes and their associated products for a given NASA mission.

System Design Keys
 Successfully understanding and defining the mission objectives and operational concepts are keys to capturing the stakeholder expectations, which will translate into quality requirements over the life cycle of the project.
 Complete and thorough requirements traceability is a critical factor in successful validation of requirements.
 Clear and unambiguous requirements will help avoid misunderstanding when developing the overall system and when making major or minor changes.
 Document all decisions made during the development of the original design concept in the technical data package. This will make the original design philosophy and negotiation results available to assess future proposed changes and modifications against.
 The design solution verification occurs when an acceptable design solution has been selected and documented in a technical data package. The design solution is verified against the system requirements and constraints. However, the validation of a design solution is a continuing recursive and iterative process during which the design solution is evaluated against stakeholder expectations.
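The simplified validation loop described above and depicted in Figure 4.0-1 can also be expressed as a short program. The Python sketch below is illustrative only and is not a NASA-defined algorithm; the expectation values, the design values, and the toy performance-for-cost trade applied on each pass are hypothetical.

    # Illustrative sketch only: the simplified validation loop of Figure 4.0-1.
    # All values and the trade rule are hypothetical placeholders.

    def validate(design, expectations):
        """The three simplified validation questions from Section 4.0."""
        works = design["performance"] >= expectations["required_performance"]
        safe_and_reliable = design["redundancy"] >= expectations["min_redundancy"]
        affordable = design["cost"] <= expectations["budget"]
        return works and safe_and_reliable and affordable

    expectations = {"required_performance": 100.0, "min_redundancy": 2, "budget": 60.0}
    design = {"performance": 120.0, "redundancy": 2, "cost": 66.0}

    iterations = 0
    while not validate(design, expectations):
        # Toy design decision standing in for the trade studies of the
        # iterative design loop: give up some performance to reduce cost.
        design["performance"] -= 5.0
        design["cost"] -= 3.0
        iterations += 1

    print(f"Design baselined after {iterations} iteration(s): {design}")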
4.1 Stakeholder Expectations Definition

The Stakeholder Expectations Definition Process is the initial process within the SE engine that establishes the foundation from which the system is designed and the product is realized. The main purpose of this process is to identify who the stakeholders are and how they intend to use the product. This is usually accomplished through use-case scenarios, Design Reference Missions (DRMs), and ConOps.

4.1.1 Process Description

Figure 4.1-1 provides a typical flow diagram for the Stakeholder Expectations Definition Process and identifies typical inputs, outputs, and activities to consider in addressing stakeholder expectations definition.

Figure 4.1-1 Stakeholder Expectations Definition Process (typical inputs such as initial customer expectations, other stakeholder expectations, and customer flowdown requirements; activities from establishing the list of stakeholders through baselining the expectations; and outputs including validated stakeholder expectations, the concept of operations, enabling product support strategies, and measures of effectiveness)

4.1.1.1 Inputs
Typical inputs needed for the Stakeholder Expectations Definition Process would include the following:
 Upper Level Requirements and Expectations: These would be the requirements and expectations (e.g., needs, wants, desires, capabilities, constraints, external interfaces) that are being flowed down to a particular system of interest from a higher level (e.g., program, project, etc.).
 Identified Customers and Stakeholders: The organization or individual who has requested the product(s) and those who are affected by or are in some way accountable for the product’s outcome.

4.1.1.2 Process Activities

Identifying Stakeholders
Advocacy for new programs and projects may originate in many organizations. These include Presidential directives, Congress, NASA Headquarters (HQ), the NASA Centers, NASA advisory committees, the National Academy of Sciences,
the National Space Council, and many other groups in the science and space communities. These organizations are commonly referred to as stakeholders. A stakeholder is a group or individual who is affected by or is in some way accountable for the outcome of an undertaking.

Stakeholders can be classified as customers and other interested parties. Customers are those who will receive the goods or services and are the direct beneficiaries of the work. Examples of customers are scientists, project managers, and subsystems engineers.

Other interested parties are those who affect the project by providing broad, overarching constraints within which the customers’ needs must be achieved. These parties may be affected by the resulting product, the manner in which the product is used, or have a responsibility for providing life-cycle support services. Examples include Congress, advisory planning teams, program managers, users, operators, maintainers, mission partners, and NASA contractors. It is important that the list of stakeholders be identified early in the process, as well as the primary stakeholders who will have the most significant influence over the project.

Identifying Stakeholder Expectations
Stakeholder expectations, the vision of a particular stakeholder individual or group, result when they specify what is desired as an end state or as an item to be produced and put bounds upon the achievement of the goals. These bounds may encompass expenditures (resources), time to deliver, performance objectives, or other less obvious quantities such as organizational needs or geopolitical goals.

Figure 4.1-2 shows the type of information needed when defining stakeholder expectations and depicts how the information evolves into a set of high-level requirements. The yellow paths depict validation paths. Examples of the types of information that would be defined during each step are also provided.

Figure 4.1-2 Product flow for stakeholder expectations (mission authority, mission objectives, operational objectives, success criteria, and design drivers, with examples of the information defined at each step)

Defining stakeholder expectations begins with the mission authority and strategic objectives that the mission is meant to achieve. Mission authority changes depending on the category of the mission. For example, science missions are usually driven by NASA Science Mission Directorate strategic plans, whereas exploration missions may be driven by a Presidential directive.

An early task in defining stakeholder expectations is understanding the objectives of the mission. Clearly describing and documenting them helps ensure that the project team is working toward a common goal. These objectives form the basis for developing the mission, so they need to be clearly defined and articulated.

Defining the objectives is done by eliciting the needs, wants, desires, capabilities, external interfaces, assumptions, and constraints from the stakeholders. Arriving at an agreed-to set of objectives can be a long and arduous task. The proactive iteration with the stakeholders throughout the systems engineering process is the way
that all parties can come to a true understanding of what should be done and what it takes to do the job. It is important to know who the primary stakeholders are and who has the decision authority to help resolve conflicts.

The project team should also identify the constraints that may apply. A constraint is a condition that must be met. Sometimes a constraint is dictated by external factors such as orbital mechanics or the state of technology; sometimes constraints are the result of the overall budget environment. It is important to document the constraints and assumptions along with the mission objectives.

Operational objectives also need to be included in defining the stakeholder expectations. The operational objectives identify how the mission must be operated to achieve the mission objectives.

The mission and operational success criteria define what the mission must accomplish to be successful. This will be in the form of a measurement concept for science missions and an exploration concept for human exploration missions. The success criteria also define how well the concept measurements or exploration activities must be accomplished. The success criteria capture the stakeholder expectations and, along with programmatic requirements and constraints, are used within the high-level requirements.

The design drivers will be strongly dependent upon the ConOps, including the operational environment, orbit, and mission duration requirements. For science missions, the design drivers may include, at a minimum, the mission launch date, duration, and orbit. If alternative orbits are to be considered, a separate concept is needed for each orbit. Exploration missions must consider the destination, the duration, the operational sequence (and system configuration changes), and the in situ exploration activities that allow the exploration to succeed.

The end result of this step is the discovery and delineation of the system’s goals, which generally express the agreements, desires, and requirements of the eventual users of the system. The high-level requirements and success criteria are examples of the products representing the consensus of the stakeholders.

Note: It is extremely important to involve stakeholders in all phases of a project. Such involvement should be built in as a self-correcting feedback loop that will significantly enhance the chances of mission success. Involving stakeholders in a project builds confidence in the end product and serves as a validation and acceptance with the target audience.

4.1.1.3 Outputs
Typical outputs for capturing stakeholder expectations would include the following:
 Top-Level Requirements and Expectations: These would be the top-level requirements and expectations (e.g., needs, wants, desires, capabilities, constraints, and external interfaces) for the product(s) to be developed.
 ConOps: This describes how the system will be operated during the life-cycle phases to meet stakeholder expectations. It describes the system characteristics from an operational perspective and helps facilitate an understanding of the system goals. Examples would be the ConOps document or a DRM.

4.1.2 Stakeholder Expectations Definition Guidance

4.1.2.1 Concept of Operations
The ConOps is an important component in capturing stakeholder expectations, requirements, and the architecture of a project. It stimulates the development of the requirements and architecture related to the user elements of the system. It serves as the basis for subsequent definition documents such as the operations plan, launch and early orbit plan, and operations handbook and provides the foundation for the long-range operational planning activities such as operational facilities, staffing, and network scheduling.

The ConOps is an important driver in the system requirements and therefore must be considered early in the system design processes. Thinking through the ConOps and use cases often reveals requirements and design functions that might otherwise be overlooked. A simple example to illustrate this point is adding system requirements to allow for communication during a particular phase of a mission. This may require an additional antenna in a specific location that may not be required during the nominal mission.

The ConOps is important for all projects. For science projects, the ConOps describes how the systems will be operated to achieve the measurement set required for a
successful mission. They are usually driven by the data volume of the measurement set. The ConOps for exploration projects is likely to be more complex. There are typically more operational phases, more configuration changes, and additional communication links required for human interaction. For human spaceflight, functions and objectives must be clearly allocated between human operators and systems early in the project.

The ConOps should consider all aspects of operations including integration, test, and launch through disposal. Typical information contained in the ConOps includes a description of the major phases; operation timelines; operational scenarios and/or DRM; end-to-end communications strategy; command and data architecture; operational facilities; integrated logistic support (resupply, maintenance, and assembly); and critical events. The operational scenarios describe the dynamic view of the systems’ operations and include how the system is perceived to function throughout the various modes and mode transitions, including interactions with external interfaces. For exploration missions, multiple DRMs make up a ConOps. The design and performance analysis leading to the requirements must satisfy all of them. Figure 4.1-3 illustrates typical information included in the ConOps for a science mission, and Figure 4.1-4 is an example of an end-to-end operational architecture. For more information about developing the ConOps, see ANSI/AIAA G-043-1992, Guide for the Preparation of Operational Concept Documents.

Figure 4.1-3 Typical ConOps development for a science mission (developing the operations timeline, operational configurations, and critical events; identifying operational facilities; defining end-to-end communication links, organizational responsibilities, and flight, ground, and launch segment drivers; and developing operational requirements)

Figure 4.1-4 Example of an associated end-to-end operational architecture (observatory, S-band and Ka-band ground stations, mission operations center, flight dynamics and data distribution systems, and instrument science operations centers)

The operation timelines provide the basis for defining system configurations, operational activities, and other sequenced related elements necessary to achieve the mission objectives for each operational phase. They describe the activities, tasks, and other sequenced related elements necessary to achieve the mission objectives in each of the phases. Depending on the type of project (science, exploration, operational), the timeline could become quite complex.

The timeline matures along with the design. It starts as a simple time-sequenced order of the major events and matures into a detailed description of subsystem operations during all major mission modes or transitions. Examples of a lunar sortie timeline and DRM early in the life cycle are shown in Figures 4.1-5a and 4.1-5b, respectively. An example of a more detailed, integrated timeline later in the life cycle for a science mission is shown in Figure 4.1-6.

Figure 4.1-5a Example of a lunar sortie timeline developed early in the life cycle (major operational phases, from integration and test through reentry and landing operations, against elapsed time in weeks)

Figure 4.1-5b Example of a lunar sortie DRM early in the life cycle

Figure 4.1-6 Example of a more detailed, integrated timeline later in the life cycle for a science mission (launch and early orbit events laid out against subsystem rows such as ground station coverage, launch vehicle, GN&C, propulsion, C&DH/RF, power, deployables, thermal, instruments, and ground operations)
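As a purely illustrative aside (not from the handbook), the simple, time-sequenced ordering of major events described above can be captured as structured data long before a detailed integrated timeline exists. In the Python sketch below, the phase names follow the lunar sortie example of Figure 4.1-5a; the durations are hypothetical.

    # Minimal illustrative sketch: an early, simple time-sequenced timeline.
    # Phase names follow Figure 4.1-5a; the durations (in days) are hypothetical.
    from collections import namedtuple

    Phase = namedtuple("Phase", ["name", "duration_days"])

    early_timeline = [
        Phase("Launch Operations", 1),
        Phase("LEO Operations", 1),
        Phase("Lunar Transfer Operations", 4),
        Phase("Lunar Orbit Operations", 2),
        Phase("Lunar Surface Operations", 7),
        Phase("Earth Transfer Operations", 4),
        Phase("Reentry and Landing Operations", 1),
    ]

    elapsed = 0
    for phase in early_timeline:
        print(f"Day {elapsed:3d} - {elapsed + phase.duration_days:3d}: {phase.name}")
        elapsed += phase.duration_days
    print(f"Total mission duration: {elapsed} days")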
An important part of the ConOps is defining the operational phases, which will span project Phases D, E, and F. The operational phases provide a time-sequenced structure for defining the configuration changes and operational activities needed to be carried out to meet the goals of the mission. For each of the operational phases, facilities, equipment, and critical events should also be included. Table 4.1-1 identifies some common examples of operational phases for a NASA mission.
Table 4.1-1 Typical Operational Phases for a NASA Mission

Integration and test operations
 Project Integration and Test: During the latter period of project integration and test, the system is tested by performing operational simulations during functional and environmental testing. The simulations typically exercise the end-to-end command and data system to provide a complete verification of system functionality and performance against simulated project operational scenarios.
 Launch Integration: The launch integration phase may repeat integration and test operational and functional verification in the launch-integrated configuration.

Launch operations
 Launch: Launch operation occurs during the launch countdown, launch ascent, and orbit injection. Critical event telemetry is an important driver during this phase.
 Deployment: Following orbit injection, spacecraft deployment operations reconfigure the spacecraft to its orbital configuration. Typically, critical events covering solar array, antenna, and other deployments and orbit trim maneuvers occur during this phase.
 In-Orbit Checkout: In-orbit checkout is used to perform a verification that all systems are healthy. This is followed by on-orbit alignment, calibration, and parameterization of the flight systems to prepare for science operations.

Science operations
The majority of the operational lifetime is used to perform science operations.

Safe-hold operations
As a result of on-board fault detection or by ground command, the spacecraft may transition to a safe-hold mode. This mode is designed to maintain the spacecraft in a power positive, thermally stable state until the fault is resolved and science operations can resume.

Anomaly resolution and maintenance operations
Anomaly resolution and maintenance operations occur throughout the mission. They may require resources beyond established operational resources.

Disposal operations
Disposal operations occur at the end of project life. These operations are used to either provide a controlled reentry of the spacecraft or a repositioning of the spacecraft to a disposal orbit. In the latter case, the dissipation of stored fuel and electrical energy is required.
4.2 Technical Requirements Definition

The Technical Requirements Definition Process transforms the stakeholder expectations into a definition of the problem and then into a complete set of validated technical requirements expressed as “shall” statements that can be used for defining a design solution for the Product Breakdown Structure (PBS) model and related enabling products. The process of requirements definition is a recursive and iterative one that develops the stakeholders’ requirements, product requirements, and lower level product/component requirements (e.g., PBS model products such as systems or subsystems and related enabling products such as external systems that provide or consume data). The requirements should enable the description of all inputs, outputs, and required relationships between inputs and outputs. The requirements documents organize and communicate requirements to the customer and other stakeholders and the technical community.

Technical requirements definition activities apply to the definition of all technical requirements from the program, project, and system levels down to the lowest level product/component requirements document.

It is important to note that the team must not rely solely on the requirements received to design and build the system. Communication and iteration with the relevant stakeholders are essential to ensure a mutual understanding of each requirement. Otherwise, the designers run the risk of misunderstanding and implementing an unwanted solution to a different interpretation of the requirements.

4.2.1 Process Description

Figure 4.2-1 provides a typical flow diagram for the Technical Requirements Definition Process and identifies typical inputs, outputs, and activities to consider in addressing technical requirements definition.

Figure 4.2-1 Technical Requirements Definition Process (baselined stakeholder expectations, concept of operations, enabling support strategies, and measures of effectiveness feed activities that analyze the scope of the problem; define constraints, functional and behavioral expectations, and performance requirements; express the requirements as acceptable “shall” statements; define measures of performance and technical performance measures; and validate and baseline the technical requirements)
4.2.1.1 Inputs
Typical inputs needed for the requirements process would include the following:
 Top-Level Requirements and Expectations: These would be the agreed-to top-level requirements and expectations (e.g., needs, wants, desires, capabilities, constraints, external interfaces) for the product(s) to be developed coming from the customer and other stakeholders.
 Concept of Operations: This describes how the system will be operated during the life-cycle phases to meet stakeholder expectations. It describes the system characteristics from an operational perspective and helps facilitate an understanding of the system goals. Examples would be a ConOps document or a DRM.

4.2.1.2 Process Activities
The top-level requirements and expectations are initially assessed to understand the technical problem to be solved and establish the design boundary. This boundary is typically established by performing the following activities:
 Defining constraints that the design must adhere to or how the system will be used. The constraints are typically not able to be changed based on tradeoff analyses.
 Identifying those elements that are already under design control and cannot be changed. This helps establish those areas where further trades will be performed to narrow potential design solutions.
 Establishing physical and functional interfaces (e.g., mechanical, electrical, thermal, human, etc.) with which the system must interact.
 Defining functional and behavioral expectations for the range of anticipated uses of the system as identified in the ConOps. The ConOps describes how the system will be operated and the possible use-case scenarios.

With an overall understanding of the constraints, physical/functional interfaces, and functional/behavioral expectations, the requirements can be further defined by establishing performance criteria. The performance is expressed as the quantitative part of the requirement to indicate how well each product function is expected to be accomplished.

Finally, the requirements should be defined in acceptable “shall” statements, which are complete sentences with a single “shall” per statement. See Appendix C for guidance on how to write good requirements and Appendix E for validating requirements. A well-written requirements document provides several specific benefits to both the stakeholders and the technical team, as shown in Table 4.2-1.

Table 4.2-1 Benefits of Well-Written Requirements
 Establish the basis for agreement between the stakeholders and the developers on what the product is to do: The complete description of the functions to be performed by the product specified in the requirements will assist the potential users in determining if the product specified meets their needs or how the product must be modified to meet their needs. During system design, requirements are allocated to subsystems (e.g., hardware, software, and other major components of the system), people, or processes.
 Reduce the development effort because less rework is required to address poorly written, missing, and misunderstood requirements: The Technical Requirements Definition Process activities force the relevant stakeholders to consider rigorously all of the requirements before design begins. Careful review of the requirements can reveal omissions, misunderstandings, and inconsistencies early in the development cycle when these problems are easier to correct, thereby reducing costly redesign, remanufacture, recoding, and retesting in later life-cycle phases.
 Provide a basis for estimating costs and schedules: The description of the product to be developed as given in the requirements is a realistic basis for estimating project costs and can be used to evaluate bids or price estimates.
 Provide a baseline for validation and verification: Organizations can develop their validation and verification plans much more productively from a good requirements document. Both system and subsystem test plans and procedures are generated from the requirements. As part of the development, the requirements document provides a baseline against which compliance can be measured. The requirements are also used to provide the stakeholders with a basis for acceptance of the system.
 Facilitate transfer: The requirements make it easier to transfer the product to new users or new machines. Stakeholders thus find it easier to transfer the product to other parts of their organization, and developers find it easier to transfer it to new stakeholders or reuse it.
 Serve as a basis for enhancement: The requirements serve as a basis for later enhancement or alteration of the finished product.

4.2.1.3 Outputs
Typical outputs for the Technical Requirements Definition Process would include the following:
 Technical Requirements: This would be the approved set of requirements that represents a complete description of the problem to be solved and requirements that have been validated and approved by the customer and stakeholders. Examples of documentation that capture the requirements are a System Requirements Document (SRD), Project Requirements Document (PRD), Interface Requirements Document (IRD), etc.
 Technical Measures: An established set of measures based on the expectations and requirements that will be tracked and assessed to determine overall system or product effectiveness and customer satisfaction. Common terms for these measures are Measures of Effectiveness (MOEs), Measures of Performance (MOPs), and Technical Performance Measures (TPMs). See Section 6.7 for further details; a simple illustrative tracking sketch follows this list.
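As a simple illustration of how such measures are tracked over the life cycle, the Python sketch below monitors a single hypothetical TPM (spacecraft dry mass) as a current best estimate against an allocation. The measure, the values, and the margin convention are assumptions for illustration, not handbook requirements.

    # Illustrative sketch only: tracking a Technical Performance Measure (TPM).
    # The measure, values, and margin convention are hypothetical.

    def tpm_margin(current_estimate, allocation):
        """Remaining margin as a fraction of the allocation."""
        return (allocation - current_estimate) / allocation

    mass_allocation_kg = 1500.0
    estimates_by_review = {"SRR": 1210.0, "PDR": 1295.0, "CDR": 1380.0}

    for review, estimate in estimates_by_review.items():
        margin = tpm_margin(estimate, mass_allocation_kg)
        status = "OK" if margin >= 0.0 else "OVER ALLOCATION"
        print(f"{review}: estimate {estimate:7.1f} kg, margin {margin:6.1%} ({status})")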
allocated down to design elements through the PBS.With an overall understanding of the constraints, phys- Functional, performance, and interface requirementsical/functional interfaces, and functional/behavioral ex- are very important but do not constitute the entire setpectations, the requirements can be further defined by of requirements necessary for project success. The spaceestablishing performance criteria. The performance is segment design elements must also survive and con-expressed as the quantitative part of the requirement to tinue to perform in the project environment. These en-indicate how well each product function is expected to vironmental drivers include radiation, thermal, acoustic, mechanical loads, contamination, radio frequency, andbe accomplished. others. In addition, reliability requirements drive designFinally, the requirements should be defined in accept- choices in design robustness, failure tolerance, and re-able “shall” statements, which are complete sentences dundancy. Safety requirements drive design choices inwith a single “shall” per statement. See Appendix C for providing diverse functional redundancy. Other spe- NASA Systems Engineering Handbook  41
Other specialty requirements also may affect design choices. These may include producibility, maintainability, availability, upgradeability, human factors, and others. Unlike functional needs requirements, which are decomposed and allocated to design elements, these requirements are levied across major project elements. Designing to meet these requirements requires careful analysis of design alternatives. Figure 4.2-2 shows the characteristics of functional, operational, reliability, safety, and specialty requirements. Top-level mission requirements are generated from mission objectives, programmatic constraints, and assumptions. These are normally grouped into function and performance requirements and include the categories of requirements in Figure 4.2-2.

Figure 4.2-2 Characteristics of functional, operational, reliability, safety, and specialty requirements (technical requirements, with functional, performance, and interface requirements allocated hierarchically to the PBS; operational requirements that drive functional requirements, such as the mission timeline sequence, mission configurations, and command and telemetry strategy; reliability requirements levied across systems as project standards, such as mission environments, robustness, fault tolerance, diverse redundancy, and verification process and workmanship; safety requirements levied across systems as project standards, such as orbital debris and reentry, planetary protection, toxic substances, pressurized vessels, radio frequency energy, and system safety; and specialty requirements that drive product designs, such as producibility, maintainability, and asset protection)

Functional requirements define what functions need to be done to accomplish the objectives. Performance requirements define how well the system needs to perform the functions.

Functional Requirements
The functional requirements need to be specified for all intended uses of the product over its entire lifetime. Functional analysis is used to draw out both functional and performance requirements. Requirements are partitioned into groups, based on established criteria (e.g., similar functionality, performance, or coupling), to facilitate and focus the requirements analysis. Functional and performance requirements are allocated to functional partitions and subfunctions, objects, people, or processes. Sequencing of time-critical functions is considered. Each function is identified and described in terms of inputs, outputs, and interface requirements from the top down so that the decomposed functions are recognized as part of larger functional groupings. Functions are arranged in a logical sequence so that any specified operational usage of the system can be traced in an end-to-end path to indicate the sequential relationship of all functions that must be accomplished by the system.
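The following Python sketch is illustrative only: it shows one minimal way to record a top-down functional decomposition in which each function is described by its inputs and outputs and is recognized as part of its larger functional grouping. The function names, inputs, and outputs are hypothetical and are not drawn from any NASA reference design.

    # Illustrative sketch only: a top-down functional decomposition in which each
    # function is described by its inputs and outputs. All names are hypothetical.

    functions = {
        "Provide science data": {
            "inputs": ["observation plan"], "outputs": ["science data products"],
            "children": ["Acquire measurements", "Downlink data"],
        },
        "Acquire measurements": {
            "inputs": ["pointing commands"], "outputs": ["raw instrument data"],
            "children": [],
        },
        "Downlink data": {
            "inputs": ["raw instrument data"], "outputs": ["telemetry frames"],
            "children": [],
        },
    }

    def walk(name, depth=0):
        f = functions[name]
        print("  " * depth + f"{name}  (in: {', '.join(f['inputs'])}; "
              f"out: {', '.join(f['outputs'])})")
        for child in f["children"]:
            walk(child, depth + 1)

    walk("Provide science data")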
It is helpful to walk through the ConOps and scenarios asking the following types of questions: what functions need to be performed, where do they need to be performed, how often, under what operational and environmental conditions, etc. Thinking through this process often reveals additional functional requirements.

Initial Function Statement
The Thrust Vector Controller (TVC) shall provide vehicle control about the pitch and yaw axes.
This statement describes a high-level function that the TVC must perform. The technical team needs to transform this statement into a set of design-to functional and performance requirements.

Functional Requirements with Associated Performance Requirements
 The TVC shall gimbal the engine a maximum of 9 degrees, ± 0.1 degree.
 The TVC shall gimbal the engine at a maximum rate of 5 degrees/second ± 0.3 degrees/second.
 The TVC shall provide a force of 40,000 pounds, ± 500 pounds.
 The TVC shall have a frequency response of 20 Hz, ± 0.1 Hz.

Performance Requirements
Performance requirements quantitatively define how well the system needs to perform the functions. Again, walking through the ConOps and the scenarios often draws out the performance requirements by asking the following types of questions: how often and how well, to what accuracy (e.g., how good does the measurement need to be), what is the quality and quantity of the output, under what stress (maximum simultaneous data requests) or environmental conditions, for what duration, at what range of values, at what tolerance, and at what maximum throughput or bandwidth capacity.

Be careful not to make performance requirements too restrictive. For example, for a system that must be able to run on rechargeable batteries, if the performance requirements specify that the time to recharge should be less than 3 hours when a 12-hour recharge time would be sufficient, potential design solutions are eliminated. In the same sense, if the performance requirements specify that a weight must be within ±0.5 kg when ±2.5 kg is sufficient, metrology cost will increase without adding value to the product.

Wherever possible, define the performance requirements in terms of (1) a threshold value (the minimum acceptable value needed for the system to carry out its mission) and (2) the baseline level of performance desired. Specifying performance in terms of thresholds and baseline requirements provides the system designers with trade space in which to investigate alternative designs.
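As an illustration of how quantified, toleranced performance requirements can later be checked, the Python sketch below reuses the TVC values from the example above; the “measured” values and the pass/fail check itself are hypothetical and purely illustrative.

    # Illustrative sketch: checking measured performance against toleranced
    # requirements. Requirement values follow the TVC example; measured values
    # are hypothetical.

    tvc_requirements = {
        "gimbal angle (deg)":      (9.0, 0.1),
        "gimbal rate (deg/s)":     (5.0, 0.3),
        "actuation force (lbf)":   (40000.0, 500.0),
        "frequency response (Hz)": (20.0, 0.1),
    }

    measured = {
        "gimbal angle (deg)": 9.06,
        "gimbal rate (deg/s)": 5.4,        # out of tolerance
        "actuation force (lbf)": 39750.0,
        "frequency response (Hz)": 20.02,
    }

    for name, (nominal, tolerance) in tvc_requirements.items():
        value = measured[name]
        ok = abs(value - nominal) <= tolerance
        print(f"{name:26s} {value:10.2f}  requirement {nominal} ± {tolerance}  "
              f"{'PASS' if ok else 'FAIL'}")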
All qualitative performance expectations must be analyzed and translated into quantified performance requirements. Trade studies often help quantify performance requirements. For example, tradeoffs can show whether a slight relaxation of the performance requirement could produce a significantly cheaper system or whether a few more resources could produce a significantly more effective system. The rationale for thresholds and goals should be documented with the requirements to understand the reason and origin for the performance requirement in case it must be changed. The performance requirements that can be quantified by or changed by tradeoff analysis should be identified. See Section 6.8, Decision Analysis, for more information on tradeoff analysis.

Interface Requirements
It is important to define all interface requirements for the system, including those to enabling systems. The external interfaces form the boundaries between the product and the rest of the world. Types of interfaces include operational command and control, computer to computer, mechanical, electrical, thermal, and data. One useful tool in defining interfaces is the context diagram (see Appendix F), which depicts the product and all of its external interfaces. Once the product components are defined, a block diagram showing the major components, interconnections, and external interfaces of the system should be developed to define both the components and their interactions.

Interfaces associated with all product life-cycle phases should also be considered. Examples include interfaces with test equipment; transportation systems; Integrated Logistics Support (ILS) systems; and manufacturing facilities, operators, users, and maintainers.

As the technical requirements are defined, the interface diagram should be revisited and the documented interface requirements refined to include newly identified interface information for requirements both internal and external. More information regarding interfaces can be found in Section 6.3.

Environmental Requirements
Each space mission has a unique set of environmental requirements that apply to the flight segment elements. It is a critical function of systems engineering to identify the external and internal environments for the particular mission, analyze and quantify the expected environments, develop design guidance, and establish a margin philosophy against the expected environments.

The environments envelope should consider what can be encountered during ground test, storage, transportation, launch, deployment, and normal operations from beginning of life to end of life. Requirements derived from the mission environments should be included in the system requirements.

External and internal environment concerns that must be addressed include acceleration, vibration, shock, static loads, acoustic, thermal, contamination, crew-induced loads, total dose radiation/radiation effects, Single-Event Effects (SEEs), surface and internal charging, orbital debris, atmospheric (atomic oxygen) control and quality, attitude control system disturbance (atmospheric drag, gravity gradient, and solar pressure), magnetic, pressure gradient during launch, microbial growth, and radio frequency exposure on the ground and on orbit.

The requirements structure must address the specialty engineering disciplines that apply to the mission environments across project elements. These discipline areas levy requirements on system elements regarding Electromagnetic Interference and Electromagnetic Compatibility (EMI/EMC), grounding, radiation and other shielding, contamination protection, and reliability.

Reliability Requirements
Reliability can be defined as the probability that a device, product, or system will not fail for a given period of time under specified operating conditions. Reliability is an inherent system design characteristic. As a principal contributing factor in operations and support costs and in system effectiveness, reliability plays a key role in determining the system’s cost-effectiveness.

Reliability engineering is a major specialty discipline that contributes to the goal of a cost-effective system. This is primarily accomplished in the systems engineering process through an active role in implementing specific design features to ensure that the system can perform in the predicted physical environments throughout the mission, and by making independent predictions of system reliability for design trades and for test program, operations, and integrated logistics support planning.

Reliability requirements ensure that the system (and subsystems, e.g., software and hardware) can perform in the predicted environments and conditions as expected throughout the mission and that the system has the ability to withstand certain numbers and types of faults, errors, or failures (e.g., withstand vibration, predicted data rates, command and/or data errors, single-event
upsets, and temperature variances to specified limits). Environments can include ground (transportation and handling), launch, on-orbit (Earth or other), planetary, reentry, and landing, or they might be for software within certain modes or states of operation. Reliability addresses design and verification requirements to meet the requested level of operation as well as fault and/or failure tolerance for all expected environments and conditions. Reliability requirements cover fault/failure prevention, detection, isolation, and recovery.
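For quantitative reliability predictions of the kind mentioned above, a commonly used model from standard reliability engineering practice (not specific to this handbook) assumes a constant failure rate, so that the probability of operating without failure for a time t is exp(-λt); reliabilities of items in series multiply, and a one-of-two redundant pair fails only if both units fail. The failure rate and mission duration in the Python sketch below are hypothetical.

    # Standard constant-failure-rate reliability relationships (values are
    # illustrative only): R(t) = exp(-lambda * t); series elements multiply;
    # a one-of-two redundant pair fails only when both units fail.
    import math

    def reliability(failure_rate_per_hr, mission_hours):
        return math.exp(-failure_rate_per_hr * mission_hours)

    mission_hours = 5 * 365 * 24                   # hypothetical 5-year mission
    r_single = reliability(2.0e-6, mission_hours)  # hypothetical unit failure rate

    r_series_of_three = r_single ** 3                # three such units in series
    r_redundant_pair = 1.0 - (1.0 - r_single) ** 2   # one-of-two redundancy

    print(f"single unit:     {r_single:.4f}")
    print(f"three in series: {r_series_of_three:.4f}")
    print(f"redundant pair:  {r_redundant_pair:.4f}")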
tobackup processor); detection and automatic system shut- ensure that the human stays healthy and effective, thedown if specified values (e.g., temperature) exceed pre- workplace environment, and crew-system physical andscribed safety limits; use of only a subset that is approved cognitive interfaces.for safety-critical software of a particular computer lan-guage; caution or warning devices; and safety procedures. 4.2.2.3 Requirements Decomposition,A risk-informed safety requirement is a requirement that Allocation, and Validationhas been established, at least in part, on the basis of the Requirements are decomposed in a hierarchical struc-consideration of safety-related TPMs and their associ- ture starting with the highest level requirements im-ated uncertainty. An example of a risk-informed safety posed by Presidential directives, mission directorates,requirement is the Probability of Loss of Crew (P(LOC)) program, Agency, and customer and other stakeholders.not exceeding a certain value “p” with a certain confi- These high-level requirements are decomposed intodence level. Meeting safety requirements involves iden- functional and performance requirements and allocatedtification and elimination of hazards, reducing the likeli- across the system. These are then further decomposedhood of the accidents associated with hazards, or reducing and allocated among the elements and subsystems. This NASA Systems Engineering Handbook  45
This decomposition and allocation process continues until a complete set of design-to requirements is achieved. At each level of decomposition (system, subsystem, component, etc.), the total set of derived requirements must be validated against the stakeholder expectations or higher level parent requirements before proceeding to the next level of decomposition.

The traceability of requirements to the lowest level ensures that each requirement is necessary to meet the stakeholder expectations. Requirements that are not allocated to lower levels or are not implemented at a lower level result in a design that does not meet objectives and is, therefore, not valid. Conversely, lower level requirements that are not traceable to higher level requirements result in an overdesign that is not justified. This hierarchical flowdown is illustrated in Figure 4.2-3.

[Figure 4.2-3 The flowdown of requirements: mission authority, mission objectives, and programmatics (cost, schedule, constraints, mission classification) flow through the mission requirements, the customer, and the implementing organizations into system functional and performance requirements, which are combined with environmental and other design requirements and guidelines, institutional constraints, and assumptions, and then allocated and derived downward to subsystem functional and performance requirements.]

Figure 4.2-4 is an example of how science pointing requirements are successively decomposed and allocated from the top down for a typical science mission. It is important to understand and document the relationship between requirements. This will reduce the possibility of misinterpretation and the possibility of an unsatisfactory design and associated cost increases.

[Figure 4.2-4 Allocation and flowdown of science pointing requirements: the science pointing requirements are decomposed into spacecraft and ground requirements, then into attitude determination and science-axis knowledge requirements, and finally into contributing error terms such as star tracker error, attitude estimation error, attitude control error, gyro bias and rate drift, gyro-to-star-tracker calibration, star catalog location error, velocity aberration, instrument boresight and calibration errors, and thermal deformation of the optical bench, instrument, and main structure.]

Throughout Phases A and B, changes in requirements and constraints will occur. It is imperative that all changes be thoroughly evaluated to determine the impacts on both higher and lower hierarchical levels. All changes must be subjected to a review and approval cycle as part of a formal change control process to maintain traceability and to ensure the impacts of any changes are fully assessed for all parts of the system. A more formal change control process is required if the mission is very large and involves more than one Center or crosses other jurisdictional or organizational boundaries.
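The allocation and traceability rules described in this subsection can be checked mechanically once the requirement hierarchy is captured in a database. The sketch below is illustrative only: the requirement IDs are invented, and the two checks simply flag lower level requirements that do not trace to an existing parent (candidate overdesign) and requirements that have not been allocated any further (possible gaps in flowdown).

```python
# Hypothetical requirement hierarchy: requirement ID -> parent ID (None for top level).
requirements = {
    "MRD-001": None,          # mission-level requirement
    "SYS-010": "MRD-001",     # system requirement allocated from MRD-001
    "SYS-020": "MRD-001",
    "SUB-110": "SYS-010",     # subsystem requirement derived from SYS-010
    "SUB-130": "SYS-999",     # broken trace: the named parent does not exist
}

allocated_parents = {parent for parent in requirements.values() if parent is not None}

# Lower level requirements whose parent is missing from the set (not traceable upward).
orphans = sorted(rid for rid, parent in requirements.items()
                 if parent is not None and parent not in requirements)

# Requirements that nothing traces from (leaves); high-level leaves may signal missing flowdown.
unallocated = sorted(rid for rid in requirements if rid not in allocated_parents)

print("Not traceable to a higher level requirement:", orphans)
print("Not allocated to any lower level requirement:", unallocated)
```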
4.2.2.4 Capturing Requirements and the Requirements Database
At the time the requirements are written, it is important to capture the requirements statements along with the metadata associated with each requirement. The metadata is the supporting information necessary to help clarify and link the requirements.

The method of verification must also be thought through and captured for each requirement at the time it is developed. The verification method includes test, inspection, analysis, and demonstration. Be sure to document any new or derived requirements that are uncovered during determination of the verification method. An example is requiring an additional test port to give visibility to an internal signal during integration and test. If a requirement cannot be verified, then either it should not be a requirement or the requirement statement needs to be rewritten. For example, the requirement to "minimize noise" is vague and cannot be verified. If the requirement is restated as "the noise level of the component X shall remain under Y decibels" then it is clearly verifiable. Examples of the types of metadata are provided in Table 4.2-2.

The requirements database is an extremely useful tool for capturing the requirements and the associated metadata and for showing the bidirectional traceability between requirements. The database evolves over time and could be used for tracking status information related to requirements such as To Be Determined (TBD)/To Be Resolved (TBR) status, resolution date, and verification status. Each project should decide what metadata will be captured. The database is usually in a central location that is made available to the entire project team. (See Appendix D for a sample requirements verification matrix.)
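A single entry in such a database can be as simple as a structured record carrying the requirement statement and its metadata. The sketch below is a minimal illustration, not a NASA schema; the field names follow the metadata discussed in this subsection, and the requirement text and rationale are invented (the statement echoes the noise-level example above).

```python
from dataclasses import dataclass
from typing import Optional

VERIFICATION_METHODS = {"test", "inspection", "analysis", "demonstration"}

@dataclass
class Requirement:
    req_id: str
    text: str
    rationale: str
    traced_from: Optional[str]      # parent requirement ID, if any
    owner: str
    verification_method: str        # one of VERIFICATION_METHODS
    verification_level: str         # e.g., system, subsystem, element
    status: str = "TBD"             # e.g., TBD/TBR, baselined, verified

req = Requirement(
    req_id="SUB-131",                                                    # invented ID
    text="The noise level of component X shall remain under Y decibels.",
    rationale="Acoustic limit assumed to flow down from a crew health parent requirement.",
    traced_from="SYS-020",                                               # invented parent
    owner="Acoustics lead",
    verification_method="test",
    verification_level="subsystem",
)

# A verifiable requirement names one of the accepted methods; "minimize noise" cannot.
assert req.verification_method in VERIFICATION_METHODS
print(req)
```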
Table 4.2-2 Requirements Metadata
- Requirement ID: Provides a unique numbering system for sorting and tracking.
- Rationale: Provides additional information to help clarify the intent of the requirements at the time they were written. (See the "Rationale" box below on what should be captured.)
- Traced from: Captures the bidirectional traceability between parent requirements and lower level (derived) requirements and the relationships between requirements.
- Owner: Person or group responsible for writing, managing, and/or approving changes to this requirement.
- Verification method: Captures the method of verification (test, inspection, analysis, demonstration) and should be determined as the requirements are developed.
- Verification lead: Person or group assigned responsibility for verifying the requirement.
- Verification level: Specifies the level in the hierarchy at which the requirements will be verified (e.g., system, subsystem, element).

Rationale
The rationale should be kept up to date and include the following information:
- Reason for the Requirement: Often the reason for the requirement is not obvious, and it may be lost if not recorded as the requirement is being documented. The reason may point to a constraint or concept of operations. If there is a clear parent requirement or trade study that explains the reason, then reference it.
- Document Assumptions: If a requirement was written assuming the completion of a technology development program or a successful technology mission, document the assumption.
- Document Relationships: The relationships with the product's expected operations (e.g., expectations about how stakeholders will use a product). This may be done with a link to the ConOps.
- Document Design Constraints: Imposed by the results from decisions made as the design evolves. If the requirement states a method of implementation, the rationale should state why the decision was made to limit the solution to this one method of implementation.

4.2.2.5 Technical Standards

Importance of Standards Application
Standards provide a proven basis for establishing common technical requirements across a program or project to avoid incompatibilities and ensure that at least minimum requirements are met. Common standards can also lower implementation cost as well as costs for inspection, common supplies, etc. Typically, standards (and specifications) are used throughout the product life cycle to establish design requirements and margins, materials and process specifications, test methods, and interface specifications. Standards are used as requirements (and guidelines) for design, fabrication, verification, validation, acceptance, operations, and maintenance.

Selection of Standards
NASA policy for technical standards is provided in NPD 8070.6, Technical Standards, which addresses selection, tailoring, application, and control of standards. In general, the order of authority among standards for NASA programs and projects is as follows:
- Standards mandated by law (e.g., environmental standards),
- National or international voluntary consensus standards recognized by industry,
- Other Government standards,
- NASA policy directives, and
- NASA technical standards.

NASA may also designate mandatory or "core" standards that must be applied to all programs where technically applicable. Waivers to designated core standards must be justified and approved at the Agency level unless otherwise delegated.
4.3 Logical Decomposition
Logical Decomposition is the process for creating the detailed functional requirements that enable NASA programs and projects to meet the stakeholder expectations. This process identifies the "what" that must be achieved by the system at each level to enable a successful project. Logical decomposition utilizes functional analysis to create a system architecture and to decompose top-level (or parent) requirements and allocate them down to the lowest desired levels of the project.

The Logical Decomposition Process is used to:
- Improve understanding of the defined technical requirements and the relationships among the requirements (e.g., functional, behavioral, and temporal), and
- Decompose the parent requirements into a set of logical decomposition models and their associated sets of derived technical requirements for input to the Design Solution Definition Process.

4.3.1 Process Description
Figure 4.3-1 provides a typical flow diagram for the Logical Decomposition Process and identifies typical inputs, outputs, and activities to consider in addressing logical decomposition.

[Figure 4.3-1 Logical Decomposition Process: baselined technical requirements (from the Technical Requirements Definition and Configuration Management Processes) and measures of performance flow into activities to define one or more logical decomposition models, allocate technical requirements to those models to form a set of derived technical requirements, resolve derived technical requirement conflicts, validate the resulting set of derived technical requirements, and establish the derived technical requirements baseline. Outputs include the derived technical requirements, the logical decomposition models, and logical decomposition work products passed to the Design Solution Definition, Requirements Management, Interface Management, Configuration Management, and Technical Data Management Processes.]

4.3.1.1 Inputs
Typical inputs needed for the Logical Decomposition Process would include the following:
- Technical Requirements: A validated set of requirements that represent a description of the problem to be solved, have been established by functional and performance analysis, and have been approved by the customer and other stakeholders. Examples of documentation that capture the requirements are an SRD, PRD, and IRD.
- Technical Measures: An established set of measures based on the expectations and requirements that will be tracked and assessed to determine overall system or product effectiveness and customer satisfaction. These measures are MOEs, MOPs, and a special subset of these called TPMs. See Subsection 6.7.2.2 for further details.

4.3.1.2 Process Activities
The key first step in the Logical Decomposition Process is establishing the system architecture model. The system architecture activity defines the underlying structure and relationships of hardware, software, communications, operations, etc., that provide for the implementation of Agency, mission directorate, program, project, and subsequent levels of the requirements. System architecture activities drive the partitioning of system elements and requirements to lower level functions and requirements to the point that design work can be accomplished. Interfaces and relationships between partitioned subsystems and elements are defined as well.

Once the top-level (or parent) functional requirements and constraints have been established, the system designer uses functional analysis to begin to formulate a conceptual system architecture.
The system architecture can be seen as the strategic organization of the functional elements of the system, laid out to enable the roles, relationships, dependencies, and interfaces between elements to be clearly defined and understood. It is strategic in its focus on the overarching structure of the system and how its elements fit together to contribute to the whole, instead of on the particular workings of the elements themselves. It enables the elements to be developed separately from each other while ensuring that they work together effectively to achieve the top-level (or parent) requirements.

Much like the other elements of functional decomposition, the development of a good system-level architecture is a creative, recursive, and iterative process that combines an excellent understanding of the project's end objectives and constraints with an equally good knowledge of various potential technical means of delivering the end products.

Focusing on the project's ends, top-level (or parent) requirements, and constraints, the system architect must develop at least one, but preferably multiple, concept architectures capable of achieving program objectives. Each architecture concept involves specification of the functional elements (what the pieces do), their relationships to each other (interface definition), and the ConOps, i.e., how the various segments, subsystems, elements, units, etc., will operate as a system when distributed by location and environment from the start of operations to the end of the mission.

The development process for the architectural concepts must be recursive and iterative, with feedback from stakeholders and external reviewers, as well as from subsystem designers and operators, provided as often as possible to increase the likelihood of achieving the program's ends, while reducing the likelihood of cost and schedule overruns.

In the early stages of the mission, multiple concepts are developed. Cost and schedule constraints will ultimately limit how long a program or project can maintain multiple architectural concepts. For all NASA programs, architecture design is completed during the Formulation phase. For most NASA projects (and tightly coupled programs), the selection of a single architecture will happen during Phase A, and the architecture and ConOps will be baselined during Phase B. Architectural changes at higher levels occasionally occur as decomposition to lower levels produces complications in design, cost, or schedule that necessitate such changes.

Aside from the creative minds of the architects, there are multiple tools that can be utilized to develop a system's architecture. These are primarily modeling and simulation tools, functional analysis tools, architecture frameworks, and trade studies. (For example, one way of doing architecture is the Department of Defense (DOD) Architecture Framework (DODAF). See box.) As each concept is developed, analytical models of the architecture, its elements, and their operations will be developed with increased fidelity as the project evolves. Functional decomposition, requirements development, and trade studies are subsequently undertaken. Multiple iterations of these activities feed back to the evolving architectural concept as the requirements flow down and the design matures.

Functional analysis is the primary method used in system architecture development and functional requirement decomposition. It is the systematic process of identifying, describing, and relating the functions a system must perform to fulfill its goals and objectives. Functional analysis identifies and links system functions, trade studies, interface characteristics, and rationales to requirements. It is usually based on the ConOps for the system of interest.

Three key steps in performing functional analysis are:
- Translate top-level requirements into functions that must be performed to accomplish the requirements.
- Decompose and allocate the functions to lower levels of the product breakdown structure.
- Identify and describe functional and subsystem interfaces.

The process involves analyzing each system requirement to identify all of the functions that must be performed to meet the requirement. Each function identified is described in terms of inputs, outputs, and interface requirements. The process is repeated from the top down so that subfunctions are recognized as part of larger functional areas. Functions are arranged in a logical sequence so that any specified operational usage of the system can be traced in an end-to-end path.
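The bookkeeping behind this kind of functional analysis can be kept in a simple data structure. The sketch below is illustrative only: it holds a small, invented function tree with the inputs and outputs of each function and walks it from the top down, which is the sort of record a functional analysis tool maintains at every level of decomposition.

```python
# Hypothetical function tree: each function lists its inputs, outputs, and subfunctions.
functions = {
    "Perform science mission": {
        "inputs": ["mission objectives"],
        "outputs": ["science data products"],
        "subfunctions": ["Collect observations", "Downlink data"],
    },
    "Collect observations": {
        "inputs": ["pointing commands"],
        "outputs": ["raw instrument data"],
        "subfunctions": [],
    },
    "Downlink data": {
        "inputs": ["raw instrument data"],
        "outputs": ["telemetry frames"],
        "subfunctions": [],
    },
}

def walk(name: str, level: int = 0) -> None:
    """Print the functional decomposition from the top down, one indent per tier."""
    info = functions[name]
    print("  " * level + f"{name}  (inputs: {info['inputs']}, outputs: {info['outputs']})")
    for sub in info["subfunctions"]:
        walk(sub, level + 1)

walk("Perform science mission")
```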
The process is recursive and iterative and continues until all desired levels of the architecture/system have been analyzed, defined, and baselined. There will almost certainly be alternative ways to decompose functions; therefore, the outcome is highly dependent on the creativity, skills, and experience of the engineers doing the analysis. As the analysis proceeds to lower levels of the architecture and system and the system is better understood, the systems engineer must keep an open mind and a willingness to go back and change previously established architecture and system requirements. These changes will then have to be decomposed down through the architecture and systems again, with the recursive process continuing until the system is fully defined, with all of the requirements understood and known to be viable, verifiable, and internally consistent. Only at that point should the system architecture and requirements be baselined.

DOD Architecture Framework
New ways, called architecture frameworks, have been developed in the last decade to describe and characterize evolving, complex system-of-systems. In such circumstances, architecture descriptions are very useful in ensuring that stakeholder needs are clearly understood and prioritized, that critical details such as interoperability are addressed upfront, and that major investment decisions are made strategically. In recognition of this, the U.S. Department of Defense has established policies that mandate the use of the DODAF in capital planning, acquisition, and joint capabilities integration.

An architecture can be understood as "the structure of components, their relationships, and the principles and guidelines governing their design and evolution over time."* To describe an architecture, the DODAF defines several views: operational, systems, and technical standards. In addition, a dictionary and summary information are also required. (See figure below.)

[Figure: the DODAF views. The Operational View identifies what needs to be accomplished and by whom; the Systems View relates systems and characteristics to operational needs; the Technical Standards View prescribes standards and conventions. The Operational View drives the specific system capabilities required to satisfy information exchanges, and the Technical Standards View supplies the technical standards criteria governing interoperable implementation/procurement of the selected system capabilities.]

Within each of these views, DODAF contains specific products. For example, within the Operational View is a description of the operational nodes, their connectivity, and information exchange requirements. Within the Systems View is a description of all the systems contained in the operational nodes and their interconnectivity. Not all DODAF products are relevant to NASA systems engineering, but its underlying concepts and formalisms may be useful in structuring complex problems for the Technical Requirements Definition and Decision Analysis Processes.

*Definition based on Institute of Electrical and Electronics Engineers (IEEE) STD 610.12. Source: DOD, DOD Architecture Framework.
4.3.1.3 Outputs
Typical outputs of the Logical Decomposition Process would include the following:
- System Architecture Model: Defines the underlying structure and relationship of the elements of the system (e.g., hardware, software, communications, operations, etc.) and the basis for the partitioning of requirements into lower levels to the point that design work can be accomplished.
- End Product Requirements: A defined set of make-to, buy-to, code-to, and other requirements from which design solutions can be accomplished.

4.3.2 Logical Decomposition Guidance

4.3.2.1 Product Breakdown Structure
The decompositions represented by the PBS and the Work Breakdown Structure (WBS) form important perspectives on the desired product system. The WBS is a hierarchical breakdown of the work necessary to complete the project. See Subsection 6.1.2.1 for further information on WBS development. The WBS contains the PBS, which is the hierarchical breakdown of the products such as hardware items, software items, and information items (documents, databases, etc.). The PBS is used during the Logical Decomposition and functional analysis processes. The PBS should be carried down to the lowest level for which there is a cognizant engineer or manager. Figure 4.3-2 is an example of a PBS.

[Figure 4.3-2 Example of a PBS: a Flight Segment is broken down into a Payload Element (telescope, detectors, electronics, spacecraft interface), a Spacecraft Bus (structure, command and data, guidance navigation and control, power, electrical, propulsion, thermal, mechanisms, communications, payload interface), and Launch Accommodations (attached fitting, electrical supply).]
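Because a PBS is simply a product hierarchy, it is often convenient to hold it in machine-readable form alongside the project's other technical data. The sketch below mirrors a few branches of the example in Figure 4.3-2 as nested data and prints the tree; it is a convenience illustration rather than any mandated representation.

```python
# A few branches of the example PBS in Figure 4.3-2, held as nested dictionaries.
pbs = {
    "Flight Segment": {
        "Payload Element": {"Telescope": {}, "Detectors": {}, "Electronics": {}},
        "Spacecraft Bus": {"Structure": {}, "Power": {}, "Propulsion": {}, "Thermal": {}},
        "Launch Accommodations": {"Attached Fitting": {}, "Electrical Supply": {}},
    }
}

def print_pbs(node: dict, indent: int = 0) -> None:
    """Recursively print each product, indented by its level in the hierarchy."""
    for name, children in node.items():
        print("  " * indent + name)
        print_pbs(children, indent + 1)

print_pbs(pbs)
```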
4.3.2.2 Functional Analysis Techniques
Although there are many techniques available to perform functional analysis, some of the more popular are (1) Functional Flow Block Diagrams (FFBDs) to depict task sequences and relationships, (2) N2 diagrams (or N x N interaction matrix) to identify interactions or interfaces between major factors from a systems perspective, and (3) Timeline Analyses (TLAs) to depict the time sequence of time-critical functions.

Functional Flow Block Diagrams
The primary functional analysis technique is the functional flow block diagram. The purpose of the FFBD is to indicate the sequential relationship of all functions that must be accomplished by a system. When completed, these diagrams show the entire network of actions that lead to the fulfillment of a function.

FFBDs specifically depict each functional event (represented by a block) occurring following the preceding function. Some functions may be performed in parallel, or alternative paths may be taken. The FFBD network shows the logical sequence of "what" must happen; it does not ascribe a time duration to functions or between functions. The duration of the function and the time between functions may vary from a fraction of a second to many weeks. To understand time-critical requirements, a TLA is used. (See the TLA discussion later in this subsection.)

The FFBDs are function oriented, not equipment oriented. In other words, they identify "what" must happen and must not assume a particular answer to "how" a function will be performed. The "how" is then defined for each block at a given level by defining the "what" functions at the next lower level necessary to accomplish that block. In this way, FFBDs are developed from the top down, in a series of levels, with tasks at each level identified through functional decomposition of a single task at a higher level. The FFBD displays all of the tasks at each level in their logical, sequential relationship, with their required inputs and anticipated outputs (including metrics, if applicable), plus a clear link back to the single, higher level task.

An example of an FFBD is shown in Figure 4.3-3. The FFBD depicts the entire flight mission of a spacecraft. Each block in the first level of the diagram is expanded to a series of functions, as shown in the second-level diagram for "Perform Mission Operations." Note that the diagram shows both input ("Transfer to OPS Orbit") and output ("Transfer to STS Orbit"), thus initiating the interface identification and control process. Each block in the second-level diagram can be progressively developed into a series of functions, as shown in the third-level diagram.

[Figure 4.3-3 Example of a functional flow block diagram: the top level shows the flight mission functions 1.0 Ascent Into Orbit Injection, 2.0 Check Out and Deploy, 3.0 Transfer to OPS Orbit, 4.0 Perform Mission Operations (with 5.0 Contingency Operations as an alternative path), 6.0 Transfer to STS Orbit, 7.0 Retrieve Spacecraft, and 8.0 Reenter and Land. The second level expands 4.0 Perform Mission Operations into functions such as providing electric power, attitude stabilization, thermal control, and orbit maintenance, receiving and processing commands, and acquiring and transmitting payload and subsystem data; the third level expands one of those blocks into individual steps such as computing pointing vectors, slewing the spacecraft, and processing the received signal.]
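Because an FFBD captures logical sequence rather than timing, it maps naturally onto a directed graph. The sketch below encodes the top-level blocks of Figure 4.3-3 as a successor list (with the OR/AND branch logic deliberately simplified away) and traces one end-to-end path through the mission; it illustrates the representation, not a modeling standard.

```python
# Top-level FFBD of Figure 4.3-3 as a successor list (branch logic simplified for illustration).
ffbd = {
    "1.0 Ascent Into Orbit Injection": ["2.0 Check Out and Deploy"],
    "2.0 Check Out and Deploy": ["3.0 Transfer to OPS Orbit"],
    "3.0 Transfer to OPS Orbit": ["4.0 Perform Mission Operations"],
    "4.0 Perform Mission Operations": ["6.0 Transfer to STS Orbit"],
    "5.0 Contingency Operations": ["6.0 Transfer to STS Orbit"],
    "6.0 Transfer to STS Orbit": ["7.0 Retrieve Spacecraft"],
    "7.0 Retrieve Spacecraft": ["8.0 Reenter and Land"],
    "8.0 Reenter and Land": [],
}

def end_to_end(start: str) -> list:
    """Follow the first successor of each block to trace one end-to-end functional path."""
    path = [start]
    while ffbd[path[-1]]:
        path.append(ffbd[path[-1]][0])
    return path

print(" -> ".join(end_to_end("1.0 Ascent Into Orbit Injection")))
```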
FFBDs are used to develop, analyze, and flow down requirements, as well as to identify profitable trade studies, by identifying alternative approaches to performing each function. In certain cases, alternative FFBDs may be used to represent various means of satisfying a particular function until trade study data are acquired to permit selection among the alternatives.

The flow diagram also provides an understanding of the total operation of the system, serves as a basis for development of operational and contingency procedures, and pinpoints areas where changes in operational procedures could simplify the overall system operation.

N2 Diagrams
The N-squared (N2 or N²) diagram is used to develop system interfaces. An example of an N2 diagram is shown in Figure 4.3-4. The system components or functions are placed on the diagonal; the remainder of the squares in the N x N matrix represent the interface inputs and outputs. Where a blank appears, there is no interface between the respective components or functions. The N2 diagram can be taken down into successively lower levels to the component functional levels. In addition to defining the interfaces, the N2 diagram also pinpoints areas where conflicts could arise in interfaces, and highlights input and output dependency assumptions and requirements.

[Figure 4.3-4 Example of an N2 diagram: subsystems A through H are placed on the diagonal; the off-diagonal squares record the electrical (E), mechanical (M), and supplied services (SS) interfaces between them, with the system input entering at Alpha and the system output leaving at Beta.]

Timeline Analysis
TLA adds consideration of functional durations and is performed on those areas where time is critical to mission success, safety, utilization of resources, minimization of downtime, and/or increasing availability. TLA can be applied to such diverse operational functions as spacecraft command sequencing and launch; but for those functional sequences where time is not a critical factor, FFBDs or N2 diagrams are sufficient. The following areas are often categorized as time-critical: (1) functions affecting system reaction time, (2) mission turnaround time, (3) time countdown activities, and (4) functions for which optimum equipment and/or personnel utilization are dependent on the timing of particular activities.

Timeline Sheets (TLSs) are used to perform and record the analysis of time-critical functions and functional sequences. For time-critical functional sequences, the time requirements are specified with associated tolerances. Additional tools such as mathematical models and computer simulations may be necessary to establish the duration of each timeline.

For additional information on FFBDs, N2 diagrams, timeline analysis, and other functional analysis methods, see Appendix F.
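An N2 diagram is naturally stored as an N x N matrix. The sketch below builds a small interface matrix for four placeholder subsystems, using the E (electrical), M (mechanical), and SS (supplied services) codes from the legend of Figure 4.3-4, and then lists every defined interface; empty cells mean no interface, just as blanks do in the diagram.

```python
subsystems = ["A", "B", "C", "D"]      # placeholders for real subsystem or function names
n = len(subsystems)

# n2[i][j] holds the interface that subsystem i provides to subsystem j ("" means none).
n2 = [["" for _ in range(n)] for _ in range(n)]
for i, name in enumerate(subsystems):
    n2[i][i] = name                    # components or functions sit on the diagonal

n2[0][1] = "E"      # hypothetical: A provides an electrical interface to B
n2[1][2] = "M"      # hypothetical: B provides a mechanical interface to C
n2[2][0] = "SS"     # hypothetical: C supplies services back to A

for i in range(n):
    for j in range(n):
        if i != j and n2[i][j]:
            print(f"{subsystems[i]} -> {subsystems[j]}: {n2[i][j]}")
```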
4.4 Design Solution Definition
The Design Solution Definition Process is used to translate the high-level requirements derived from the stakeholder expectations and the outputs of the Logical Decomposition Process into a design solution. This involves transforming the defined logical decomposition models and their associated sets of derived technical requirements into alternative solutions. These alternative solutions are then analyzed through detailed trade studies that result in the selection of a preferred alternative. This preferred alternative is then fully defined into a final design solution that will satisfy the technical requirements. This design solution definition will be used to generate the end product specifications that will be used to produce the product and to conduct product verification. This process may be further refined depending on whether there are additional subsystems of the end product that need to be defined.

4.4.1 Process Description
Figure 4.4-1 provides a typical flow diagram for the Design Solution Definition Process and identifies typical inputs, outputs, and activities to consider in addressing design solution definition.

[Figure 4.4-1 Design Solution Definition Process: baselined logical decomposition models and derived technical requirements flow into activities to define alternative design solutions, analyze each alternative, select the best alternative, generate a full design description of the selected solution, verify the fully defined design solution, and baseline the design solution specified requirements and design descriptions. Outputs include the system-specified and end-product-specified requirements, initial subsystem specifications, enabling product requirements, the product verification and product validation plans, and logistics and operate-to procedures; if lower level products or enabling products are needed, their development is initiated.]

4.4.1.1 Inputs
There are several fundamental inputs needed to initiate the Design Solution Definition Process:
- Technical Requirements: The customer and stakeholder needs that have been translated into a reasonably complete set of validated requirements for the system, including all interface requirements.
- Logical Decomposition Models: Requirements decomposed by one or more different methods (e.g., function, time, behavior, data flow, states, modes, system architecture, etc.).

4.4.1.2 Process Activities

Define Alternative Design Solutions
The realization of a system over its life cycle involves a succession of decisions among alternative courses of action. If the alternatives are precisely defined and thoroughly understood to be well differentiated in the cost-effectiveness space, then the systems engineer can make choices among them with confidence.

To obtain assessments that are crisp enough to facilitate good decisions, it is often necessary to delve more deeply into the space of possible designs than has yet been done, as is illustrated in Figure 4.4-2. It should be realized, however, that this illustration represents neither the project life cycle, which encompasses the system development process from inception through disposal, nor the product development process by which the system design is developed and implemented.

Each create concepts step in Figure 4.4-2 involves a recursive and iterative design loop driven by the set of stakeholder expectations where a strawman architecture/design, the associated ConOps, and the derived requirements are developed. These three products must be consistent with each other and will require iterations and design decisions to achieve this consistency. This recursive and iterative design loop is illustrated in Figure 4.0-1. Each create concepts step also involves an assessment of potential capabilities offered by the continually changing state of technology and potential pitfalls captured through experience-based review of prior program/project lessons learned data. It is imperative that there be a continual interaction between the technology development process and the design process to ensure that the design reflects the realities of the available technology and that overreliance on immature technology is avoided. Additionally, the state of any technology that is considered enabling must be properly monitored, and care must be taken when assessing the impact of this technology on the concept performance. This interaction is facilitated through a periodic assessment of the design with respect to the maturity of the technology required to implement the design. (See Subsection 4.4.2.1 for a more detailed discussion of technology assessment.) These technology elements usually exist at a lower level in the PBS. Although the process of design concept development by the integration of lower level elements is a part of the systems engineering process, there is always a danger that the top-down process cannot keep up with the bottom-up process. Therefore, system architecture issues need to be resolved early so that the system can be modeled with sufficient realism to do reliable trade studies.

As the system is realized, its particulars become clearer—but also harder to change. The purpose of systems engineering is to make sure that the Design Solution Definition Process happens in a way that leads to the most cost-effective final system. The basic idea is that before those decisions that are hard to undo are made, the alternatives should be carefully assessed, particularly with respect to the maturity of the required technology.

[Figure 4.4-2 The doctrine of successive refinement: a repeating spiral of recognizing the need or opportunity, identifying and quantifying goals, creating concepts, doing trade studies, and selecting a design, with the resolution increasing on each pass until the mission is performed.]

Create Alternative Design Concepts
Once it is understood what the system is to accomplish, it is possible to devise a variety of ways that those goals can be met. Sometimes, that comes about as a consequence of considering alternative functional allocations and integrating available subsystem design options, all of which can have technologies at varying degrees of maturity.
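One simple way to generate that variety is to treat each major functional allocation as a design parameter with a handful of candidate options and enumerate the combinations before screening them. The sketch below does exactly that for three invented parameters; the option names are placeholders, and a real study would immediately prune combinations that are infeasible or that rely on immature technology.

```python
from itertools import product

# Hypothetical design parameters and candidate options (names are placeholders).
design_options = {
    "propulsion": ["chemical", "electric"],
    "power": ["solar arrays", "radioisotope"],
    "downlink": ["X-band", "Ka-band"],
}

# Every combination of one option per parameter is a candidate design concept.
concepts = [dict(zip(design_options, combo))
            for combo in product(*design_options.values())]

print(f"{len(concepts)} candidate concepts before screening")
for concept in concepts[:3]:
    print(concept)
```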
Ideally, as wide a range of plausible alternatives as is consistent with the design organization's charter should be defined, keeping in mind the current stage in the process of successive refinement. When the bottom-up process is operating, a problem for the systems engineer is that the designers tend to become fond of the designs they create, so they lose their objectivity; the systems engineer often must stay an "outsider" so that there is more objectivity. This is particularly true in the assessment of the technological maturity of the subsystems and components required for implementation. There is a tendency on the part of technology developers and project management to overestimate the maturity and applicability of a technology that is required to implement a design. This is especially true of "heritage" equipment. The result is that critical aspects of systems engineering are often overlooked.

On the first turn of the successive refinement in Figure 4.4-2, the subject is often general approaches or strategies, sometimes architectural concepts. On the next, it is likely to be functional design, then detailed design, and so on. The reason for avoiding a premature focus on a single design is to permit discovery of the truly best design. Part of the systems engineer's job is to ensure that the design concepts to be compared take into account all interface requirements. "Did you include the cabling?" is a characteristic question. When possible, each design concept should be described in terms of controllable design parameters so that each represents as wide a class of designs as is reasonable. In doing so, the systems engineer should keep in mind that the potentials for change may include organizational structure, schedules, procedures, and any of the other things that make up a system. When possible, constraints should also be described by parameters.

Analyze Each Alternative Design Solution
The technical team analyzes how well each of the design alternatives meets the system goals (technology gaps, effectiveness, cost, schedule, and risk, both quantified and otherwise). This assessment is accomplished through the use of trade studies. The purpose of the trade study process is to ensure that the system architecture and design decisions move toward the best solution that can be achieved with the available resources. The basic steps in that process are:
- Devise some alternative means to meet the functional requirements. In the early phases of the project life cycle, this means focusing on system architectures; in later phases, emphasis is given to system designs.
- Evaluate these alternatives in terms of the MOEs and system cost. Mathematical models are useful in this step not only for forcing recognition of the relationships among the outcome variables, but also for helping to determine what the measures of performance must be quantitatively.
- Rank the alternatives according to appropriate selection criteria.
- Drop less promising alternatives and proceed to the next level of resolution, if needed.

The trade study process must be done openly and inclusively. While quantitative techniques and rules are used, subjectivity also plays a significant role. To make the process work effectively, participants must have open minds, and individuals with different skills—systems engineers, design engineers, specialty engineers, program analysts, decision scientists, and project managers—must cooperate. The right quantitative methods and selection criteria must be used. Trade study assumptions, models, and results must be documented as part of the project archives. The participants must remain focused on the functional requirements, including those for enabling products. For an in-depth discussion of the trade study process, see Section 6.8. The ability to perform these studies is enhanced by the development of system models that relate the design parameters to those assessments—but it does not depend upon them.

The technical team must consider a broad range of concepts when developing the system model. The model must define the roles of crew, hardware, and software in the system. It must identify the critical technologies required to implement the mission and must consider the entire life cycle, from fabrication to disposal. Evaluation criteria for selecting concepts must be established. Cost is always a limiting factor. However, other criteria, such as time to develop and certify a unit, risk, and reliability, also are critical. This stage cannot be accomplished without addressing the roles of operators and maintainers. These contribute significantly to life-cycle costs and to the system reliability. Reliability analysis should be performed based upon estimates of component failure rates for hardware. If probabilistic risk assessment models are applied, it may be necessary to include occurrence rates or probabilities for software faults or human error events. Assessments of the maturity of the required technology must be done and a technology development plan developed.
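The evaluate-and-rank steps above are often recorded in a weighted decision matrix. The sketch below scores three hypothetical concepts against invented criteria and weights; it illustrates only the arithmetic, since the Decision Analysis Process (Section 6.8) governs how real criteria, weights, and scores are established and documented.

```python
# Invented selection criteria and weights (weights sum to 1.0).
weights = {"effectiveness": 0.4, "life_cycle_cost": 0.3, "risk": 0.2, "schedule": 0.1}

# Invented normalized scores for each alternative (0 = worst, 1 = best).
scores = {
    "Concept A": {"effectiveness": 0.9, "life_cycle_cost": 0.4, "risk": 0.5, "schedule": 0.6},
    "Concept B": {"effectiveness": 0.7, "life_cycle_cost": 0.8, "risk": 0.8, "schedule": 0.7},
    "Concept C": {"effectiveness": 0.5, "life_cycle_cost": 0.9, "risk": 0.3, "schedule": 0.9},
}

def weighted_score(concept_scores: dict) -> float:
    """Weighted sum of a concept's scores across all selection criteria."""
    return sum(weights[criterion] * concept_scores[criterion] for criterion in weights)

for name in sorted(scores, key=lambda c: weighted_score(scores[c]), reverse=True):
    print(f"{name}: {weighted_score(scores[name]):.2f}")
```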
Controlled modification and development of design concepts, together with such system models, often permits the use of formal optimization techniques to find regions of the design space that warrant further investigation. Whether system models are used or not, the design concepts are developed, modified, reassessed, and compared against competing alternatives in a closed-loop process that seeks the best choices for further development. System and subsystem sizes are often determined during the trade studies. The end result is the determination of bounds on the relative cost-effectiveness of the design alternatives, measured in terms of the quantified system goals. (Only bounds, rather than final values, are possible because determination of the final details of the design is intentionally deferred.) Increasing detail associated with the continually improving resolution reduces the spread between upper and lower bounds as the process proceeds.

Select the Best Design Solution Alternative
The technical team selects the best design solution from among the alternative design concepts, taking into account subjective factors that the team was unable to quantify as well as estimates of how well the alternatives meet the quantitative requirements; the maturity of the available technology; and any effectiveness, cost, schedule, risk, or other constraints.

The Decision Analysis Process, as described in Section 6.8, should be used to make an evaluation of the alternative design concepts and to recommend the "best" design solution.

When it is possible, it is usually well worth the trouble to develop a mathematical expression, called an "objective function," that expresses the values of combinations of possible outcomes as a single measure of cost-effectiveness, as illustrated in Figure 4.4-3, even if both cost and effectiveness must be described by more than one measure.

[Figure 4.4-3 A quantitative objective function, dependent on life-cycle cost and all aspects of effectiveness: some aspect of effectiveness, expressed in quantitative units, is plotted against life-cycle cost, expressed in constant dollars. The different shaded areas indicate different levels of uncertainty; dashed lines represent constant values of the objective function (cost-effectiveness); higher values of cost-effectiveness are achieved by moving toward the upper left; and A, B, and C are design concepts with different risk patterns.]

The objective function (or "cost function") assigns a real number to candidate solutions or "feasible solutions" in the alternative space or "search space." A feasible solution that minimizes (or maximizes, if that is the goal) the objective function is called an "optimal solution." When achievement of the goals can be quantitatively expressed by such an objective function, designs can be compared in terms of their value. Risks associated with design concepts can cause these evaluations to be somewhat nebulous (because they are uncertain and are best described by probability distributions).

In Figure 4.4-3, the risks are relatively high for design concept A. There is little risk in either effectiveness or cost for concept B, while the risk of an expensive failure is high for concept C, as is shown by the cloud of probability near the x axis with a high cost and essentially no effectiveness. Schedule factors may affect the effectiveness and cost values and the risk distributions.

The mission success criteria for systems differ significantly. In some cases, effectiveness goals may be much more important than all others. Other projects may demand low costs, have an immutable schedule, or require minimization of some kinds of risks. Rarely (if ever) is it possible to produce a combined quantitative measure that relates all of the important factors, even if it is expressed as a vector with several components. Even when that can be done, it is essential that the underlying factors and relationships be thoroughly revealed to and understood by the systems engineer.
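The idea behind Figure 4.4-3 can be made concrete with a toy calculation. The sketch below assumes a simple cost-effectiveness ratio as the objective function and uses a small Monte Carlo draw to show how uncertainty in effectiveness and cost (the shaded clouds in the figure) turns a single score into a distribution; the distributions and parameters are invented purely for illustration.

```python
import random
import statistics

random.seed(1)

# Invented (mean, standard deviation) pairs for effectiveness and life-cycle cost.
concepts = {
    "A": {"effectiveness": (0.80, 0.15), "cost": (1.00, 0.20)},   # higher risk
    "B": {"effectiveness": (0.70, 0.05), "cost": (0.90, 0.05)},   # lower risk
}

def objective(effectiveness: float, cost: float) -> float:
    """Toy objective function: effectiveness per unit cost (higher is better)."""
    return effectiveness / cost

for name, params in concepts.items():
    samples = [
        objective(random.gauss(*params["effectiveness"]),
                  max(random.gauss(*params["cost"]), 0.1))   # guard against non-positive cost
        for _ in range(10_000)
    ]
    print(f"Concept {name}: mean {statistics.mean(samples):.2f}, "
          f"stdev {statistics.stdev(samples):.2f}")
```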
The systems engineer must weigh the importance of the unquantifiable factors along with the quantitative data.

Technical reviews of the data and analyses, including technology maturity assessments, are an important part of the decision support packages prepared for the technical team. The decisions that are made are generally entered into the configuration management system as changes to (or elaborations of) the system baseline. The supporting trade studies are archived for future use. An essential feature of the systems engineering process is that trade studies are performed before decisions are made. They can then be baselined with much more confidence.

Increase the Resolution of the Design
The successive refinement process of Figure 4.4-2 illustrates a continuing refinement of the system design. At each level of decomposition, the baselined derived (and allocated) requirements become the set of high-level requirements for the decomposed elements, and the process begins again. One might ask, "When do we stop refining the design?" The answer is that the design effort proceeds to a depth that is sufficient to meet several needs: the design must penetrate sufficiently to allow analytical validation of the design to the requirements; it must also have sufficient depth to support cost modeling and to convince a review team of a feasible design with performance, cost, and risk margins.

The systems engineering engine is applied again and again as the system is developed. As the system is realized, the issues addressed evolve and the particulars of the activity change. Most of the major system decisions (goals, architecture, acceptable life-cycle cost, etc.) are made during the early phases of the project, so the successive refinements do not correspond precisely to the phases of the system life cycle. Much of the system architecture can be seen even at the outset, so the successive refinements do not correspond exactly to development of the architectural hierarchy, either. Rather, they correspond to the successively greater resolution by which the system is defined.

It is reasonable to expect the system to be defined with better resolution as time passes. This tendency is formalized at some point (in Phase B) by defining a baseline system definition. Usually, the goals, objectives, and constraints are baselined as the requirements portion of the baseline. The entire baseline is then subjected to configuration control in an attempt to ensure that any subsequent changes are indeed justified and affordable.

At this point in the systems engineering process, there is a logical branch point. For those issues for which the process of successive refinement has proceeded far enough, the next step is to implement the decisions at that level of resolution. For those issues that are still insufficiently resolved, the next step is to refine the development further.

Fully Describe the Design Solution
Once the preferred design alternative has been selected and the proper level of refinement has been completed, then the design is fully defined into a final design solution that will satisfy the technical requirements. The design solution definition will be used to generate the end product specifications that will be used to produce the product and to conduct product verification. This process may be further refined depending on whether there are additional subsystems of the end product that need to be defined.

The scope and content of the full design description must be appropriate for the product life-cycle phase, the phase success criteria, and the product position in the PBS (system structure). Depending on these factors, the form of the design solution definition could be simply a simulation model or a paper study report. The technical data package evolves from phase to phase, starting with conceptual sketches or models and ending with complete drawings, parts lists, and other details needed for product implementation or product integration. Typical output definitions from the Design Solution Definition Process are shown in Figure 4.4-1 and are described in Subsection 4.4.1.3.

Verify the Design Solution
Once an acceptable design solution has been selected from among the various alternative designs and documented in a technical data package, the design solution must next be verified against the system requirements and constraints. A method to achieve this verification is by means of a peer review to evaluate the resulting design solution definition. Guidelines for conducting a peer review are discussed in Section 6.7.

In addition, peer reviews play a significant role as a detailed technical component of higher level technical and programmatic reviews.
For example, the peer review of a component battery design can go into much more technical detail on the battery than the integrated power subsystem review. Peer reviews can cover the components of a subsystem down to the level appropriate for verifying the design against the requirements. Concerns raised at the peer review might have implications on the power subsystem design and verification and therefore must be reported at the next higher level review of the power subsystem.

The verification must show that the design solution definition:
- Is realizable within the constraints imposed on the technical effort;
- Has specified requirements that are stated in acceptable statements and have bidirectional traceability with the derived technical requirements, technical requirements, and stakeholder expectations; and
- Has decisions and assumptions made in forming the solution consistent with its set of derived technical requirements, separately allocated technical requirements, and identified system product and service constraints.

This design solution verification is in contrast to the verification of the end product described in the end product verification plan, which is part of the technical data package. That verification occurs in a later life-cycle phase and is a result of the Product Verification Process (see Section 5.3) applied to the realization of the design solution as an end product.

Validate the Design Solution
The validation of the design solution is a recursive and iterative process as shown in Figure 4.0-1. Each alternative design concept is validated against the set of stakeholder expectations. The stakeholder expectations drive the iterative design loop in which a strawman architecture/design, the ConOps, and the derived requirements are developed. These three products must be consistent with each other and will require iterations and design decisions to achieve this consistency. Once consistency is achieved, functional analyses allow the study team to validate the design against the stakeholder expectations. A simplified validation asks the questions: Does the system work? Is the system safe and reliable? Is the system affordable? If the answer to any of these questions is no, then changes to the design or stakeholder expectations will be required, and the process is started over again. This process continues until the system—architecture, ConOps, and requirements—meets the stakeholder expectations.

This design solution validation is in contrast to the validation of the end product described in the end product validation plan, which is part of the technical data package. That validation occurs in a later life-cycle phase and is a result of the Product Validation Process (see Section 5.4) applied to the realization of the design solution as an end product.

Identify Enabling Products
Enabling products are the life-cycle support products and services (e.g., production, test, deployment, training, maintenance, and disposal) that facilitate the progression and use of the operational end product through its life cycle. Since the end product and its enabling products are interdependent, they are viewed as a system. Project responsibility thus extends to responsibility for acquiring services from the relevant enabling products in each life-cycle phase. When a suitable enabling product does not already exist, the project that is responsible for the end product also can be responsible for creating and using the enabling product.

Therefore, an important activity in the Design Solution Definition Process is the identification of the enabling products that will be required during the life cycle of the selected design solution and then initiating the acquisition or development of those enabling products. Need dates for the enabling products must be realistically identified on the project schedules, incorporating appropriate schedule slack. Then firm commitments in the form of contracts, agreements, and/or operational plans must be put in place to ensure that the enabling products will be available when needed to support the product-line life-cycle phase activities. The enabling product requirements are documented as part of the technical data package for the Design Solution Definition Process.

An environmental test chamber would be an example of an enabling product whose use would be acquired at an appropriate time during the test phase of a space flight system.

Special test fixtures or special mechanical handling devices would be examples of enabling products that would have to be created by the project.
Because of long development times as well as oversubscribed facilities, it is important to identify enabling products and secure the commitments for them as early in the design phase as possible.

Baseline the Design Solution
As shown earlier in Figure 4.0-1, once the selected system design solution meets the stakeholder expectations, the study team baselines the products and prepares for the next life-cycle phase. Because of the recursive nature of successive refinement, intermediate levels of decomposition are often validated and baselined as part of the process. In the next level of decomposition, the baselined requirements become the set of high-level requirements for the decomposed elements, and the process begins again.

Baselining a particular design solution enables the technical team to focus on one design out of all the alternative design concepts. This is a critical point in the design process. It puts a stake in the ground and gets everyone on the design team focused on the same concept. When dealing with complex systems, it is difficult for team members to design their portion of the system if the system design is a moving target. The baselined design is documented and placed under configuration control. This includes the system requirements, specifications, and configuration descriptions.

While baselining a design is beneficial to the design process, there is a danger if it is exercised too early in the Design Solution Definition Process. The early exploration of alternative designs should be free and open to a wide range of ideas, concepts, and implementations. Baselining too early takes the inventive nature out of the concept exploration. Therefore baselining should be one of the last steps in the Design Solution Definition Process.

4.4.1.3 Outputs
Outputs of the Design Solution Definition Process are the specifications and plans that are passed on to the product realization processes. They contain the design-to, build-to, and code-to documentation that complies with the approved baseline for the system.

As mentioned earlier, the scope and content of the full design description must be appropriate for the product-line life-cycle phase, the phase success criteria, and the product position in the PBS.

Outputs of the Design Solution Definition Process include the following:
- The System Specification: The system specification contains the functional baseline for the system that is the result of the Design Solution Definition Process. The system design specification provides sufficient guidance, constraints, and system requirements for the design engineers to execute the design.
- The System External Interface Specifications: The system external interface specifications describe the functional baseline for the behavior and characteristics of all physical interfaces that the system has with the external world. These include all structural, thermal, electrical, and signal interfaces, as well as the human-system interfaces.
- The End-Product Specifications: The end-product specifications contain the detailed build-to and code-to requirements for the end product. They are detailed, exact statements of design particulars, such as statements prescribing materials, dimensions, and quality of work to build, install, or manufacture the end product.
- The End-Product Interface Specifications: The end-product interface specifications contain the detailed build-to and code-to requirements for the behavior and characteristics of all logical and physical interfaces that the end product has with external elements, including the human-system interfaces.
- Initial Subsystem Specifications: The end-product subsystem initial specifications provide detailed information on subsystems if they are required.
- Enabling Product Requirements: The requirements for associated supporting enabling products provide details of all enabling products. Enabling products are the life-cycle support products and services that facilitate the progression and use of the operational end product through its life cycle. They are viewed as part of the system since the end product and its enabling products are interdependent.
- Product Verification Plan: The end-product verification plan provides the content and depth of detail necessary to provide full visibility of all verification activities for the end product. Depending on the scope of the end product, the plan encompasses qualification, acceptance, prelaunch, operational, and disposal verification activities for flight hardware and software.
• Product Validation Plan: The end-product validation plan provides the content and depth of detail necessary to provide full visibility of all activities to validate the realized product against the baselined stakeholder expectations. The plan identifies the type of validation, the validation procedures, and the validation environment that are appropriate to confirm that the realized end product conforms to stakeholder expectations.
• Logistics and Operate-to Procedures: The applicable logistics and operate-to procedures for the system describe such things as handling, transportation, maintenance, long-term storage, and operational considerations for the particular design solution.

4.4.2 Design Solution Definition Guidance

4.4.2.1 Technology Assessment
As mentioned in the process description (Subsection 4.4.1), the creation of alternative design solutions involves assessment of potential capabilities offered by the continually changing state of technology. A continual interaction between the technology development process and the design process ensures that the design reflects the realities of the available technology. This interaction is facilitated through periodic assessment of the design with respect to the maturity of the technology required to implement the design.

After identifying the technology gaps existing in a given design concept, it will frequently be necessary to undertake technology development in order to ascertain viability. Given that resources will always be limited, it will be necessary to pursue only the most promising technologies that are required to enable a given concept.

If requirements are defined without fully understanding the resources required to accomplish needed technology developments, then the program/project is at risk. Technology assessment must be done iteratively until requirements and available resources are aligned within an acceptable risk posture. Technology development plays a far greater role in the life cycle of a program/project than has been traditionally considered, and it is the role of the systems engineer to develop an understanding of the extent of program/project impacts—maximizing benefits and minimizing adverse effects. Traditionally, from a program/project perspective, technology development has been associated with the development and incorporation of any "new" technology necessary to meet requirements. However, a frequently overlooked area is that associated with the modification of "heritage" systems incorporated into different architectures and operating in different environments from the ones for which they were designed. If the required modifications and/or operating environments fall outside the realm of experience, then these too should be considered technology development.

To understand whether or not technology development is required—and to subsequently quantify the associated cost, schedule, and risk—it is necessary to systematically assess the maturity of each system, subsystem, or component in terms of the architecture and operational environment. It is then necessary to assess what is required in the way of development to advance the maturity to a point where it can successfully be incorporated within cost, schedule, and performance constraints. A process for accomplishing this assessment is described in Appendix G. Because technology development has the potential for such significant impacts on a program/project, technology assessment needs to play a role throughout the design and development process from concept development through Preliminary Design Review (PDR). Lessons learned from a technology development point of view should then be captured in the final phase of the program.

4.4.2.2 Integrating Engineering Specialties into the Systems Engineering Process
As part of the technical effort, specialty engineers in cooperation with systems engineering and subsystem designers often perform tasks that are common across disciplines. Foremost, they apply specialized analytical techniques to create information needed by the project manager and systems engineer. They also help define and write system requirements in their areas of expertise, and they review data packages, Engineering Change Requests (ECRs), test results, and documentation for major project reviews. The project manager and/or systems engineer needs to ensure that the information and products so generated add value to the project commensurate with their cost. The specialty engineering technical effort should be well integrated into the project. The roles and responsibilities of the specialty engineering disciplines should be summarized in the SEMP.

The specialty engineering disciplines included in this handbook are safety and reliability, Quality Assurance
(QA), ILS, maintainability, producibility, and human factors. An overview of these specialty engineering disciplines is provided to give systems engineers a brief introduction. It is not intended to be a handbook for any of these discipline specialties.

Safety and Reliability

Overview and Purpose
A reliable system ensures mission success by functioning properly over its intended life. It has a low and acceptable probability of failure, achieved through simplicity, proper design, and proper application of reliable parts and materials. In addition to long life, a reliable system is robust and fault tolerant, meaning it can tolerate failures and variations in its operating parameters and environments.

Safety and Reliability in the System Design Process
A focus on safety and reliability throughout the mission life cycle is essential for ensuring mission success. The fidelity to which safety and reliability are designed and built into the system depends on the information needed and the type of mission. For human-rated systems, safety and reliability is the primary objective throughout the design process. For science missions, safety and reliability should be commensurate with the funding and level of risk a program or project is willing to accept. Regardless of the type of mission, safety and reliability considerations must be an integral part of the system design processes.

To realize the maximum benefit from reliability analysis, it is essential to integrate the risk and reliability analysts within the design teams. The importance of this cannot be overstated. In many cases, the reliability and risk analysts perform the analysis on the design after it has been formulated. In this case, safety and reliability features are added on or outsourced rather than designed in. This results in unrealistic analysis that is not focused on risk drivers and does not provide value to the design.

Risk and reliability analyses evolve to answer key questions about design trades as the design matures. Reliability analyses utilize information about the system, identify sources of risk and risk drivers, and provide an important input for decisionmaking. NASA-STD-8729.1, Planning, Developing, and Maintaining an Effective Reliability and Maintainability (R&M) Program, outlines engineering activities that should be tailored for each specific project. The concept is to choose an effective set of reliability and maintainability engineering activities to ensure that the systems designed, built, and deployed will operate successfully for the required mission life cycle.

In the early phases of a project, risk and reliability analyses help designers understand the interrelationships of requirements, constraints, and resources, and uncover key relationships and drivers so they can be properly considered. The analyst must help designers go beyond the requirements to understand implicit dependencies that emerge as the design concept matures. It is unrealistic to assume that design requirements will correctly capture all risk and reliability issues and "force" a reliable design. The systems engineer should develop a system strategy mapped to the PBS on how to allocate and coordinate reliability, fault tolerance, and recovery between systems both horizontally and vertically within the architecture to meet the total mission requirements. System impacts of designs must play a key role in the design. Making designers aware of the impacts of their decisions on overall mission reliability is key.

As the design matures, preliminary reliability analysis occurs using established techniques. The design and concept of operations should be thoroughly examined for accident initiators and hazards that could lead to mishaps. Conservative estimates of likelihood and consequences of the hazards can be used as a basis for applying design resources to reduce the risk of failures. The team should also ensure that the goals can be met, that failure modes are considered, and that the entire system is taken into account.

During the latter phases of a project, the team uses risk assessments and reliability techniques to verify that the design is meeting its risk and reliability goals and to help develop mitigation strategies when the goals are not met or discrepancies/failures occur.

Analysis Techniques and Methods
This subsection provides a brief summary of the types of analysis techniques and methods.
• Event sequence diagrams/event trees are models that describe the sequence of events and responses to off-nominal conditions that can occur during a mission.
• Failure Modes and Effects Analyses (FMEAs) are bottom-up analyses that identify the types of failures
that can occur within a system and identify the causes, effects, and mitigating strategies that can be employed to control the effects of the failures.
• Qualitative top-down logic models identify how failures within a system can combine to cause an undesired event.
• Quantitative logic models (probabilistic risk assessment) extend the qualitative models to include the likelihood of failure. These models involve developing failure criteria based on system physics and system success criteria, and employing statistical techniques to estimate the likelihood of failure along with uncertainty.
• Reliability block diagrams are diagrams of the elements used to evaluate the reliability of a system to provide a function.
• Preliminary Hazard Analysis (PHA) is performed early based on the functions performed during the mission. Preliminary hazard analysis is a "what if" process that considers the potential hazard, initiating event scenarios, effects, and potential corrective measures and controls. The objective is to determine if the hazard can be eliminated, and if not, how it can be controlled.
• Hazard analysis evaluates the completed design. Hazard analysis is a "what if" process that considers the potential hazard, initiating event, effects, and potential corrective measures and controls. The objective is to determine if the hazard can be eliminated, and if not, how it can be controlled.
• Human reliability analysis is a method to understand how human failures can lead to system failure and estimate the likelihood of those failures.
• Probabilistic structural analysis provides a way to combine uncertainties in materials and loads to evaluate the failure of a structural element.
• Sparing/logistics models provide a means to estimate the interactions of systems in time. These models include ground-processing simulations and mission campaign simulations.

Limitations on Reliability Analysis
The engineering design team must understand that reliability is expressed as the probability of mission success. Probability is a mathematical measure expressing the likelihood of occurrence of a specific event. Therefore, probability estimates should be based on engineering judgment and historical data, and any stated probabilities should include some measure of the uncertainty surrounding that estimate.

Uncertainty expresses the degree of belief analysts have in their estimates. Uncertainty decreases as the quality of data and understanding of the system improve. The initial estimates of failure rates or failure probability might be based on comparison to similar equipment, historical data (heritage), failure rate data from handbooks, or expert elicitation.

In summary,
• Reliability estimates express probability of success.
• Uncertainty should be included with reliability estimates.
• Reliability estimates combined with FMEAs provide additional and valuable information to aid in the decisionmaking process.
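To make these points concrete, the short sketch below estimates the reliability of a simple series system by Monte Carlo sampling, so that the result is reported as a probability of success together with an uncertainty interval rather than a single number. It is illustrative only; the component names, failure probabilities, and uncertainty factor are hypothetical and are not drawn from any NASA data source.

    import random

    # Hypothetical mean failure probabilities per mission for three components
    # in series (the system succeeds only if every component works).
    components = {
        "valve": 0.002,
        "controller": 0.005,
        "sensor": 0.001,
    }

    def sample_system_reliability(rng, uncertainty=0.5):
        """Draw one system reliability estimate.

        Each component's failure probability is perturbed by a lognormal factor
        to represent epistemic uncertainty in the estimate itself.
        """
        r_system = 1.0
        for p_mean in components.values():
            factor = rng.lognormvariate(0.0, uncertainty)
            p = min(p_mean * factor, 1.0)
            r_system *= (1.0 - p)  # series system: multiply component reliabilities
        return r_system

    rng = random.Random(1)
    samples = sorted(sample_system_reliability(rng) for _ in range(20000))

    mean = sum(samples) / len(samples)
    lo = samples[int(0.05 * len(samples))]
    hi = samples[int(0.95 * len(samples))]
    print(f"Mean system reliability: {mean:.4f}")
    print(f"90% uncertainty interval: [{lo:.4f}, {hi:.4f}]")

Reporting the interval alongside the mean is one practical way of including uncertainty with a reliability estimate, and the same component-level inputs can feed an FMEA to show which failure modes drive the result.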
Quality Assurance
Even with the best designs, hardware fabrication and testing are subject to human error. The systems engineer needs to have some confidence that the system actually produced and delivered is in accordance with its functional, performance, and design requirements. QA provides an independent assessment to the project manager/systems engineer of the items produced and processes used during the project life cycle. The project manager/systems engineer must work with the quality assurance engineer to develop a quality assurance program (the extent, responsibility, and timing of QA activities) tailored to the project it supports.

QA is the mainstay of quality as practiced at NASA. NPD 8730.5, NASA Quality Assurance Program Policy, states that NASA's policy is "to comply with prescribed requirements for performance of work and to provide for independent assurance of compliance through implementation of a quality assurance program." The quality function of Safety and Mission Assurance (SMA) ensures that both contractors and other NASA functions do what they say they will do and say what they intend to do. This ensures that end product and program quality, reliability, and overall risk are at the level planned.

The Systems Engineer's Relationship to QA
As with reliability, producibility, and other characteristics, quality must be designed as an integral part of any system. It is important that the systems engineer understands SMA's safeguarding role in the broad context of total risk and supports the quality role explicitly and vigorously. All of this is easier if the SMA quality function is actively included and if quality is designed in with buy-in by all roles, starting at concept development. This will help mitigate conflicts between design and quality requirements, which can take on the effect of "tolerance stacking."

Quality is a vital part of risk management. Errors, variability, omissions, and other problems cost time, program resources, taxpayer dollars, and even lives. It is incumbent on the systems engineer to know how quality affects their projects and to encourage best practices to achieve the desired quality level.

Rigid adherence to procedural requirements is necessary in high-risk, low-volume manufacturing. In the absence of large samples and long production runs, compliance with these written procedures is a strong step toward ensuring process, and thereby product, consistency. To address this, NASA requires QA programs to be designed to mitigate risks associated with noncompliance to those requirements.

There will be a large number of requirements and procedures thus created. These must be flowed down to the supply chain, even to the lowest tier suppliers. For circumstances where noncompliance can result in loss of life or loss of mission, there is a requirement to insert into procedures Government Mandatory Inspection Points (GMIPs) to ensure 100 percent compliance with safety/mission-critical attributes. Safety/mission-critical attributes include hardware characteristics, manufacturing process requirements, operating conditions, and functional performance criteria that, if not met, can result in loss of life or loss of mission. There will be in place a Program/Project Quality Assurance Surveillance Plan (PQASP) as mandated by Federal Acquisition Regulation (FAR) Subpart 46.4. Preparation and content for PQASPs are outlined in NPR 8735.2, Management of Government Quality Assurance Functions for NASA Contracts. This document covers quality assurance requirements for both low-risk and high-risk acquisitions and includes functions such as document review, product examination, process witnessing, quality system evaluation, nonconformance reporting and corrective action, planning for quality assurance and surveillance, and GMIPs. In addition, most NASA projects are required to adhere to either ISO 9001 (noncritical work) or AS9100 (critical work) requirements for management of quality systems. Training in these systems is mandatory for most NASA functions, so knowledge of their applicability by the systems engineer is assumed. Their texts and intent are strongly reflected in NASA's quality procedural documents.

Integrated Logistics Support
The objective of ILS activities within the systems engineering process is to ensure that the product system is supported during development (Phase D) and operations (Phase E) in a cost-effective manner. ILS is particularly important to projects that are reusable or serviceable. Projects whose primary product does not evolve over its operations phase typically apply ILS only to parts of the project (for example, the ground system) or to some of the elements (for example, transportation). ILS is primarily accomplished by early, concurrent consideration of supportability characteristics; performing trade studies on alternative system and ILS concepts; quantifying resource requirements for each ILS element using best practices; and acquiring the support items associated with each ILS element. During operations, ILS activities support the system while seeking improvements in cost-effectiveness by conducting analyses in response to actual operational conditions. These analyses continually reshape the ILS system and its resource requirements. Neglecting ILS or making poor ILS decisions invariably has adverse effects on the life-cycle cost of the resultant system. Table 4.4-1 summarizes the ILS disciplines.

ILS planning should begin early in the project life cycle and should be documented. This plan should address the elements above, including how they will be considered, conducted, and integrated into the systems engineering process.

Maintainability
Maintainability is defined as the measure of the ability of an item to be retained in or restored to specified conditions when maintenance is performed by personnel having specified skill levels, using prescribed procedures and resources, at each prescribed level of maintenance. It is the inherent characteristics of a design or installation that contribute to the ease, economy, safety, and accuracy with which maintenance actions can be performed.
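As a simple illustration of how maintainability characteristics can be quantified, the sketch below combines mean time between failures (MTBF) and mean time to repair (MTTR) into the standard inherent-availability figure of merit. The numeric values are hypothetical, and this relationship is offered as a common textbook measure rather than a requirement of this handbook.

    def inherent_availability(mtbf_hours: float, mttr_hours: float) -> float:
        """Inherent availability: uptime fraction considering only corrective
        maintenance (no logistics or administrative delays)."""
        return mtbf_hours / (mtbf_hours + mttr_hours)

    # Hypothetical values for a line-replaceable unit
    mtbf = 5000.0   # hours between failures
    mttr = 8.0      # hours to restore the unit after a failure
    print(f"Inherent availability:  {inherent_availability(mtbf, mttr):.4%}")
    # A design change that halves repair time improves availability
    # without changing the unit's reliability.
    print(f"With MTTR halved:       {inherent_availability(mtbf, mttr / 2):.4%}")

Comparisons of this kind support trade studies between improving reliability (raising MTBF) and improving access, diagnostics, and repair features (lowering MTTR).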
Table 4.4-1 ILS Technical Disciplines
• Maintenance support planning: Ongoing and iterative planning, organization, and management activities necessary to ensure that the logistics requirements for any given program are properly coordinated and implemented.
• Design interface: The interaction and relationship of logistics with the systems engineering process to ensure that supportability influences the definition and design of the system so as to reduce life-cycle cost.
• Technical data and technical publications: The recorded scientific, engineering, technical, and cost information used to define, produce, test, evaluate, modify, deliver, support, and operate the system.
• Training and training support: Encompasses all personnel, equipment, facilities, data/documentation, and associated resources necessary for the training of operational and maintenance personnel.
• Supply support: Actions required to provide all the necessary material to ensure the system's supportability and usability objectives are met.
• Test and support equipment: All tools, condition-monitoring equipment, diagnostic and checkout equipment, special test equipment, metrology and calibration equipment, maintenance fixtures and stands, and special handling equipment required to support operational maintenance functions.
• Packaging, handling, storage, and transportation: All materials, equipment, special provisions, containers (reusable and disposable), and supplies necessary to support the packaging, safety and preservation, storage, handling, and transportation of the prime mission-related elements of the system, including personnel, spare and repair parts, test and support equipment, technical data, computer resources, and mobile facilities.
• Personnel: Involves identification and acquisition of personnel with skills and grades required to operate and maintain a system over its lifetime.
• Logistics facilities: All special facilities that are unique and are required to support logistics activities, including storage buildings and warehouses and maintenance facilities at all levels.
• Computer resources support: All computers, associated software, connecting components, networks, and interfaces necessary to support the day-to-day flow of information for all logistics functions.
Source: Blanchard, System Engineering Management.

Role of the Maintainability Engineer
Maintainability engineering is another major specialty discipline that contributes to the goal of a supportable system. This is primarily accomplished in the systems engineering process through an active role in implementing specific design features to facilitate safe and effective maintenance actions in the predicted physical environments, and through a central role in developing the ILS system. Example tasks of the maintainability engineer include: developing and maintaining a system maintenance concept, establishing and allocating maintainability requirements, performing analysis to quantify the system's maintenance resource requirements, and verifying the system's maintainability requirements.

Producibility
Producibility is a system characteristic associated with the ease and economy with which a completed design can be transformed (i.e., fabricated, manufactured, or coded) into a hardware and/or software realization. While major NASA systems tend to be produced in small quantities, a particular producibility feature can be critical to a system's cost-effectiveness, as experience with the shuttle's thermal tiles has shown. Factors that influence the producibility of a design include the choice of materials, simplicity of design, flexibility in production alternatives, tight tolerance requirements, and clarity and simplicity of the technical data package.

Role of the Production Engineer
The production engineer supports the systems engineering process (as a part of the multidisciplinary product development team) by taking an active role in implementing specific design features to enhance producibility and by performing the production engineering analyses needed by the project. These tasks and analyses include:
• Performing the manufacturing/fabrication portion of the system risk management program. This is accomplished by conducting a rigorous production risk assessment and by planning effective risk mitigation actions.
• Identifying system design features that enhance producibility. Efforts usually focus on design simplification,
fabrication tolerances, and avoidance of hazardous materials.
• Conducting producibility trade studies to determine the most cost-effective fabrication/manufacturing process.
• Assessing production feasibility within project constraints. This may include assessing contractor and principal subcontractor production experience and capability, new fabrication technology, special tooling, and production personnel training requirements.
• Identifying long-lead items and critical materials.
• Estimating production costs as a part of life-cycle cost management.
• Supporting technology readiness assessments.
• Developing production schedules.
• Developing approaches and plans to validate fabrication/manufacturing processes.

The results of these tasks and production engineering analyses are documented in the manufacturing plan with a level of detail appropriate to the phase of the project. The production engineer also participates in and contributes to major project reviews (primarily PDR and Critical Design Review (CDR)) on the above items, and to special interim reviews such as the PRR.

Prototypes
Experience has shown that prototype systems can be effective in enabling efficient producibility even when building only a single flight system. Prototypes are built early in the life cycle, and they are made as close to the flight item in form, fit, and function as is feasible at that stage of the development. The prototype is used to "wring out" the design solution so that experience gained from the prototype can be fed back into design changes that will improve the manufacture, integration, and maintainability of a single flight item or the production run of several flight items. Unfortunately, prototypes are often deleted from projects to save cost. Along with that decision, the project accepts an increased risk in the development phase of the life cycle. Fortunately, advancements in computer-aided design and manufacturing have mitigated that risk somewhat by enabling the designer to visualize the design and "walk through" the integration sequence to uncover problems before they become a costly reality.

Human Factors Engineering

Overview and Purpose
Consideration of human operators and maintainers of systems is a critical part of the design process. Human factors engineering is the discipline that studies the human-system interfaces and provides requirements, standards, and guidelines to ensure the human component of the integrated system is able to function as intended. Human roles include operators (flight crews and ground crews), designers, manufacturers, ground support, maintainers, and passengers. Flight crew functions include system operation, troubleshooting, and in-flight maintenance. Ground crew functions include spacecraft and ground system manufacturing, assembly, test, checkout, logistics, ground maintenance, repair, refurbishment, launch control, and mission control.

Human factors are generally considered in four categories. The first is anthropometry and biomechanics—the physical size, shape, and strength of the humans. The second is sensation and perception—primarily vision and hearing, but senses such as touch are also important. The environment is a third factor—ambient noise and lighting, vibration, temperature and humidity, atmospheric composition, and contaminants. Psychological factors comprise memory; information processing components such as pattern recognition, decisionmaking, and signal detection; and affective factors—e.g., emotions, cultural patterns, and habits.

Human Factors Engineering in the System Design Process
• Stakeholder Expectations: The operators, maintainers, and passengers are all stakeholders in the system. The human factors specialist identifies roles and responsibilities that can be performed by humans and scenarios that exceed human capabilities. The human factors specialist ensures that system operational concept development includes task analysis and human/system function allocation. As these are refined, function allocation distributes operator roles and responsibilities for subtasks to the crew, external support teams, and automation. (For example, in aviation, tasks may be allocated to crew, air traffic controllers, or autopilots. In spacecraft, tasks may be performed by crew, mission control, or onboard systems.)
• Requirements Definition: Human factors requirements for spacecraft and space habitats are program/project dependent, derived from NASA-STD-3001, NASA Space Flight Human System Standard Volume 1: Crew Health. Other human factors requirements of other missions and Earth-based activities for human space flight missions are derived from human factors standards such as MIL-STD-1472, Human Engineering; NUREG-0700, Human-System Interface Design Review Guidelines; and the Federal Aviation Administration's Human Factors Design Standard.
• Technical Solution: Consider the human as a central component when doing logical decomposition and developing design concepts. The users—operators or maintainers—will not see the entire system as the designer does, only as the system interfaces with them. In engineering design reviews, human factors specialists promote the usability of the design solution. With early involvement, human factors assessments may catch usability problems at very early stages. For example, in one International Space Station payload design project, a human factors assessment of a very early block diagram of the layout of stowage and hardware identified problems that would have made operations very difficult. Changes were made to the conceptual design at negligible cost—i.e., rearranging conceptual block diagrams based on the sequence in which users would access items.
• Usability Evaluations of Design Concepts: Evaluations can be performed easily using rapid prototyping tools for hardware and software interfaces, standard human factors engineering data-gathering and analysis tools, and metrics such as task completion time and number of errors. Systematically collected subjective reports from operators also provide useful data. New technologies provide detailed objective information—e.g., eye tracking for display and control layout assessment. Human factors specialists provide assessment capabilities throughout the iterative design process.
• Verification: As mentioned, verification of requirements for usability, error rates, task completion times, and workload is challenging. Methods range from tests with trained personnel in mockups and simulators, to models of human performance, to inspection by experts. As members of the systems engineering team, human factors specialists provide verification guidance from the time requirements are first developed.

Human Factors Engineering Analyses Techniques and Methods
Example methods used to provide human performance data, predict human-system performance, and evaluate human-system designs include:
• Task Analysis: Produces a detailed description of the things a person must do in a system to accomplish a task, with emphasis on requirements for information presentation, decisions to be made, task times, operator actions, and environmental conditions.
• Timeline Analysis: Follows from task analysis. Durations of tasks are identified in task analyses, and the times at which these tasks occur are plotted in graphs, which also show the task sequences. The purpose is to identify requirements for simultaneous incompatible activities and activities that take longer than is available. Timelines for a given task can describe the activities of multiple operators or crewmembers.
• Modeling and Simulation: Models or mockups to make predictions about system performance, compare configurations, evaluate procedures, and evaluate alternatives. Simulations can be as simple as positioning a graphical human model with realistic anthropometric dimensions with a graphical model of an operator station, or they can be complex stochastic models capturing decision points, error opportunities, etc.
• Usability Testing: Based on a task analysis and preliminary design, realistic tasks are carried out in a controlled environment with monitoring and recording equipment. Objective measures such as performance time and number of errors are evaluated; subjective ratings are collected. The outputs systematically report on strengths and weaknesses of candidate design solutions.
• Workload Assessment: Measurement on a standardized scale such as the NASA-TLX or the Cooper-Harper rating scales of the amount and type of work. It assesses operator and crew task loading, which determines the ability of a human to perform the required tasks in the desired time with the desired accuracy. (A simple weighted-scoring sketch follows this list.)
• Human Error and Human Reliability Assessment: Top-down (fault tree analyses) and bottom-up (human factors process failure modes and effects analysis) analyses. The goal is to promote human reliability by creating a system that can tolerate and recover from human errors. Such a system must also support the human role in adding reliability to the system.
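As referenced in the Workload Assessment entry above, one widely used scale is the NASA-TLX, in which six subscales are rated from 0 to 100 and weighted by the number of times each was judged more important in 15 pairwise comparisons. The sketch below shows only the scoring arithmetic; the ratings and weights are hypothetical values chosen for illustration.

    # NASA-TLX subscales, each rated 0-100 by the operator after the task
    ratings = {
        "mental demand": 70,
        "physical demand": 25,
        "temporal demand": 60,
        "performance": 40,      # higher = worse perceived performance
        "effort": 65,
        "frustration": 35,
    }

    # Weights: number of times each subscale was chosen in the 15 pairwise
    # comparisons of subscale importance (weights must total 15).
    weights = {
        "mental demand": 5,
        "physical demand": 1,
        "temporal demand": 4,
        "performance": 2,
        "effort": 2,
        "frustration": 1,
    }

    assert sum(weights.values()) == 15, "pairwise comparison weights must total 15"

    weighted_score = sum(ratings[s] * weights[s] for s in ratings) / 15
    print(f"Weighted NASA-TLX workload score: {weighted_score:.1f} / 100")

Scores computed this way can be compared across design alternatives or task timelines to flag conditions where operator workload is likely to exceed acceptable levels.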
Roles of the Human Factors Specialist
The human factors specialist supports the systems engineering process by representing the users' and maintainers' requirements and capabilities throughout the design, production, and operations stages. Human factors specialists' roles include:
• Identify applicable requirements based on Agency standards for human-system integration during the requirements definition phase.
• Support development of mission concepts by providing information on human performance capabilities and limitations.
• Support task analysis and function allocation with information on human capabilities and limitations.
• Identify system design features that enhance usability. This integrates knowledge of human performance capabilities and design features.
• Support trade studies by providing data on effects of alternative designs on time to complete tasks, workload, and error rates.
• Support trade studies by providing data on effects of alternative designs on skills and training required to operate the system.
• Support design reviews to ensure compliance with human-systems integration requirements.
• Conduct evaluations using mockups and prototypes to provide detailed data on user performance.
• Support development of training and maintenance procedures in conjunction with hardware designers and mission planners.
• Collect data on human-system integration issues during operations to inform future designs.
5.0 Product Realization

This chapter describes the activities in the product realization processes listed in Figure 2.1-1. The chapter is separated into sections corresponding to steps 5 through 9 listed in Figure 2.1-1. The processes within each step are discussed in terms of the inputs, the activities, and the outputs. Additional guidance is provided using examples that are relevant to NASA projects.

The product realization side of the SE engine is where the rubber meets the road. In this portion of the engine, five interdependent processes result in systems that meet the design specifications and stakeholder expectations. These products are produced, acquired, reused, or coded; integrated into higher level assemblies; verified against design specifications; validated against stakeholder expectations; and transitioned to the next level of the system. As has been mentioned in previous sections, products can be models and simulations, paper studies or proposals, or hardware and software. The type and level of product depends on the phase of the life cycle and the product's specific objectives. But whatever the product, all must effectively use the processes to ensure the system meets the intended operational concept.

This effort starts with the technical team taking the output from the system design processes and using the appropriate crosscutting functions, such as data and configuration management, and technical assessments to make, buy, or reuse subsystems. Once these subsystems are realized, they must be integrated to the appropriate level as designated by the appropriate interface requirements. These products are then verified through the Technical Assessment Process to ensure they are consistent with the technical data package and that "the product was built right." Once consistency is achieved, the technical team will validate the products against the stakeholder expectations that "the right product was built." Upon successful completion of validation, the products are transitioned to the next level of the system. Figure 5.0-1 illustrates these processes.

This is an iterative and recursive process. Early in the life cycle, paper products, models, and simulations are run through the five realization processes. As the system matures and progresses through the life cycle, hardware and software products are run through these processes. It is important to catch errors and failures at the lowest level of integration and early in the life cycle so that changes can be made through the design processes with minimum impact to the project.

The next sections describe each of the five product realization processes and their associated products for a given NASA mission.

Figure 5.0-1 Product realization. (The figure depicts the five product realization processes carrying a design through evaluation to transition: Product Implementation (acquire, make/code, or reuse); Product Integration (assembly and functional evaluation); Product Verification (functional, environmental, and operational testing in the integration and test environment); Product Validation (operational testing in the mission environment); and Product Transition (delivery to the next higher level in the PBS or to the operational system).)
Product Realization Keys
• Generate and manage requirements for off-the-shelf hardware/software products as for all other products.
• Understand the differences between verification testing and validation testing.
▶ Verification Testing: Verification testing relates back to the approved requirements set (such as a System Requirements Document (SRD)) and can be performed at different stages in the product life cycle. Verification testing includes: (1) any testing used to assist in the development and maturation of products, product elements, or manufacturing or support processes; and/or (2) any engineering-type test used to verify status of technical progress, to verify that design risks are minimized, to substantiate achievement of contract technical performance, and to certify readiness for initial validation testing. Verification tests use instrumentation and measurements, and are generally accomplished by engineers, technicians, or operator-maintainer test personnel in a controlled environment to facilitate failure analysis.
▶ Validation Testing: Validation relates back to the ConOps document. Validation testing is conducted under realistic conditions (or simulated conditions) on any end product for the purpose of determining the effectiveness and suitability of the product for use in mission operations by typical users, and the evaluation of the results of such tests. Testing is the detailed quantifying method of both verification and validation. However, testing is required to validate final end products to be produced and deployed.
• Consider all customer, stakeholder, technical, programmatic, and safety requirements when evaluating the input necessary to achieve a successful product transition.
• Analyze for any potential incompatibilities with interfaces as early as possible.
• Completely understand and analyze all test data for trends and anomalies.
• Understand the limitations of the testing and any assumptions that are made.
• Ensure that a reused product meets the verification and validation required for the relevant system in which it is to be used, as opposed to relying on the original verification and validation it met for the system of its original use. It would then be required to meet the same verification and validation as a purchased product or a built product. The "pedigree" of a reused product in its original application should not be relied upon in a different system, subsystem, or application.
5.1 Product Implementation

Product implementation is the first process encountered in the SE engine that begins the movement from the bottom of the product hierarchy up towards the Product Transition Process. This is where the plans, designs, analysis, requirements development, and drawings are realized into actual products.

Product implementation is used to generate a specified product of a project or activity through buying, making/coding, or reusing previously developed hardware, software, models, or studies to generate a product appropriate for the phase of the life cycle. The product must satisfy the design solution and its specified requirements.

The Product Implementation Process is the key activity that moves the project from plans and designs into realized products. Depending on the project and life-cycle phase within the project, the product may be hardware, software, a model, simulations, mockups, study reports, or other tangible results. These products may be realized through their purchase from commercial or other vendors, generated from scratch, or through partial or complete reuse of products from other projects or activities. The decision as to which of these realization strategies, or which combination of strategies, will be used for the products of this project will have been made early in the life cycle using the Decision Analysis Process.
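The make/buy/reuse decision referred to above is typically supported by the Decision Analysis Process. A minimal sketch of one common form that analysis can take, a weighted decision matrix, is shown below; the criteria, weights, and scores are hypothetical and are not drawn from any specific NASA project or prescribed method.

    # Hypothetical weighted decision matrix for a make/buy/reuse trade.
    # Scores run from 1 (poor) to 5 (good) against each criterion.
    criteria_weights = {
        "meets requirements": 0.35,
        "life-cycle cost": 0.25,
        "schedule risk": 0.20,
        "heritage/qualification": 0.20,
    }

    alternatives = {
        "make":  {"meets requirements": 5, "life-cycle cost": 2,
                  "schedule risk": 2, "heritage/qualification": 3},
        "buy":   {"meets requirements": 4, "life-cycle cost": 4,
                  "schedule risk": 4, "heritage/qualification": 4},
        "reuse": {"meets requirements": 3, "life-cycle cost": 5,
                  "schedule risk": 3, "heritage/qualification": 5},
    }

    def weighted_score(scores):
        """Sum of criterion scores multiplied by their weights."""
        return sum(criteria_weights[c] * s for c, s in scores.items())

    for name, scores in sorted(alternatives.items(),
                               key=lambda kv: -weighted_score(kv[1])):
        print(f"{name:5s}  weighted score = {weighted_score(scores):.2f}")

In practice the criteria, weights, and scoring evidence would come from the project's documented decision criteria and be recorded with the decision rationale.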
5.1.1 Process Description
Figure 5.1-1 provides a typical flow diagram for the Product Implementation Process and identifies typical inputs, outputs, and activities to consider in addressing product implementation.

Figure 5.1-1 Product Implementation Process. (The figure shows typical inputs: required raw materials from existing resources or external sources; end product design specifications and configuration documentation from the Configuration Management Process; and product implementation-enabling products. These feed the activities: prepare to conduct implementation; if implemented by buying, participate in the purchase of the specified end product; if implemented by making, evaluate the readiness of product implementation-enabling products and make the specified end product; if implemented by reuse, participate in acquiring the reuse end product; prepare appropriate product support documentation; and capture product implementation work products. The outputs, including the desired end product, end product documents and manuals, and product implementation work products, go to the Product Verification, Technical Data Management, and Product Transition Processes.)

5.1.1.1 Inputs
Inputs to the Product Implementation activity depend primarily on whether the end product will be purchased, developed from scratch, or formed by reusing part or all of products from other projects. Typical inputs are shown in Figure 5.1-1.
• Inputs if Purchasing the End Product: If the decision was made to purchase part or all of the products for this project, the end product design specifications are obtained from the configuration management system as well as other applicable documents such as the SEMP.
• Inputs if Making/Coding the End Product: For end products that will be made/coded by the technical team, the inputs will be the configuration-controlled design specifications and raw materials as provided to or purchased by the project.
• Inputs Needed if Reusing an End Product: For end products that will reuse part or all of products generated by other projects, the inputs may be the documentation associated with the product, as well as the product itself. Care must be taken to ensure that these products will indeed meet the specifications and environments for this project. These would have been factors involved in the Decision Analysis Process to determine the make/buy/reuse decision.

5.1.1.2 Process Activities
Implementing the product can take one of three forms:
• Purchase/buy,
• Make/code, or
• Reuse.
These three forms will be discussed in the following subsections. Figure 5.1-1 shows what kind of inputs, outputs, and activities are performed during product implementation regardless of where in the product hierarchy or life cycle it is. These activities include preparing to conduct the implementation, purchasing/making/reusing the product, and capturing the product implementation work products. In some cases, implementing a product may have aspects of more than one of these forms (such as a build-to-print). In those cases, the appropriate aspects of the applicable forms are used.

Prepare to Conduct Implementation
Preparing to conduct the product implementation is a key first step regardless of what form of implementation has been selected. For complex projects, implementation strategy and detailed planning or procedures need to be developed and documented. For less complex projects, the implementation strategy and planning will need to be discussed, approved, and documented as appropriate for the complexity of the project.

The documentation, specifications, and other inputs will also need to be reviewed to ensure they are ready and at an appropriate level of detail to adequately complete the type of implementation form being employed and for the product life-cycle phase. For example, if the "make" implementation form is being employed, the design specifications will need to be reviewed to ensure they are at a design-to level that will allow the product to be developed. If the product is to be bought as a pure Commercial-Off-the-Shelf (COTS) item, the specifications will need to be checked to make sure they adequately describe the vendor characteristics to narrow to a single make/model of their product line.

Finally, the availability and skills of personnel needed to conduct the implementation, as well as the availability of any necessary raw materials, enabling products, or special services, should also be reviewed. Any special training necessary for the personnel to perform their tasks needs to be completed by this time.

Purchase, Make, or Reuse the Product

Purchase the Product
In the first case, the end product is to be purchased from a commercial or other vendor. Design/purchase specifications will have been generated during requirements development and provided as inputs. The technical team will need to review these specifications and ensure they are in a form adequate for the contract or purchase order. This may include the generation of contracts, Statements of Work (SOWs), requests for proposals, purchase orders, or other purchasing mechanisms. The responsibilities of the Government and contractor team should have been documented in the SEMP. This will define, for example, whether NASA expects the vendor to provide a fully verified and validated product or whether the NASA technical team will be performing those duties. The team will need to work with the acquisition team to ensure the accuracy of the contract SOW or purchase order and to ensure that adequate documentation, certificates of compliance, or other specific needs are requested of the vendor.

For contracted purchases, as proposals come back from the vendors, the technical team should work with the contracting officer and participate in the review of the technical information and in the selection of the vendor that best meets the design requirements for acceptable cost and schedule.

As the purchased products arrive, the technical team should assist in the inspection of the delivered product and its accompanying documentation. The team should ensure that the requested product was indeed the one delivered, and that all necessary documentation, such as source code, operator manuals, certificates of compliance, safety information, or drawings, has been received.
The technical team should also ensure that any enabling products necessary to provide test, operations, maintenance, and disposal support for the product are ready or provided as defined in the contract.

Depending on the strategy and roles/responsibilities of the vendor, as documented in the SEMP, a determination/analysis of the vendor's verification and validation compliance may need to be reviewed. This may be done informally or formally as appropriate for the complexity of the product. For products that were verified and validated by the vendor, after ensuring that all work products from this phase have been captured, the product may be ready to enter the Product Transition Process to be delivered to the next higher level or to its final end user. For products that will be verified and validated by the technical team, the product will be ready to be verified after ensuring that all work products for this phase have been captured.

Make/Code the Product
If the strategy is to make or code the product, the technical team should first ensure that the enabling products are ready. This may include ensuring all piece parts are available, drawings are complete and adequate, software design is complete and reviewed, machines to cut the material are available, interface specifications are approved, operators are trained and available, procedures/processes are ready, software personnel are trained and available to generate code, test fixtures are developed and ready to hold products while being generated, and software test cases are available and ready to begin model generation.

The product is then made or coded in accordance with the specified requirements, configuration documentation, and applicable standards. Throughout this process, the technical team should work with the quality organization to review, inspect, and discuss progress and status within the team and with higher levels of management as appropriate. Progress should be documented within the technical schedules. Peer reviews, audits, unit testing, code inspections, simulation checkout, and other techniques may be used to ensure the made or coded product is ready for the verification process.

Reuse
If the strategy is to reuse a product that already exists, care must be taken to ensure that the product is truly applicable to this project, to the intended uses, and to the environment in which it will be used. This should have been a factor used in the decision strategy to make/buy/reuse.

The documentation available from the reuse product should be reviewed by the technical team to become completely familiar with the product and to ensure it will meet the requirements in the intended environment. Any supporting manuals, drawings, or other documentation available should also be gathered.

The availability of any supporting or enabling products or infrastructure needed to complete the fabrication, coding, testing, analysis, verification, validation, or shipping of the product needs to be determined. If any of these products or services are lacking, they will need to be developed or arranged for before progressing to the next phase.

Special arrangements may need to be made, or forms such as nondisclosure agreements may need to be acquired, before the reuse product can be received.

A reused product will frequently have to undergo the same verification and validation as a purchased product or a built product. Relying on prior verification and validation should only be considered if the product's verification and validation documentation meets the verification, validation, and documentation requirements of the current project and the documentation demonstrates that the product was verified and validated against equivalent requirements and expectations. The savings gained from reuse are not necessarily from reduced testing but from a lower likelihood that the item will fail tests and generate rework.

Capture Work Products
Regardless of what implementation form was selected, all work products from the make/buy/reuse process should be captured, including design drawings, design documentation, code listings, model descriptions, procedures used, operator manuals, maintenance manuals, or other documentation as appropriate.

5.1.1.3 Outputs
• End Product for Verification: Unless the vendor performs verification, the made/coded, purchased, or reused end product, in a form appropriate for the life-cycle phase, is provided for the verification process. The form of the end product is a function of the
life-cycle phase and the placement within the system structure (the form of the end product could be hardware, software, model, prototype, first article for test, or single operational article or multiple production articles).

End Product Documents and Manuals: Appropriate documentation is also delivered with the end product to the verification process and to the technical data management process. Documentation may include applicable design drawings; operation, user, maintenance, or training manuals; applicable baseline documents (configuration baseline, specifications, stakeholder expectations); certificates of compliance; or other vendor documentation.

The process is complete when the following activities have been accomplished:
• End product is fabricated, purchased, or reuse modules are acquired.
• End products are reviewed, checked, and ready for verification.
• Procedures, decisions, assumptions, anomalies, corrective actions, lessons learned, etc., resulting from the make/buy/reuse decision are recorded.

5.1.2 Product Implementation Guidance

5.1.2.1 Buying Off-the-Shelf Products

Off-the-shelf (OTS) products are hardware or software that have an existing heritage and usually originate from one of several sources, including commercial, military, and NASA programs. Special care needs to be taken when purchasing OTS products for use in the space environment. Most OTS products were developed for use in the more benign environments of Earth and may not be suitable to endure the harsh space environment, including vacuum, radiation, extreme temperature ranges, extreme lighting conditions, zero gravity, atomic oxygen, lack of convection cooling, launch vibration or acceleration, and shock loads.

When purchasing OTS products, requirements should still be generated and managed. A survey of available OTS products is made and evaluated as to the extent to which they satisfy the requirements. Products that meet all the requirements are good candidates for selection. If no product can be found to meet all the requirements, a trade study needs to be performed to determine whether the requirements can be relaxed or waived, whether the OTS product can be modified to bring it into compliance, or whether another option to build or reuse should be selected.

Several additional factors should be considered when selecting the OTS option:
• Heritage of the product;
• Critical or noncritical application;
• Amount of modification required and who performs it;
• Whether sufficient documentation is available;
• Proprietary, usage, ownership, warranty, and licensing rights;
• Future support for the product from the vendor/provider;
• Any additional validation of the product needed by the project; and
• Agreement on disclosure of defects discovered by the community of users of the product.
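The trade study described above can be as simple as a weighted screening of candidate OTS products against the driving requirements and the selection factors just listed. The sketch below is illustrative only; the factor names, weights, scores, and candidate names are hypothetical and are not taken from this handbook. Its only purpose is to show one way to keep the comparison traceable, so that a high overall score cannot hide an unmet "shall."

```python
# Illustrative OTS screening sketch (hypothetical weights and scores, not NASA guidance).
# Each candidate is scored 0-5 against each selection factor; requirement shortfalls are
# tracked separately so that a good weighted score cannot mask an unmet requirement.

FACTORS = {                        # assumed weights for the selection factors
    "heritage": 0.25,
    "criticality_fit": 0.20,
    "modification_effort": 0.20,   # higher score = less modification needed
    "documentation": 0.10,
    "licensing_rights": 0.10,
    "vendor_support": 0.15,
}

def weighted_score(scores: dict[str, float]) -> float:
    """Combine the 0-5 factor scores into a single weighted figure of merit."""
    return sum(FACTORS[name] * scores[name] for name in FACTORS)

def screen(candidates: dict[str, dict], unmet_requirements: dict[str, list[str]]):
    """Rank candidates; those that meet every requirement sort ahead of those that do not."""
    report = []
    for name, scores in candidates.items():
        report.append({
            "candidate": name,
            "score": round(weighted_score(scores), 2),
            "unmet": unmet_requirements.get(name, []),
        })
    return sorted(report, key=lambda r: (len(r["unmet"]), -r["score"]))

if __name__ == "__main__":
    candidates = {
        "Vendor A radio": {"heritage": 4, "criticality_fit": 3, "modification_effort": 5,
                           "documentation": 4, "licensing_rights": 3, "vendor_support": 4},
        "Vendor B radio": {"heritage": 2, "criticality_fit": 4, "modification_effort": 3,
                           "documentation": 2, "licensing_rights": 5, "vendor_support": 2},
    }
    unmet = {"Vendor B radio": ["REQ-041 radiation tolerance"]}
    for row in screen(candidates, unmet):
        print(row)
```

A candidate flagged with unmet requirements would then feed the relax/waive, modify, or build/reuse decision described above rather than being selected on score alone.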
5.1.2.2 Heritage

"Heritage" refers to the original manufacturer's level of quality and reliability that is built into parts and that has been proven by (1) time in service, (2) number of units in service, (3) mean time between failure performance, and (4) number of use cycles. High-heritage products come from the original supplier, who has maintained the great majority of the original service, design, performance, and manufacturing characteristics. Low-heritage products are those that (1) were not built by the original manufacturer; (2) do not have a significant history of test and usage; or (3) have had significant aspects of the original service, design, performance, or manufacturing characteristics altered. An important factor in assessing the heritage of a COTS product is to ensure that the use or application of the product is relevant to the application for which it is now intended. A product that has high heritage in a ground-based application could have low heritage when placed in a space environment.

The focus of a "heritage review" is to confirm the applicability of the component for the current application. Assessments must be made regarding not only technical interfaces (hardware and software) and performance, but also the environments to which the unit has been previously qualified, including electromagnetic compatibility, radiation, and contamination. The compatibility of the design with parts quality requirements must also be assessed. All noncompliances must be identified, documented, and addressed either by modification to bring the component into compliance or by formal waivers/deviations for accepted deficiencies. This heritage review is commonly held closely after contract award.

When reviewing a product's applicability, it is important to consider the nature of the application. A "catastrophic" application is one where a failure could cause loss of life or vehicle. A "critical" application is one where failure could cause loss of mission. For use in these applications, several additional precautions should be taken, including ensuring the product will not be used near the boundaries of its performance or environmental envelopes. Extra scrutiny by experts should be applied during Preliminary Design Reviews (PDRs) and Critical Design Reviews (CDRs) to ensure the appropriateness of its use.

Modification of an OTS product may be required for it to be suitable for a NASA application. This affects the product's heritage, and therefore the modified product should be treated as a new design. If the product is modified by NASA and not the manufacturer, it would be beneficial for the supplier to have some involvement in reviewing the modification. NASA modification may also require the purchase of additional documentation from the supplier, such as drawings, code, or other design and test descriptions.

For additional information and suggested test and analysis requirements for OTS products, see JSC EA-WI-016 or MSFC MWI 8060.1, both titled Off the Shelf Hardware Utilization in Flight Hardware Development, and G-118-2006e, AIAA Guide for Managing the Use of Commercial Off the Shelf (COTS) Software Components for Mission-Critical Systems.
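Because heritage claims rest in part on quantitative service history (time in service, units in service, mean time between failures, and use cycles), it can help to compute those figures explicitly when assessing a candidate. The snippet below is a generic illustration with invented fleet data; it is not a NASA-prescribed calculation, and, as noted above, a strong ground-based service record does not by itself establish heritage for a space application.

```python
# Illustrative heritage metrics from fleet service history (invented numbers).
from dataclasses import dataclass

@dataclass
class ServiceHistory:
    units_in_service: int
    total_operating_hours: float   # summed across the fleet
    failures: int
    use_cycles: int

def mean_time_between_failures(h: ServiceHistory) -> float:
    """Classic point estimate: cumulative operating time divided by the failure count."""
    if h.failures == 0:
        return float("inf")        # no observed failures; this estimate is unbounded
    return h.total_operating_hours / h.failures

star_tracker = ServiceHistory(units_in_service=28,
                              total_operating_hours=410_000.0,
                              failures=3,
                              use_cycles=15_000)

print(f"Fleet MTBF estimate: {mean_time_between_failures(star_tracker):,.0f} hours")
# The relevance of the environments in which those hours were accumulated still has to
# be shown during the heritage review.
```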
5.2 Product Integration

Product Integration is one of the SE engine product realization processes that make up the system structure. In this process, lower level products are assembled into higher level products and checked to make sure that the integrated product functions properly. It is an element of the processes that lead realized products from a level below to realized end products at a level above, between the Product Implementation, Verification, and Validation Processes.

The purpose of the Product Integration Process is to systematically assemble the higher level product from the lower level products or subsystems (e.g., product elements, units, components, subsystems, or operator tasks); ensure that the product, as integrated, functions properly; and deliver the product. Product integration is required at each level of the system hierarchy. The activities associated with product integration occur throughout the entire product life cycle. This includes all of the incremental steps, including level-appropriate testing, necessary to complete assembly of a product and to enable the top-level product tests to be conducted. The Product Integration Process may include, and often begins with, analysis and simulations (e.g., various types of prototypes) and progresses through increasingly more realistic incremental functionality until the final product is achieved. In each successive build, prototypes are constructed, evaluated, improved, and reconstructed based upon knowledge gained in the evaluation process. The degree of virtual versus physical prototyping required depends on the functionality of the design tools and the complexity of the product and its associated risk. There is a high probability that a product integrated in this manner will pass product verification and validation. For some products, the last integration phase will occur when the product is deployed at its intended operational site. If any problems of incompatibility are discovered during the product verification and validation testing phase, they are resolved one at a time.

The Product Integration Process applies not only to hardware and software systems but also to service-oriented solutions, requirements, specifications, plans, and concepts. The ultimate purpose of product integration is to ensure that the system elements will function as a whole.

5.2.1 Process Description

Figure 5.2-1 provides a typical flow diagram for the Product Integration Process and identifies typical inputs, outputs, and activities to consider in addressing product integration. The activities of the Product Integration Process are truncated to indicate the action and object of the action.

Figure 5.2-1 Product Integration Process. [Flow diagram. Inputs: lower level products to be integrated (from the Product Transition Process); end product design specifications and configuration documentation (from the Configuration Management Process); product integration-enabling products (from existing resources or the Product Transition Process). Activities: prepare to conduct product integration; obtain lower level products for assembly and integration; confirm that received products have been validated; prepare the integration environment for assembly and integration; assemble and integrate the received products into the desired end product; prepare appropriate product support documentation; capture product integration work products. Outputs: desired product (to the Product Verification Process); product documents and manuals (to the Technical Data Management Process); product integration work products (to the Technical Data Management Process).]
5.2.1.1 Inputs

Product Integration encompasses more than a one-time assembly of the lower level products and operator tasks at the end of the design and fabrication phase of the life cycle. An integration plan must be developed and documented. An example outline for an integration plan is provided in Appendix H. Product Integration is conducted incrementally, using a recursive process of assembling lower level products and operator tasks; evaluating them through test, inspection, analysis, or demonstration; and then assembling more lower level products and operator tasks. Planning for Product Integration should be initiated during the concept formulation phase of the life cycle. The basic tasks that need to be established involve the management of internal and external interfaces of the various levels of products and operator tasks to support product integration and are as follows:
• Define interfaces;
• Identify the characteristics of the interfaces (physical, electrical, mechanical, etc.);
• Ensure interface compatibility at all defined interfaces by using a process documented and approved by the project;
• Strictly control all of the interface processes during design, construction, operation, etc.;
• Identify lower level products to be assembled and integrated (from the Product Transition Process);
• Identify assembly drawings or other documentation that show the complete configuration of the product being integrated, a parts list, and any assembly instructions (e.g., torque requirements for fasteners);
• Identify end-product, design-definition-specified requirements (specifications) and configuration documentation for the applicable work breakdown structure model, including interface specifications, in the form appropriate to satisfy the product-line life-cycle phase success criteria (from the Configuration Management Process); and
• Identify Product Integration-enabling products (from existing resources or the Product Transition Process for enabling product realization).

5.2.1.2 Process Activities

This subsection addresses the approach to the top-level implementation of the Product Integration Process, including the activities required to support the process. The project would follow this approach throughout its life cycle.

The following are typical activities that support the Product Integration Process:
• Prepare to conduct Product Integration by (1) preparing a product integration strategy, detailed planning for the integration, and integration sequences and procedures and (2) determining whether the product configuration documentation is adequate to conduct the type of product integration applicable for the product-line life-cycle phase, location of the product in the system structure, and management phase success criteria.
• Obtain the lower level products required to assemble and integrate into the desired product.
• Confirm that the received products that are to be assembled and integrated have been validated to demonstrate that the individual products satisfy the agreed-to set of stakeholder expectations, including interface requirements.
• Prepare the integration environment in which assembly and integration will take place, including evaluating the readiness of the product integration-enabling products and the assigned workforce.
• Assemble and integrate the received products into the desired end product in accordance with the specified requirements, configuration documentation, interface requirements, applicable standards, and integration sequencing and procedures.
• Conduct functional testing to ensure that the assembly is ready to enter verification testing and ready to be integrated into the next level.
• Prepare appropriate product support documentation, such as special procedures for performing product verification and product validation.
• Capture work products and related information generated while performing the Product Integration Process activities.
5.2.1.3 Outputs

The following are typical outputs from this process and destinations for the products from this process:
• Integrated product(s) in the form appropriate to the product-line life-cycle phase and to satisfy phase success criteria (to the Product Verification Process).
• Documentation and manuals in a form appropriate for satisfying the life-cycle phase success criteria, including as-integrated product descriptions and operate-to and maintenance manuals (to the Technical Data Management Process).
• Work products, including reports, records, and nondeliverable outcomes of product integration activities (to support the Technical Data Management Process); the integration strategy document; assembly/check area drawings; system/component documentation sequences and rationale for selected assemblies; interface management documentation; personnel requirements; special handling requirements; system documentation; shipping schedules; test equipment and drivers' requirements; emulator requirements; and identification of limitations for both hardware and software.

5.2.2 Product Integration Guidance

5.2.2.1 Integration Strategy

An integration strategy is developed, as well as supporting documentation, to identify the optimal sequence of receipt, assembly, and activation of the various components that make up the system. This strategy should use business as well as technical factors to ensure an assembly, activation, and loading sequence that minimizes cost and assembly difficulties. The larger or more complex the system or the more delicate the element, the more critical the proper sequence becomes, as small changes can cause large impacts on project results.

The optimal sequence of assembly is built from the bottom up as components become subelements, elements, and subsystems, each of which must be checked prior to fitting into the next higher assembly. The sequence will encompass any effort needed to establish and equip the assembly facilities (e.g., raised floor, hoists, jigs, test equipment, input/output, and power connections). Once established, the sequence must be periodically reviewed to ensure that variations in production and delivery schedules have not had an adverse impact on the sequence or compromised the factors on which earlier decisions were made.
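One common way to make the bottom-up assembly sequence explicit, and easy to re-check as production and delivery schedules change, is to record which lower level items each assembly depends on and derive an order that integrates every component before the assembly that needs it. The sketch below does this with a simple topological sort using Python's standard graphlib module. The product names and dependencies are hypothetical, and a real sequencing effort would also weigh the facility, schedule, and cost factors discussed above.

```python
# Illustrative assembly-sequencing sketch (hypothetical product tree).
# Each entry maps an assembly to the lower level products that must be
# integrated and checked before that assembly can be built.
from graphlib import TopologicalSorter

depends_on = {
    "instrument_suite": ["camera", "spectrometer"],
    "avionics_box":     ["flight_computer", "power_board"],
    "spacecraft_bus":   ["avionics_box", "structure", "harness"],
    "observatory":      ["spacecraft_bus", "instrument_suite"],
}

def assembly_sequence(tree: dict[str, list[str]]) -> list[str]:
    """Return an order in which every item appears after the items it depends on."""
    return list(TopologicalSorter(tree).static_order())

if __name__ == "__main__":
    for step, item in enumerate(assembly_sequence(depends_on), start=1):
        print(f"{step:2d}. integrate and check {item}")
```

Re-running such a check after each schedule change is a lightweight way to confirm that the planned sequence still delivers every lower level product before the assembly that consumes it.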
5.2.2.2 Relationship to Product Implementation

As previously described, Product Implementation is where the plans, designs, analysis, requirements development, and drawings are realized into actual products. Product Integration concentrates on the control of the interfaces and on the verification and validation needed to achieve the correct product to meet the requirements. Product Integration can be thought of as released or phased deliveries. Product Integration is the process that pulls together new and existing products and ensures that they all combine properly into a complete product without interference or complications. If there are issues, the Product Integration Process documents the exceptions, which can then be evaluated to determine if the product is ready for implementation/operations.

Integration occurs at every stage of a project's life cycle. In the Formulation phase, the decomposed requirements need to be integrated into a complete system to verify that nothing is missing or duplicated. In the Implementation phase, the design and hardware need to be integrated into an overall system to verify that they meet the requirements and that there are no duplications or omissions.

The emphasis on the recursive, iterative, and integrated nature of systems engineering highlights how the product integration activities are not only integrated across all of the phases of the entire life cycle in the initial planning stages of the project, but also used recursively across all of the life-cycle phases as the project product proceeds through the flow down and flow up conveyed by the SE engine. This ensures that when changes occur to requirements, design concepts, etc. (usually in response to updates from stakeholders and results from analysis, modeling, or testing), adequate course corrections are made to the project. This is accomplished through reevaluation by driving through the SE engine, enabling all aspects of the product integration activities to be appropriately updated. The result is a product that meets all of the new modifications approved by the project and eliminates the opportunities for costly and time-consuming modifications in the later stages of the project.

5.2.2.3 Product/Interface Integration Support

There are several processes that support the integration of products and interfaces. Each process allows either the integration of products and interfaces or the validation that the integrated products meet the needs of the project.

The following is a list of typical example processes and products that support the integration of products and interfaces and that should be addressed by the project in the overall approach to Product Integration: requirements documents; requirements reviews; design reviews; design drawings and specifications; integration and test plans; hardware configuration control documentation; quality assurance records; interface control requirements/documents; ConOps documents; verification requirement documents; verification reports/analysis; NASA, military, and industry standards; best practices; and lessons learned.
5.2.2.4 Product Integration of the Design Solution

This subsection addresses the more specific implementation of Product Integration related to the selected design solution.

Generally, system/product designs are an aggregation of subsystems and components. This is relatively obvious for complex hardware and/or software systems. The same holds true for many service-oriented solutions. For example, a solution to provide a single person access to the Internet involves hardware, software, and a communications interface. The purpose of Product Integration is to ensure that the combination of these elements achieves the required result (i.e., works as expected). Consequently, internal and external interfaces must be considered in the design and evaluated prior to production.

There are a variety of different testing requirements to verify product integration at all levels. Qualification testing and acceptance testing are examples of two of these test types that are performed as the product is integrated. Another type of testing that is important to the design and the ultimate product integration is a planned test process in which development items are tested under actual or simulated mission profile environments to disclose design deficiencies and to provide engineering information on failure modes and mechanisms. If accomplished with development items, this provides early insight into any issues that may otherwise only be observed at the late stages of product integration, where it becomes costly to incorporate corrective actions. For large, complex systems/products, integration/verification efforts are accomplished using a prototype.

5.2.2.5 Interface Management

The objective of interface management is to achieve functional and physical compatibility among all interrelated system elements. Interface management is defined in more detail in Section 6.3. An interface is any boundary between one area and another; it may be cognitive, external, internal, functional, or physical. Interfaces occur within the system (internal) as well as between the system and another system (external) and may be functional or physical (e.g., mechanical, electrical) in nature. Interface requirements are documented in an Interface Requirements Document (IRD). Care should be taken to define interface requirements and to avoid specifying design solutions when creating the IRD. In its final form, the Interface Control Document (ICD) describes the detailed implementation of the requirements contained in the IRD. An interface control plan describes the management process for IRDs and ICDs. This plan provides the means to identify and resolve interface incompatibilities and to determine the impact of interface design changes.

5.2.2.6 Compatibility Analysis

During the program's life, compatibility and accessibility must be maintained for the many diverse elements. Compatibility analysis of the interface definition demonstrates completeness of the interface and traceability records. As changes are made, an authoritative means of controlling the design of interfaces must be managed with appropriate documentation, thereby avoiding the situation in which hardware or software, when integrated into the system, fails to function as part of the system as intended. Ensuring that all system pieces work together is a complex task that involves teams, stakeholders, contractors, and program management from the end of the initial concept definition stage through the operations and support stage. Physical integration is accomplished during Phase D. At the finer levels of resolution, pieces must be tested, assembled and/or integrated, and tested again. The systems engineer's role includes performance of delegated management duties such as configuration control and overseeing the integration, verification, and validation processes.
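A lightweight way to support compatibility analysis is to keep the agreed interface characteristics from the IRDs/ICDs in a machine-checkable form and compare both sides of every connection whenever a change is proposed. The sketch below is illustrative only: the interface fields, tolerances, and example values are hypothetical, and real programs manage this through configuration-controlled IRDs/ICDs rather than a script. It simply shows the kind of mismatch detection a compatibility analysis performs.

```python
# Illustrative interface compatibility check (hypothetical fields and values).
from dataclasses import dataclass

@dataclass(frozen=True)
class ElectricalInterface:
    connector: str
    voltage_v: float
    max_current_a: float
    data_protocol: str

def incompatibilities(side_a: ElectricalInterface, side_b: ElectricalInterface) -> list[str]:
    """Return a list of mismatches between the two sides of one interface."""
    issues = []
    if side_a.connector != side_b.connector:
        issues.append(f"connector mismatch: {side_a.connector} vs {side_b.connector}")
    if abs(side_a.voltage_v - side_b.voltage_v) > 0.1:
        issues.append(f"voltage mismatch: {side_a.voltage_v} V vs {side_b.voltage_v} V")
    if side_a.data_protocol != side_b.data_protocol:
        issues.append(f"protocol mismatch: {side_a.data_protocol} vs {side_b.data_protocol}")
    if side_b.max_current_a < side_a.max_current_a:
        issues.append("receiving side cannot carry the specified current")
    return issues

bus_side = ElectricalInterface("DB-25", 28.0, 2.0, "MIL-STD-1553")
payload_side = ElectricalInterface("DB-25", 28.0, 1.5, "MIL-STD-1553")

for issue in incompatibilities(bus_side, payload_side) or ["no incompatibilities found"]:
    print(issue)
```

Running such a comparison as part of change evaluation helps flag, before hardware or software is built, the situation described above in which an element integrates physically but fails to function as part of the system.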
5.2.2.7 Interface Management Tasks

The interface management tasks begin early in the development effort, when interface requirements can be influenced by all engineering disciplines and applicable interface standards can be invoked. They continue through design and checkout. During design, emphasis is on ensuring that interface specifications are documented and communicated. During system element checkout, both prior to assembly and in the assembled configuration, emphasis is on verifying the implemented interfaces. Throughout the product integration process activities, interface baselines are controlled to ensure that changes in the design of system elements have minimal impact on other elements with which they interface. During testing or other validation and verification activities, multiple system elements are checked out as integrated subsystems or systems. The following provides more details on these tasks.

Define Interfaces

The bulk of integration problems arise from unknown or uncontrolled aspects of interfaces. Therefore, system and subsystem interfaces are specified as early as possible in the development effort. Interface specifications address logical, physical, electrical, mechanical, human, and environmental parameters as appropriate. Intra-system interfaces are the first design consideration for developers of the system's subsystems. Interfaces are used from previous development efforts or are developed in accordance with interface standards for the given discipline or technology. Novel interfaces are constructed only for compelling reasons. Interface specifications are verified against interface requirements. Typical products include interface descriptions, ICDs, interface requirements, and specifications.

Verify Interfaces

In verifying the interfaces, the systems engineer must ensure that the interfaces of each element of the system or subsystem are controlled and known to the developers. Additionally, when changes to the interfaces are needed, the changes must at least be evaluated for possible impact on other interfacing elements and then communicated to the affected developers. Although all affected developers are part of the group that makes changes, such changes need to be captured in a readily accessible place so that the current state of the interfaces can be known to all. Typical products include ICDs and exception reports.

The use of emulators for verifying hardware and software interfaces is acceptable where the limitations of the emulator are well characterized and meet the operating environment characteristics and behavior requirements for interface verification. The integration plan should specifically document the scope of use for emulators.

Inspect and Acknowledge System and Subsystem Element Receipt

Acknowledging receipt and inspecting the condition of each system or subsystem element is required prior to assembling the system in accordance with the intended design. The elements are checked for quantity, obvious damage, and consistency between the element description and a list of element requirements. Typical products include acceptance documents, delivery receipts, and a checked packing list.

Verify System and Subsystem Elements

System and subsystem element verification confirms that the implemented design features of developed or purchased system elements meet their requirements. This is intended to ensure that each element of the system or subsystem functions in its intended environment, including those elements that are OTS for other environments. Such verifications may be by test (e.g., regression testing as a tool or subsystem/elements are combined), inspection, analysis (deficiency or compliance reports), or demonstration and may be executed either by the organization that will assemble the system or subsystem or by the producing organization. A method of discerning the elements that "passed" verification from those elements that "failed" needs to be in place. Typical products include verified system features and exception reports.

Verify Element Interfaces

Verification of the system element interfaces ensures that the elements comply with the interface specification prior to assembly in the system. The intent is to ensure that the interface of each element of the system or subsystem is verified against its corresponding interface specification. Such verification may be by test, inspection, analysis, or demonstration and may be executed by the organization that will assemble the system or subsystem or by another organization. Typical products include verified system element interfaces, test reports, and exception reports.

Integrate and Verify

Assembly of the elements of the system should be performed in accordance with the established integration strategy. This ensures that the assembly of the system elements into larger or more complex assemblies is conducted in accordance with the planned strategy. To ensure that the integration has been completed, a verification of the integrated system interfaces should be performed. Typical products include integration reports, exception reports, and an integrated system.
5.3 Product Verification

The Product Verification Process is the first of the verification and validation processes conducted on a realized end product. As used in the context of the systems engineering common technical processes, a realized product is one provided by either the Product Implementation Process or the Product Integration Process in a form suitable for meeting applicable life-cycle phase success criteria. Realization is the act of verifying, validating, and transitioning the realized product for use at the next level up of the system structure or to the customer. Simply put, the Product Verification Process answers the critical question, Was the end product realized right? The Product Validation Process addresses the equally critical question, Was the right end product realized?

Verification proves that a realized product for any system model within the system structure conforms to the build-to requirements (for software elements) or realize-to specifications and design descriptive documents (for hardware elements, manual procedures, or composite products of hardware, software, and manual procedures).

Distinctions Between Product Verification and Product Validation

From a process perspective, product verification and validation may be similar in nature, but the objectives are fundamentally different. It is essential to confirm that the realized product is in conformance with its specifications and design description documentation (i.e., verification). Such specifications and documents will establish the configuration baseline of that product, which may have to be modified at a later time. Without a verified baseline and appropriate configuration controls, such later modifications could be costly or cause major performance problems. However, from a customer point of view, the interest is in whether the end product provided will do what the customer intended within the environment of use (i.e., validation). When cost effective and warranted by analysis, the expense of validation testing alone can be mitigated by combining tests to perform verification and validation simultaneously.

Differences Between Verification and Validation Testing

Verification Testing: Verification testing relates back to the approved requirements set (such as an SRD) and can be performed at different stages in the product life cycle. Verification testing includes (1) any testing used to assist in the development and maturation of products, product elements, or manufacturing or support processes and/or (2) any engineering-type test used to verify the status of technical progress, verify that design risks are minimized, substantiate achievement of contract technical performance, and certify readiness for initial validation testing. Verification tests use instrumentation and measurements and are generally accomplished by engineers, technicians, or operator-maintainer test personnel in a controlled environment to facilitate failure analysis.

Validation Testing: Validation relates back to the ConOps document. Validation testing is conducted under realistic conditions (or simulated conditions) on any end product to determine the effectiveness and suitability of the product for use in mission operations by typical users and to evaluate the results of such tests. Testing is the detailed quantifying method of both verification and validation. However, testing is required to validate final end products to be produced and deployed.

The outcome of the Product Verification Process is confirmation that the "as-realized product," whether achieved by implementation or integration, conforms to its specified requirements, i.e., verification of the end product. This subsection discusses the process activities, inputs, outcomes, and potential deficiencies.

5.3.1 Process Description

Figure 5.3-1 provides a typical flow diagram for the Product Verification Process and identifies typical inputs, outputs, and activities to consider in addressing product verification.

Figure 5.3-1 Product Verification Process. [Flow diagram. Inputs: end product to be verified (from the Product Implementation or Product Integration Process); specified requirements baseline (from the Configuration Management Process); Product Verification Plan (from the Design Solution Definition and Technical Planning Processes); Product Verification-enabling products (from existing resources or the Product Transition Process). Activities: prepare to conduct product verification; perform the product verification; analyze the outcomes of the product verification; prepare a product verification report; capture the work products from product verification. Outputs: verified end product (to the Product Validation Process); product verification results (to the Technical Assessment Process); product verification report and product verification work products (to the Technical Data Management Process).]

5.3.1.1 Inputs

Key inputs to the process are the product to be verified, the verification plan, the specified requirements baseline, and any enabling products needed to perform the Product Verification Process (including the ConOps, mission needs and goals, requirements and specifications, interface control drawings, testing standards and policies, and Agency standards and policies).
5.3.1.2 Process Activities

There are five major steps in the Product Verification Process: (1) verification planning (prepare to implement the verification plan); (2) verification preparation (prepare for conducting verification); (3) conduct verification (perform verification); (4) analyze verification results; and (5) capture the verification work products.

The objective of the Product Verification Process is to generate the evidence necessary to confirm that end products, from the lowest level of the system structure to the highest, conform to the specified requirements (specifications and descriptive documents) to which they were realized, whether by the Product Implementation Process or by the Product Integration Process.

Product Verification is usually performed by the developer that produced (or "realized") the end product, with participation of the end user and customer. Product Verification confirms that the as-realized product, whether it was achieved by Product Implementation or Product Integration, conforms to the specified requirements (specifications and descriptive documentation) used for making or assembling and integrating the end product. Developers of the system, as well as the users, are typically involved in verification testing. The customer and Quality Assurance (QA) personnel are also critical participants in the verification planning and execution activities.

Product Verification Planning

Planning to conduct the product verification is a key first step. From the relevant specifications and product form, the type of verification (e.g., analysis, demonstration, inspection, or test) should be established based on the life-cycle phase, cost, schedule, resources, and the position of the end product within the system structure. The verification plan (an output of the Technical Planning Process, based on design solution outputs) should be reviewed for any specific procedures, constraints, success criteria, or other verification requirements. (See Appendix I for a sample verification plan outline.)
Verification Plan and Methods

The task of preparing the verification plan includes establishing the type of verification to be performed, dependent on the life-cycle phase; the position of the product in the system structure; the form of the product used; and the related costs of verification of individual specified requirements. The types of verification include analysis, inspection, demonstration, and test, or some combination of these four. The verification plan, typically written at a detailed technical level, plays a pivotal role in bottom-up product realization.

Types of Testing

There are many different types of testing that can be used in verification of an end product. These examples are provided for consideration: Aerodynamic; Acceptance; Acoustic; Burn-in; Characterization; Component; Drop; Electromagnetic Compatibility; Electromagnetic Interference; Environmental; Flow; G-loading; Go or No-Go; High-/Low-Voltage Limits; Human Factors Engineering/Human-in-the-Loop Testing; Integration; Leak Rates; Lifetime/Cycling; Manufacturing/Random Defects; Nominal; Off-Nominal; Operational; Parametric; Performance; Pressure Cycling; Pressure Limits; Qualification; Structural Functional; Security Checks; System; Thermal Cycling; Thermal Limits; Thermal Vacuum; and Vibration.

Note: Close alignment of the verification plan with the project's SEMP is absolutely essential.

Verification can be performed recursively throughout the project life cycle and on a wide variety of product forms. For example:
• Simulated (algorithmic models, virtual reality simulator);
• Mockup (plywood, brass board, breadboard);
• Concept description (paper report);
• Prototype (product with partial functionality);
• Engineering unit (fully functional but may not be the same form/fit);
• Design verification test units (form, fit, and function are the same, but they may not have flight parts);
• Qualification units (identical to flight units but may be subjected to extreme environments); and
• Flight units (end product that is flown, including protoflight units).

Any of these product forms may be in any of these states:
• Produced (built, fabricated, manufactured, or coded);
• Reused (modified internal nondevelopmental products or OTS product); and
• Assembled and integrated (a composite of lower level products).

The conditions and environment under which the product is to be verified should be established and the verification planned based on the associated entrance/success criteria identified. The Decision Analysis Process should be used to help finalize the planning details.

Procedures should be prepared to conduct verification based on the type (e.g., analysis, inspection, demonstration, or test) planned. These procedures are typically developed during the design phase of the project life cycle and matured as the design is matured. Operational use scenarios are thought through so as to explore all possible verification activities to be performed.

Note: The final, official verification of the end product should be for a controlled unit. Typically, attempting to "buy off" a "shall" on a prototype is not acceptable; verification is usually completed on a qualification, flight, or other more final, controlled unit.
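Verification planning of this kind is often captured as a verification (requirements compliance) matrix that records, for each "shall," the selected method, the product form on which it will be performed, and the success criteria. The sketch below shows one minimal way to hold that information; the requirement identifiers, text, and entries are hypothetical and are shown only to make the structure concrete.

```python
# Illustrative verification matrix entries (hypothetical requirements and criteria).
from dataclasses import dataclass

METHODS = {"analysis", "inspection", "demonstration", "test"}

@dataclass
class VerificationItem:
    requirement_id: str
    text: str
    method: str                  # one of METHODS
    product_form: str            # e.g., engineering unit, qualification unit, flight unit
    success_criteria: str
    status: str = "planned"      # planned -> conducted -> passed/failed

    def __post_init__(self):
        if self.method not in METHODS:
            raise ValueError(f"{self.requirement_id}: unknown method '{self.method}'")

matrix = [
    VerificationItem("SYS-012", "The battery shall provide 28 V +/- 2 V.",
                     "test", "qualification unit",
                     "voltage within limits over the mission profile"),
    VerificationItem("SYS-045", "The arming pin shall carry a 'Remove Before Flight' flag.",
                     "inspection", "flight unit",
                     "flag present with required markings"),
]

for item in matrix:
    print(f"{item.requirement_id}: {item.method} on {item.product_form} [{item.status}]")
```

Keeping the method and product form next to each requirement makes it easier to confirm, before verification begins, that every "shall" has a planned closure path appropriate to the life-cycle phase.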
Types of Verification

• Analysis: The use of mathematical modeling and analytical techniques to predict the suitability of a design to stakeholder expectations based on calculated data or data derived from lower system structure end product verifications. Analysis is generally used when a prototype; engineering model; or fabricated, assembled, and integrated product is not available. Analysis includes the use of modeling and simulation as analytical tools. A model is a mathematical representation of reality. A simulation is the manipulation of a model.

• Demonstration: Showing that the use of an end product achieves the individual specified requirement. It is generally a basic confirmation of performance capability, differentiated from testing by the lack of detailed data gathering. Demonstrations can involve the use of physical models or mockups; for example, a requirement that all controls shall be reachable by the pilot could be verified by having a pilot perform flight-related tasks in a cockpit mockup or simulator. A demonstration could also be the actual operation of the end product by highly qualified personnel, such as test pilots, who perform a one-time event that demonstrates a capability to operate at extreme limits of system performance, an operation not normally expected from a representative operational pilot.

• Inspection: The visual examination of a realized end product. Inspection is generally used to verify physical design features or specific manufacturer identification. For example, if there is a requirement that the safety arming pin has a red flag with the words "Remove Before Flight" stenciled on the flag in black letters, a visual inspection of the arming pin flag can be used to determine if this requirement was met.

• Test: The use of an end product to obtain detailed data needed to verify performance, or to provide sufficient information to verify performance through further analysis. Testing can be conducted on final end products, breadboards, brass boards, or prototypes. Testing produces data at discrete points for each specified requirement under controlled conditions and is the most resource-intensive verification technique. As the saying goes, "Test as you fly, and fly as you test." (See Subsection 5.3.2.5.)

Outcomes of verification planning include the following:
• The verification type that is appropriate for showing or proving that the realized product conforms to its specified requirements is selected.
• The product verification procedures are clearly defined based on (1) the procedures for each type of verification selected, (2) the purpose and objective of each procedure, (3) any pre-verification and post-verification actions, and (4) the criteria for determining the success or failure of the procedure.
• The verification environment (e.g., facilities, equipment, tools, simulations, measuring devices, personnel, and climatic conditions) in which the verification procedures will be implemented is defined.
• As appropriate, project risk items are updated based on approved verification strategies that cannot duplicate fully integrated test systems, configurations, and/or target operating environments. Rationales, trade space, optimization results, and implications of the approaches are documented in the new or revised risk statements, as well as references to accommodate future design, test, and operational changes to the project baseline.

Note: Verification planning begins early in the project life cycle, during the requirements development phase. (See Section 4.2.) Which verification approach to use should be decided as part of requirements development in order to plan for the future activities, establish special requirements derived from the verification-enabling products identified, and ensure that each technical statement is a verifiable requirement. Updates to verification planning continue throughout logical decomposition and design development, especially as design reviews and simulations shed light on items under consideration. (See Section 6.1.)

Product Verification Preparation

In preparation for verification, the specified requirements (outputs of the Design Solution Process) are collected and confirmed. The product to be verified is obtained (output from implementation or integration), as are any enabling products and support resources that are necessary for verification (requirements identified and acquisition initiated by design solution definition activities). The final element of verification preparation is the preparation of the verification environment (e.g., facilities, equipment, tools, simulations, measuring devices, personnel, and climatic conditions). Identification of the environmental requirements is necessary, and the implications of those requirements must be carefully considered.
Note: Depending on the nature of the verification effort and the life-cycle phase the program is in, some type of review to assess readiness for verification (as well as for validation later) is typically held. In earlier phases of the life cycle, these reviews may be held informally; in later phases, this review becomes a formal event called a Test Readiness Review (TRR). TRRs and other technical reviews are an activity of the Technical Assessment Process. On most projects, a number of TRRs with tailored entrance/success criteria are held to assess the readiness and availability of test ranges, test facilities, trained testers, instrumentation, integration labs, support equipment, and other enabling products. Peer reviews are additional reviews that may be conducted formally or informally to ensure readiness for verification (as well as to review the results of the verification process).

Outcomes of verification preparation include the following:
• The preparations for performing the verification as planned are completed;
• An appropriate set of specified requirements and supporting configuration documentation is available and on hand;
• Articles/models to be used for verification are on hand, assembled, and integrated with the verification environment according to verification plans and schedules;
• The resources needed to conduct the verification are available according to the verification plans and schedules; and
• The verification environment is evaluated for adequacy, completeness, readiness, and integration.

Conduct Planned Product Verification

The actual act of verifying the end product is conducted as spelled out in the plans and procedures, and conformance is established to each specified verification requirement. The responsible engineer should ensure that the procedures were followed and performed as planned, that the verification-enabling products were calibrated correctly, and that the data were collected and recorded for the required verification measures.

The Decision Analysis Process should be used to help make decisions with respect to making needed changes in the verification plans, environment, and/or conduct.

Outcomes of conducting verification include the following:
• A verified product is established, with supporting confirmation that the appropriate results were collected and evaluated to show completion of verification objectives;
• A determination as to whether the realized end product (in the appropriate form for the life-cycle phase) complies with its specified requirements;
• A determination that the verification product was appropriately integrated with the verification environment and that each specified requirement was properly verified; and
• A determination that product functions were verified both together and with interfacing products throughout their performance envelope.

Analyze Product Verification Results

Once the verification activities have been completed, the results are collected and analyzed. The data are analyzed for quality, integrity, correctness, consistency, and validity, and any verification anomalies, variations, and out-of-compliance conditions are identified and reviewed.

Variations, anomalies, and out-of-compliance conditions must be recorded and reported for followup action and closure. Verification results should be recorded in the requirements compliance matrix developed during the Technical Requirements Definition Process, or in another mechanism used to trace compliance for each verification requirement.

System design and product realization process activities may be required to resolve anomalies that do not result from poor verification conduct, design, or conditions. If there are anomalies not resulting from the verification conduct, design, or conditions, and the mitigation of these anomalies results in a change to the product, the verification may need to be planned and conducted again.
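Recording each result against its requirement in the compliance matrix, and keeping every discrepancy open until it is dispositioned, is what later allows the project to show that all "shalls" were closed. A minimal sketch of that bookkeeping follows; the identifiers and statuses are hypothetical, and actual projects typically do this in a requirements management or test data system rather than in code.

```python
# Illustrative compliance and discrepancy bookkeeping (hypothetical data).
from dataclasses import dataclass, field

@dataclass
class Discrepancy:
    report_id: str
    description: str
    corrective_action: str = ""
    closed: bool = False

@dataclass
class ComplianceRecord:
    requirement_id: str
    result: str = "not run"              # not run / pass / fail / waived
    discrepancies: list[Discrepancy] = field(default_factory=list)

def verification_complete(records: list[ComplianceRecord]) -> bool:
    """True only when every requirement passed (or was waived) and every discrepancy is closed."""
    for rec in records:
        if rec.result not in ("pass", "waived"):
            return False
        if any(not dr.closed for dr in rec.discrepancies):
            return False
    return True

records = [
    ComplianceRecord("SYS-012", "pass"),
    ComplianceRecord("SYS-045", "fail",
                     [Discrepancy("DR-101", "flag stencil missing", "re-mark and re-inspect")]),
]
print("ready to proceed:", verification_complete(records))  # False until DR-101 is closed
```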
Outcomes of analyzing the verification results include the following:
• End-product variations, anomalies, and out-of-compliance conditions have been identified;
• Appropriate replanning, redefinition of requirements, design, and reverification have been accomplished to resolve anomalies, variations, or out-of-compliance conditions (for problems not caused by poor verification conduct);
• Variances, discrepancies, or waiver conditions have been accepted or dispositioned;
• Discrepancy and corrective action reports have been generated as needed; and
• The verification report is completed.

Reengineering

Based on analysis of verification results, it could be necessary to re-realize the end product used for verification or to reengineer the end products assembled and integrated into the product being verified, depending on where and what type of defect was found.

Reengineering could require the reapplication of the system design processes (Stakeholder Expectations Definition, Technical Requirements Definition, Logical Decomposition, and Design Solution Definition).

Verification Deficiencies

Verification test outcomes can be unsatisfactory for several reasons. One reason is poor conduct of the verification (e.g., procedures not followed, equipment not calibrated, improper verification environmental conditions, or failure to control other variables not involved in verifying a specified requirement). A second reason could be that the realized end product used was not realized correctly. Reapplying the system design processes could create the need for the following:
• Reengineering products lower in the system structure that make up the product and that were found to be defective (i.e., they failed to satisfy verification requirements) and/or
• Reperforming the Product Verification Process.

Note: Nonconformances and discrepancy reports may be directly linked with the Technical Risk Management Process. Depending on the nature of the nonconformance, approval through such bodies as a material review board or configuration control board (which typically includes risk management participation) may be required.

Pass Verification But Fail Validation?

Many systems successfully complete verification but then are unsuccessful in some critical phase of the validation process, delaying development and causing extensive rework and possible compromises with the stakeholder. Developing a solid ConOps in the early phases of the project (and refining it through the requirements development and design phases) is critical to preventing unsuccessful validation. Communication with stakeholders helps to identify operational scenarios and key needs that must be understood when designing and implementing the end product. Should the product fail validation, redesign may be a necessary reality. Review of the understood requirements set, the existing design, operational scenarios, and support material may be necessary, as well as negotiations and compromises with the customer, other stakeholders, and/or end users to determine what, if anything, can be done to correct or resolve the situation. This can add time and cost to the overall project or, in some cases, cause the project to fail or be cancelled.

Capture Product Verification Work Products

Verification work products (inputs to the Technical Data Management Process) take many forms and involve many sources of information. The capture and recording of verification results and related data is a very important, but often underemphasized, step in the Product Verification Process.

Verification results, anomalies, and any corrective action(s) taken should be captured, as should all relevant results from the application of the Product Verification Process (related decisions, rationale for the decisions made, assumptions, and lessons learned).

Outcomes of capturing verification work products include the following:
• Verification work products are recorded, e.g., the type of verification, procedures, environments, outcomes, decisions, assumptions, corrective actions, and lessons learned.
• Variations, anomalies, and out-of-compliance conditions have been identified and documented, including the actions taken to resolve them.
• Proof that the realized end product did or did not satisfy the specified requirements is documented.
• The verification report is developed, including:
  ▶ Recorded test/verification results/data;
  ▶ Version of the set of specified requirements used;
  ▶ Version of the product verified;
  ▶ Version or standard for tools, data, and equipment used;
  ▶ Results of each verification, including pass or fail declarations; and
  ▶ Expected versus actual discrepancies.

5.3.1.3 Outputs

Key outputs from the process are:
• Discrepancy reports and identified corrective actions;
• Verified product to validation or integration; and
• Verification report(s) and updates to requirements compliance documentation (including verification plans, verification procedures, verification matrices, verification results and analysis, and test/demonstration/inspection/analysis records).

Success criteria include (1) documented objective evidence of compliance (or waiver, as appropriate) with each system-of-interest requirement and (2) closure of all discrepancy reports. The Product Verification Process is not considered or designated complete until all discrepancy reports are closed (i.e., all errors are tracked to closure).

5.3.2 Product Verification Guidance

5.3.2.1 Verification Program

A verification program should be tailored to the project it supports. The project manager/systems engineer must work with the verification engineer to develop a verification program concept. Many factors need to be considered in developing this concept and the subsequent verification program. These factors include:
• Project type, especially for flight projects. Verification methods and timing depend on:
  ▶ The type of flight article involved (e.g., an experiment, payload, or launch vehicle).
  ▶ NASA payload classification (NPR 8705.4, Risk Classification for NASA Payloads). The guidelines are intended to serve as a starting point for establishing the formality of test programs, which can be tailored to the needs of a specific project based on the "A-D" payload classification.
  ▶ Project cost and schedule implications. Verification activities can be significant drivers of a project's cost and schedule; these implications should be considered early in the development of the verification program. Trade studies should be performed to support decisions about verification methods and requirements and the selection of facility types and locations. For example, a trade study might be made to decide between performing a test at a centralized facility or at several decentralized locations.
  ▶ Risk implications. Risk management must be considered in the development of the verification program. Qualitative risk assessments and quantitative risk analyses (e.g., a Failure Modes, Effects, and Criticality Analysis (FMECA)) often identify new concerns that can be mitigated by additional testing, thus increasing the extent of verification activities. Other risk assessments contribute to trade studies that determine the preferred methods of verification to be used and when those methods should be performed. For example, a trade might be made between performing a model test versus determining model characteristics by a less costly, but less revealing, analysis. The project manager/systems engineer must determine what risks are acceptable in terms of the project's cost and schedule.
• Availability of verification facilities/sites and transportation assets to move an article from one location to another (when needed). This requires coordination with the Integrated Logistics Support (ILS) engineer.
• Acquisition strategy (i.e., in-house development or system contract). Often, a NASA field center can shape a contractor's verification process through the project's SOW.
• Degree of design inheritance and hardware/software reuse.
5.3.2.2 Verification in the Life Cycle

The type of verification completed will be a function of the life-cycle phase and the position of the end product within the system structure. The end product must be verified and validated before it is transitioned to the next level up as part of the bottom-up realization process. (See Figure 5.3-2.) While illustrated here as separate processes, there can be considerable overlap between some verification and validation events when they are implemented.

Figure 5.3-2 Bottom-up realization process. [Diagram. At each tier of the system structure (Tier 1 through Tier 5 end products), the end product is verified against its specified requirements and validated against stakeholder expectations and the ConOps, then delivered as a verified end product to the next level up; the Tier 1 end product is delivered to the end user/use environment.]

Quality Assurance in Verification

Even with the best of available designs, hardware fabrication, software coding, and testing, projects are subject to the vagaries of nature and human beings. The systems engineer needs to have some confidence that the system actually produced and delivered is in accordance with its functional, performance, and design requirements. QA provides an independent assessment to the project manager/systems engineer of the items produced and the processes used during the project life cycle. The QA engineer typically acts as the systems engineer's eyes and ears in this context.

The QA engineer typically monitors the resolution and closeout of nonconformances and problem/failure reports; verifies that the physical configuration of the system conforms to the build-to (or code-to) documentation approved at CDR; and collects and maintains QA data for subsequent failure analyses. The QA engineer also participates in major reviews (primarily SRR, PDR, CDR, and FRR) on issues of design, materials, workmanship, fabrication and verification processes, and other characteristics that could degrade product system quality.

The project manager/systems engineer must work with the QA engineer to develop a QA program (the extent, responsibility, and timing of QA activities) tailored to the project it supports. In part, the QA program ensures that verification requirements are properly specified, especially with respect to test environments, test configurations, and pass/fail criteria, and it monitors qualification and acceptance tests to ensure compliance with verification requirements and test procedures and to ensure that test data are correct and complete.
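The bottom-up flow of Figure 5.3-2 can be pictured as a recursion over the product tree: each end product is verified against its own specified requirements (and validated against its stakeholder expectations) only after the products one tier below it have been verified and delivered. The sketch below illustrates that ordering only; the tier names are hypothetical and the printed verify/validate/deliver steps stand in for the real activities described in this chapter.

```python
# Illustrative bottom-up realization over a product hierarchy (hypothetical tree).
from dataclasses import dataclass, field

@dataclass
class Product:
    name: str
    children: list["Product"] = field(default_factory=list)

def realize(product: Product, depth: int = 0) -> None:
    """Verify and validate lower tier products before the product that integrates them."""
    for child in product.children:
        realize(child, depth + 1)
    indent = "  " * depth
    print(f"{indent}verify {product.name} against its specified requirements")
    print(f"{indent}validate {product.name} against stakeholder expectations / ConOps")
    print(f"{indent}deliver verified {product.name} to the next level up")

observatory = Product("Tier 1 observatory", [
    Product("Tier 2 spacecraft bus", [Product("Tier 3 avionics box")]),
    Product("Tier 2 instrument suite"),
])
realize(observatory)
```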
  • 105. 5.3 Product VerificationConfiguration Verification ware are in compliance with all functional, performance,Configuration verification is the process of verifying that and design requirements and are ready for shipment toresulting products (e.g., hardware and software items) the launch site. The acceptance stage begins with the ac-conform to the baselined design and that the baseline ceptance of each individual component or piece part fordocumentation is current and accurate. Configuration assembly into the flight/operations article, continuingverification is accomplished by two types of control gate through the System Acceptance Review (SAR). (Seeactivity: audits and technical reviews. Subsection 6.7.2.1.) Some verifications cannot be performed after a flight/Qualification Verification operations article, especially a large one, has been assem-Qualification-stage verification activities begin after bled and integrated (e.g., due to inaccessibility). Whencompletion of development of the flight/operations hard- this occurs, these verifications are to be performed duringware designs and include analyses and testing to ensure fabrication and integration, and are known as “in-pro-that the flight/operations or flight-type hardware (and cess” tests. In this case, acceptance testing begins with in-software) will meet functional and performance require- process testing and continues through functional testing,ments in anticipated environmental conditions. During environmental testing, and end-to-end compatibilitythis stage, many performance requirements are verified, testing. Functional testing normally begins at the com-while analyses and models are updated as test data are ac- ponent level and continues at the systems level, endingquired. Qualification tests generally are designed to sub- with all systems operating simultaneously.ject the hardware to worst-case loads and environmentalstresses plus a defined level of margin. Some of the veri- When flight/operations hardware is unavailable, or itsfications performed to ensure hardware compliance are use is inappropriate for a specific test, simulators may bevibration/acoustic, pressure limits, leak rates, thermal used to verify interfaces. Anomalies occurring duringvacuum, thermal cycling, Electromagnetic Interference a test are documented on the appropriate reportingand Electromagnetic Compatibility (EMI/EMC), high- system, and a proposed resolution should be defined be-and low-voltage limits, and lifetime/cycling. Safety re- fore testing continues. Major anomalies, or those that arequirements, defined by hazard analysis reports, may also not easily dispositioned, may require resolution by a col-be satisfied by qualification testing. laborative effort of the systems engineer and the design,Qualification usually occurs at the component or sub- test, and other organizations. Where appropriate, anal-system level, but could occur at the system level as well. A yses and models are validated and updated as test dataproject deciding against building dedicated qualification are acquired.hardware—and using the flight/operations hardware it- Acceptance verification verifies workmanship, not de-self for qualification purposes—is termed “protoflight.” sign. Test levels are set to stress items so that failures ariseHere, the requirements being verified are typically less from defects in parts, materials, and workmanship. 
Asthan that of qualification levels but higher than that of such, test levels are those anticipated during flight/op-acceptance levels. erations with no additional margin.Qualification verification verifies the soundness of thedesign. Test levels are typically set with some margin Deployment Verificationabove expected flight/operations levels, including the The pre-launch verification stage begins with the arrivalmaximum number of cycles that can be accumulated of the flight/operations article at the launch site and con-during acceptance testing. These margins are set to ad- cludes at liftoff. During this stage, the flight/operationsdress design safety margins in general, and care should article is processed and integrated with the launch ve-be exercised not to set test levels so that unrealistic failure hicle. The launch vehicle could be the shuttle or somemodes are created. other launch vehicle, or the flight/operations article could be part of the launch vehicle. Verifications per-Acceptance Verification formed during this stage ensure that no visible damageAcceptance-stage verification activities provide the as- to the system has occurred during shipment and that thesurance that the flight/operations hardware and soft- system continues to function properly. NASA Systems Engineering Handbook  91
If system elements are shipped separately and integrated at the launch site, testing of the system and system interfaces is generally required. If the system is integrated into a carrier, the interface to the carrier must also be verified. Other verifications include those that occur following integration into the launch vehicle and those that occur at the launch pad; these are intended to ensure that the system is functioning and in its proper launch configuration. Contingency verifications and procedures are developed for any contingencies that can be foreseen to occur during pre-launch and countdown. These contingency verifications and procedures are critical in that some contingencies may require a return of the launch vehicle or flight/operations article from the launch pad to a processing facility.

Operational and Disposal Verification
Operational verification begins in Phase E and provides the assurance that the system functions properly in a relevant environment. These verifications are performed through system activation and operation, rather than through a separate verification activity. Systems that are assembled on orbit must have each interface verified and must function properly during end-to-end testing. Mechanical interfaces that provide fluid and gas flow must be verified to ensure that no leakage occurs and that pressures and flow rates are within specification. Environmental systems must be verified.

Disposal verification provides the assurance that the safe deactivation and disposal of all system products and processes has occurred. The disposal stage begins in Phase F at the appropriate time (i.e., either as scheduled, or earlier in the event of premature failure or accident) and concludes when all mission data have been acquired and the verifications necessary to establish compliance with disposal requirements are finished.

Both operational and disposal verification activities may also include validation assessments, that is, assessments of the degree to which the system accomplished the desired mission goals/objectives.

5.3.2.3 Verification Procedures
Verification procedures provide step-by-step instructions for performing a given verification activity. The procedure could be a test, demonstration, or any other verification-related activity. The procedure to be used is written and submitted for review and approval at the Test Readiness Review (TRR) for the verification activity. (See the Test Readiness Review discussion in Subsection 6.7.2.1.)

Procedures are also used to verify the acceptance of facilities, electrical and mechanical ground support equipment, and special test equipment. The information generally contained in a procedure is as follows, but it may vary according to the activity and test article:
• Nomenclature and identification of the test article or material;
• Identification of test configuration and any differences from the flight/operations configuration;
• Identification of objectives and criteria established for the test by the applicable verification specification;
• Characteristics and design criteria to be inspected or tested, including values, with tolerances, for acceptance or rejection;
• Description, in sequence, of steps and operations to be taken;
• Identification of computer software required;
• Identification of measuring, test, and recording equipment to be used, specifying range, accuracy, and type;
• Credentials showing that required computer test programs/support equipment and software have been verified prior to use with flight/operations hardware;
• Any special instructions for operating data recording equipment or other automated test equipment, as applicable;
• Layouts, schematics, or diagrams showing identification, location, and interconnection of test equipment, test articles, and measuring points;
• Identification of hazardous situations or operations;
• Precautions and safety instructions to ensure safety of personnel and prevent degradation of test articles and measuring equipment;
• Environmental and/or other conditions to be maintained, with tolerances;
• Constraints on inspection or testing;
• Special instructions for nonconformances and anomalous occurrences or results; and
• Specifications for facility, equipment maintenance, housekeeping, quality inspection, and safety and handling requirements before, during, and after the total verification activity.

The written procedure may provide blank spaces for recording results and narrative comments so that the completed procedure can serve as part of the verification report.
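As a purely illustrative aside (not handbook content), the information items above map naturally onto a structured record. The sketch below uses hypothetical field names and shows one simple way to flag incomplete procedures before they are submitted for TRR approval.

```python
from dataclasses import dataclass, field

@dataclass
class VerificationProcedure:
    """Hypothetical record of a verification procedure; field names are assumptions."""
    test_article: str = ""
    test_configuration: str = ""        # including differences from flight configuration
    objectives_and_criteria: str = ""
    steps: list = field(default_factory=list)   # ordered steps and operations
    software_required: str = ""
    measuring_equipment: str = ""       # range, accuracy, and type
    hazards: str = ""
    safety_precautions: str = ""
    environmental_conditions: str = ""  # conditions to be maintained, with tolerances
    nonconformance_instructions: str = ""

    def missing_items(self):
        """Return the names of fields that are still empty."""
        return [name for name, value in vars(self).items() if not value]

if __name__ == "__main__":
    proc = VerificationProcedure(
        test_article="Instrument electronics box (illustrative)",
        objectives_and_criteria="Verify functional performance per assumed spec XYZ-001",
        steps=["Power on", "Run functional script", "Record telemetry"],
    )
    print("Incomplete items before TRR submittal:", proc.missing_items())
```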
The as-run and certified copy of the procedure is maintained as part of the project's archives.

Note: It is important to understand that, over the lifetime of a system, requirements may change or component obsolescence may make a design solution too difficult to produce from either a cost or technical standpoint. In these instances, it is critical to employ the systems engineering design processes at a lower level to ensure that the modified design provides a proper design solution. An evaluation should be made to determine the magnitude of the change required, and the process should be tailored to address the issues appropriately. A modified qualification, verification, and validation process may be required to baseline a new design solution, consistent with the intent previously described for those processes. The acceptance testing will also need to be updated as necessary to verify that the new product has been manufactured and coded in compliance with the revised baselined design.

5.3.2.4 Verification Reports
A verification report should be provided for each analysis and, at a minimum, for each major test activity (such as functional testing, environmental testing, and end-to-end compatibility testing) occurring over long periods of time or separated by other activities. Verification reports may also be needed for each individual test activity, such as functional testing, acoustic testing, vibration testing, and thermal vacuum/thermal balance testing. Verification reports should be completed within a few weeks following a test and should provide evidence of compliance with the verification requirements for which they were conducted.

The verification report should include, as appropriate:
• Verification objectives and the degree to which they were met;
• Description of the verification activity;
• Test configuration and differences from the flight/operations configuration;
• Specific results of each test and each procedure, including annotated tests;
• Specific results of each analysis;
• Test performance data tables, graphs, illustrations, and pictures;
• Descriptions of deviations from nominal results, problems/failures, approved anomaly corrective actions, and retest activity;
• Summary of nonconformance/discrepancy reports, including dispositions;
• Conclusions and recommendations relative to the success of the verification activity;
• Status of support equipment as affected by the test;
• Copy of the as-run procedure; and
• Authentication of test results and authorization of acceptability.
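The report contents listed above also suggest a simple way of tracking compliance status across many verification activities. The sketch below is a hypothetical illustration (names and data are invented): it aggregates report outcomes and lists activities with open discrepancies, the kind of summary a systems engineer might want ahead of a milestone review.

```python
from dataclasses import dataclass, field

@dataclass
class VerificationReport:
    """Hypothetical summary of one verification report; fields are assumptions."""
    activity: str
    objectives_met: bool
    open_discrepancies: list = field(default_factory=list)

def summarize(reports):
    """Return (number passed, number failed, activities with open discrepancies)."""
    passed = sum(1 for r in reports if r.objectives_met)
    failed = len(reports) - passed
    open_items = {r.activity: r.open_discrepancies for r in reports if r.open_discrepancies}
    return passed, failed, open_items

if __name__ == "__main__":
    reports = [
        VerificationReport("functional test", True),
        VerificationReport("acoustic test", True, ["DR-042 bracket resonance"]),
        VerificationReport("thermal vacuum test", False, ["DR-057 heater duty cycle"]),
    ]
    passed, failed, open_items = summarize(reports)
    print(f"passed: {passed}, failed: {failed}")
    print("open discrepancies:", open_items)
```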
5.3.2.5 End-to-End System Testing
The objective of end-to-end testing is to demonstrate interface compatibility and desired total functionality among different elements of a system, between systems, and within the system as a whole. End-to-end tests performed on the integrated ground and flight system include all elements of the payload, its control, stimulation, communications, and data processing to demonstrate that the entire system operates in a manner that fulfills all mission requirements and objectives.

End-to-end testing includes executing complete threads or operational scenarios across multiple configuration items, ensuring that all mission and performance requirements are verified. Operational scenarios are used extensively to ensure that the system (or collection of systems) will successfully execute mission requirements. An operational scenario is a step-by-step description of how the system should operate and interact with its users and its external interfaces (e.g., other systems). Scenarios should be described in a manner that allows engineers to walk through them and gain an understanding of how all the various parts of the system should function and interact, as well as to verify that the system will satisfy the user's needs and expectations. Operational scenarios should be described for all operational modes, mission phases (e.g., installation, startup, typical examples of normal and contingency operations, shutdown, and maintenance), and critical sequences of activities for all classes of users identified. Each scenario should include events, actions, stimuli, information, and interactions as appropriate to provide a comprehensive understanding of the operational aspects of the system.
Figure 5.3-3 presents an example of an end-to-end data flow for a scientific satellite mission. Each arrow in the diagram represents one or more data or control flows between two hardware, software, subsystem, or system configuration items. End-to-end testing verifies that the data flows throughout the multisystem environment are correct, that the system provides the required functionality, and that the outputs at the eventual end points correspond to expected results. Since the test environment is as close an approximation as possible to the operational environment, performance requirements testing is also included. This figure is not intended to show the full extent of end-to-end testing; each system shown would need to be broken down into a further level of granularity for completeness.

[Figure 5.3-3 Example of end-to-end data flow for a scientific satellite mission. The diagram connects stimuli (X-rays, visible, ultraviolet, infrared, microwave), the flight system (Instrument Sets A and B, data capture, uplink and downlink processes, command execution, spacecraft loads), the ground system (mission planning, planning software, command generation, transmission, data archival and analysis), external systems, and the scientific community.]

End-to-end testing is an integral part of the verification and validation of the total system and is an activity that is employed during selected hardware, software, and system phases throughout the life cycle. In comparison with configuration item testing, end-to-end testing addresses each configuration item only down to the level where it interfaces externally to other configuration items, which can be hardware, software, or human based. Internal interfaces (e.g., software subroutine calls, analog-to-digital conversion) of a configuration item are not within the scope of end-to-end testing.

How to Perform End-to-End Testing
End-to-end testing is probably the most significant element of any project verification program, and the test should be designed to satisfy the edict to "test the way we fly." This means assembling the system in its realistic configuration, subjecting it to a realistic environment, and then "flying" it through all of its expected operational modes. For a scientific robotic mission, targets and stimuli should be designed to provide realistic inputs to the scientific instruments. The output signals from the instruments would flow through the satellite data-handling system and then be transmitted to the actual ground station through the satellite communications system. If data are transferred to the ground station through one or more satellite or ground relays (e.g., the Tracking and Data Relay Satellite System (TDRSS)), then those elements must be included as part of the test.

The end-to-end compatibility test encompasses the entire chain of operations that will occur during all mission modes in such a manner as to ensure that the system will fulfill mission requirements. The mission environment should be simulated as realistically as possible, and the instruments should receive stimuli of the kind they will receive during the mission. The Radio Frequency (RF) links, ground station operations, and software functions should be fully exercised. When acceptable simulation facilities are available for portions of the operational systems, they may be used for the test instead of the actual system elements. The specific environments under which the end-to-end test is conducted and the stimuli, payload configuration, RF links, and other system elements to be used must be determined in accordance with the characteristics of the mission.
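To make the idea that "outputs at the eventual end points correspond to expected results" concrete, here is a toy sketch (entirely hypothetical; the stage functions merely stand in for an instrument, the onboard data-handling system, the downlink, and ground processing, and the numbers are invented). It shows the shape of such an end-point check, not a real test harness.

```python
# Each stage is a stand-in for a real configuration item in the data flow.
def instrument(stimulus_photons: float) -> float:
    return 0.8 * stimulus_photons              # assumed detector response

def data_handling(counts: float) -> float:
    return round(counts, 1)                    # assumed onboard quantization

def downlink(frame: float) -> float:
    return frame                               # ideal link for this toy example

def ground_processing(frame: float) -> float:
    return frame / 0.8                         # ground calibration inverts the response

def end_to_end(stimulus_photons: float) -> float:
    return ground_processing(downlink(data_handling(instrument(stimulus_photons))))

if __name__ == "__main__":
    stimulus = 1000.0                          # known stimulus injected at the front end
    expected = stimulus                        # the end point should recover the stimulus
    observed = end_to_end(stimulus)
    tolerance = 0.01 * expected
    verdict = "PASS" if abs(observed - expected) <= tolerance else "FAIL"
    print("observed:", observed, "expected:", expected, verdict)
```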
Although end-to-end testing is probably the most complex test in any system verification program, the same careful preparation is necessary as for any other system-level test. For example, a test lead must be appointed and the test team selected and trained. Adequate time must be allocated for test planning and coordination with the design team. Test procedures and test software must be documented, approved, and placed under configuration control.

Plans, agreements, and facilities must be put in place well in advance of the test to enable end-to-end testing between all components of the system.

Note: This is particularly important when missions are developed with international or external partners.

Once the tests are run, the test results are documented and any discrepancies carefully recorded and reported. All test data must be maintained under configuration control.

Before completing end-to-end testing, the following activities are completed for each configuration item:
• All requirements, interfaces, states, and state transitions of each configuration item should be tested through the exercise of comprehensive test procedures and test cases to ensure that the configuration items are complete and correct.
• A full set of operational range-checking tests should be conducted on software variables to ensure that the software performs as expected within its complete range and fails, or warns, appropriately for out-of-range values or conditions.

End-to-end testing activities include the following:
1. Operational scenarios are created that span all of the following items (during nominal, off-nominal, and stressful conditions) that could occur during the mission:
   • Mission phase, mode, and state transitions;
   • First-time events;
   • Operational performance limits;
   • Fault protection routines;
   • Failure Detection, Isolation, and Recovery (FDIR) logic;
   • Safety properties;
   • Operational responses to transient or off-nominal sensor signals; and
   • Communication uplink and downlink.
2. The operational scenarios are used to test the configuration items, interfaces, and end-to-end performance as early as possible in the configuration items' development life cycle. This typically means simulators or software stubs have to be created to implement a full scenario. It is extremely important to produce a skeleton of the actual system so that full scenarios can be run as soon as possible with both simulated/stubbed-out and actual configuration items.
3. A complete diagram and inventory of all interfaces are documented.
4. Test cases are executed to cover human-human, human-hardware, human-software, hardware-software, software-software, and subsystem-subsystem interfaces and their associated inputs, outputs, and modes of operation (including safing modes).
5. It is strongly recommended that, during end-to-end testing, an operations staff member who has not previously been involved in the testing activities be designated to exercise the system as it is intended to be used, to determine whether it will fail.
6. The test environment should approximate/simulate the actual operational conditions when possible. The fidelity of the test environment should be authenticated. Differences between the test and operational environments should be documented in the test or verification plan.
7. When testing of a requirement is not possible, verification is demonstrated by other means (e.g., model checking, analysis, or simulation). If true end-to-end testing cannot be achieved, then the testing must be done piecemeal and patched together by analysis and simulation. An example would be a system that is assembled on orbit, where the various elements come together for the first time on orbit.
8. When an error in the developed system is identified and fixed, regression testing of the system or component is performed to ensure that the modifications have not caused unintended effects and that the system or component still complies with previously tested, specified requirements.
9. When tests are aborted or a test is known to be flawed (e.g., due to configuration or the test environment), the test should be rerun after the identified problem is fixed.
10. The operational scenarios should be used to formulate the final operations plan.
11. Prior to system delivery, as part of the system qualification testing, test cases should be executed to cover all of the plans documented in the operations plan, in the order in which they are expected to occur during the mission.

End-to-end test documentation includes the following:
• Inclusion of end-to-end testing plans as a part of the test or verification plan.
• A document, matrix, or database under configuration control that traces the end-to-end system test suite to the results. Data that are typically recorded include the test-case identifier, the subsystems/hardware/program sets exercised, the list of requirements being verified, the interfaces exercised, the date, and the outcome of the test (i.e., whether the actual test output met the expected output).
• End-to-end test cases and procedures (including inputs and expected outputs).
• A record of end-to-end problems/failures/anomalies.

End-to-end testing can be integrated with other project testing activities; however, the documentation mentioned in this subsection should be readily extractable for review, status, and assessment.
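The traceability "document, matrix, or database" described above is, at its simplest, a table relating test cases to the requirements and interfaces they exercise and to their outcomes. The sketch below is a minimal, hypothetical version (identifiers and data are invented) that also reports requirements never exercised by any test case, which is one way to spot coverage gaps in the end-to-end suite.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class EndToEndTestRecord:
    """Hypothetical row in an end-to-end test traceability matrix."""
    test_id: str
    requirements: list
    interfaces: list
    run_date: date
    passed: bool

def coverage_gaps(records, all_requirements):
    """Return requirements not exercised by any test case."""
    exercised = {req for rec in records for req in rec.requirements}
    return sorted(set(all_requirements) - exercised)

def failed_tests(records):
    return [rec.test_id for rec in records if not rec.passed]

if __name__ == "__main__":
    all_reqs = ["MR-001", "MR-002", "MR-003", "MR-004"]          # invented identifiers
    records = [
        EndToEndTestRecord("E2E-01", ["MR-001", "MR-002"], ["uplink"], date(2007, 6, 1), True),
        EndToEndTestRecord("E2E-02", ["MR-003"], ["downlink", "ground"], date(2007, 6, 2), False),
    ]
    print("uncovered requirements:", coverage_gaps(records, all_reqs))   # ['MR-004']
    print("failed test cases:", failed_tests(records))                   # ['E2E-02']
```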
5.3.2.6 Modeling and Simulation
For the Product Verification Process, a model is a physical, mathematical, or logical representation of an end product to be verified. Modeling and Simulation (M&S) can be used to augment and support the Product Verification Process and is an effective tool for performing verification, whether in early life-cycle phases or later. Both the facilities and the model itself are developed using the system design and product realization processes.

Note: The development of the physical, mathematical, or logical model includes evaluating whether the model to be used as representative of the system end product was realized according to its design-solution-specified requirements for a model and whether it will be valid for use as a model. In some cases, the model must also be accredited to certify the range of specific uses for which it can be used. Like any other enabling product, budget and time must be planned for creating and evaluating the model to be used to verify the applicable system end product.

The model used, as well as the M&S facility, are enabling products and must use the 17 technical processes (see NPR 7123.1, NASA Systems Engineering Processes and Requirements) for their development and realization (including acceptance by the operational community) to ensure that the model and simulation adequately represent the operational environment and performance of the modeled end product. Additionally, in some cases certification is required before models and simulations can be used.

M&S assets can come from a variety of sources; for example, contractors, other Government agencies, or laboratories can provide models that address specific system attributes.

5.3.2.7 Hardware-in-the-Loop
Fully functional end products, such as an actual piece of hardware, may be combined with models and simulations that simulate the inputs and outputs of other end products of the system. This is referred to as "Hardware-in-the-Loop" (HWIL) testing. HWIL testing links all elements (subsystems and test facilities) together within a synthetic environment to provide a high-fidelity, real-time operational evaluation for the real system or subsystems. The operator can be intimately involved in the testing, and HWIL resources can be connected to other facilities for distributed test and analysis applications. One of the uses of HWIL testing is to get as close to the actual concept of operations as possible to support verification and validation when the operational environment is difficult or expensive to recreate.

During development, this HWIL verification normally takes place in an integration laboratory or test facility. For example, HWIL could be a complete spacecraft in a special test chamber, with its inputs/outputs provided by models that simulate the system in an operational environment. Real-time computers are used to control the spacecraft and subsystems in projected operational scenarios.
Flight dynamics, responding to the commands issued by the guidance and control system hardware/software, are simulated in real time to determine the trajectory and to calculate system flight conditions. HWIL testing verifies that the end product being evaluated meets its interface requirements, properly transforming inputs to required outputs. HWIL modeling can provide a valuable means of testing physical end products lower in the system structure by providing simulated inputs to the end product, or by receiving outputs from the end product to evaluate the quality of those outputs. This tool can be used throughout the life cycle of a program or project. The shuttle program uses HWIL testing to verify software and hardware updates for the control of the shuttle main engines.

Modeling, simulation, and hardware/human-in-the-loop technology, when appropriately integrated and sequenced with testing, provide a verification method at a reasonable cost. This integrated testing process specifically (1) reduces the cost of life-cycle testing, (2) provides significantly more engineering/performance insight into each system evaluated, and (3) reduces test time and lowers project risk. This process also significantly reduces the number of destructive tests required over the life of the product. The integration of M&S into verification testing provides insight into trends and tendencies of system and subsystem performance that might not otherwise be possible due to hardware limitations.
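The HWIL arrangement described above is, in essence, a closed loop: the real-time simulation supplies vehicle states to the guidance and control hardware, and the hardware's commands drive the next simulation step. The sketch below is purely illustrative; the one-dimensional dynamics, the gains, and the software stub standing in for the hardware under test are all assumptions made for the example.

```python
def controller_stub(position: float, velocity: float) -> float:
    """Stand-in for the guidance and control hardware under test (PD law, assumed gains)."""
    kp, kd = 0.8, 1.5
    return -kp * position - kd * velocity      # commanded acceleration

def simulate_hwil(steps: int = 200, dt: float = 0.05):
    """Real-time-style loop: simulated dynamics respond to hardware commands each cycle."""
    position, velocity = 10.0, 0.0             # assumed initial offset from the target state
    for _ in range(steps):
        command = controller_stub(position, velocity)   # hardware (stub) issues a command
        velocity += command * dt                        # simulated flight dynamics propagate
        position += velocity * dt
    return position, velocity

if __name__ == "__main__":
    final_position, final_velocity = simulate_hwil()
    print(f"final position error: {final_position:.3f}, final velocity: {final_velocity:.3f}")
```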
5.4 Product Validation
The Product Validation Process is the second of the verification and validation processes conducted on a realized end product. While verification proves whether "the system was done right," validation proves whether "the right system was done." In other words, verification provides objective evidence that every "shall" statement was met, whereas validation is performed for the benefit of the customers and users to ensure that the system functions in the expected manner when placed in the intended environment. This is achieved by examining the products of the system at every level of the structure.

Validation confirms that realized end products at any position within the system structure conform to their set of stakeholder expectations captured in the ConOps, and it ensures that any anomalies discovered during validation are appropriately resolved prior to product delivery. This section discusses the process activities, types of validation, inputs and outputs, and potential deficiencies.

Distinctions Between Product Verification and Product Validation
From a process perspective, Product Verification and Product Validation may be similar in nature, but the objectives are fundamentally different.

From a customer point of view, the interest is in whether the end product provided will do what the customer intends within the environment of use. It is essential to confirm that the realized product is in conformance with its specifications and design description documentation, because these specifications and documents will establish the configuration baseline of the product, which may have to be modified at a later time. Without a verified baseline and appropriate configuration controls, such later modifications could be costly or cause major performance problems.

When cost-effective and warranted by analysis, various combined tests are used. The expense of validation testing alone can be mitigated by ensuring that each end product in the system structure was correctly realized in accordance with its specified requirements before conducting validation.

5.4.1 Process Description
Figure 5.4-1 provides a typical flow diagram for the Product Validation Process and identifies typical inputs, outputs, and activities to consider in addressing product validation.

5.4.1.1 Inputs
Key inputs to the process are:
• Verified product,
• Validation plan,
• Baselined stakeholder expectations (including ConOps and mission needs and goals), and
• Any enabling products needed to perform the Product Validation Process.
Differences Between Verification and Validation Testing
• Verification Testing: Verification testing relates back to the approved requirements set (such as an SRD) and can be performed at different stages in the product life cycle. Verification testing includes: (1) any testing used to assist in the development and maturation of products, product elements, or manufacturing or support processes; and/or (2) any engineering-type test used to verify the status of technical progress, to verify that design risks are minimized, to substantiate achievement of contract technical performance, and to certify readiness for initial validation testing. Verification tests use instrumentation and measurements and are generally accomplished by engineers, technicians, or operator-maintainer test personnel in a controlled environment to facilitate failure analysis.
• Validation Testing: Validation relates back to the ConOps document. Validation testing is conducted under realistic conditions (or simulated conditions) on any end product to determine the effectiveness and suitability of the product for use in mission operations by typical users, and to evaluate the results of such tests. Testing is the detailed quantifying method of both verification and validation. However, testing is required to validate final end products to be produced and deployed.
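The "done right" versus "right system" distinction can be made concrete with a deliberately small, hypothetical software example (not from the handbook): the component below passes verification against its "shall" statement yet fails a ConOps-style validation scenario, because the way the operator actually intends to use it was never captured in the requirement.

```python
def top_readings(readings, n):
    """Return the n largest readings (implementation choice: ascending order)."""
    return sorted(readings)[-n:]

def verify_shall_statement():
    # Verification: "The function shall return the n largest readings."
    readings = [3.2, 9.9, 1.1, 7.4, 5.0]
    result = top_readings(readings, 3)
    return set(result) == {9.9, 7.4, 5.0}      # True: the requirement is met

def validate_operator_scenario():
    # Validation: ConOps-style scenario -- the operator glances at the first entry
    # of the display expecting the worst-case (largest) reading to appear there.
    readings = [3.2, 9.9, 1.1, 7.4, 5.0]
    display = top_readings(readings, 3)
    return display[0] == 9.9                   # False: "done right," but not the right system

if __name__ == "__main__":
    print("verification passed:", verify_shall_statement())
    print("validation passed:  ", validate_operator_scenario())
```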
[Figure 5.4-1 Product Validation Process: flow diagram showing the inputs (end product to be validated, from the Product Verification Process; stakeholder expectation baseline, from the Configuration Management Process; product validation plan, from the Design Solution Definition and Technical Planning Processes; product validation-enabling products, from existing resources or the Product Transition Process), the activities (prepare to conduct product validation; perform the product validation; analyze the outcomes of the product validation; prepare a product validation report; capture the work products from product validation), and the outputs (validated end product, to the Product Transition Process; product validation results, to the Technical Assessment Process; product validation report and product validation work products, to the Technical Data Management Process).]

5.4.1.2 Process Activities
The Product Validation Process demonstrates that the realized end product satisfies its stakeholder (customer and other interested party) expectations within the intended operational environments, with validation performed by anticipated operators and/or users. The type of validation is a function of the life-cycle phase and the position of the end product within the system structure.

There are five major steps in the validation process: (1) validation planning (prepare to implement the validation plan), (2) validation preparation (prepare for conducting validation), (3) conduct planned validation (perform validation), (4) analyze validation results, and (5) capture the validation work products.

The objectives of the Product Validation Process are:
• To confirm that
  ▶ The right product was realized (the one wanted by the customer),
  ▶ The realized product can be used by intended operators/users, and
  ▶ The Measures of Effectiveness (MOEs) are satisfied.
• To confirm that the realized product fulfills its intended use when operated in its intended environment:
  ▶ Validation is performed for each realized (implemented or integrated) product from the lowest end product in a system structure branch up to the top WBS model end product.
  ▶ Evidence is generated as necessary to confirm that products at each layer of the system structure meet the capability and other operational expectations of the customer/user/operator and other interested parties.
• To ensure that any problems discovered are appropriately resolved prior to delivery of the realized product (if validation is done by the supplier of the product) or prior to integration with other products into a higher level assembled product (if validation is done by the receiver of the product).

Verification and validation events are illustrated as separate processes but, when implemented, can considerably overlap. When cost effective and warranted by analysis, various combined tests are used. However, while from a process perspective verification and validation are similar in nature, their objectives are fundamentally different.

From a customer's point of view, the interest is in whether the end product provided will supply the needed capabilities within the intended environments of use.
The expense of validation testing alone can be mitigated by ensuring that each end product in the system structure was correctly realized in accordance with its specified requirements prior to validation, during verification. It is possible that the system design was not done properly and, even though the verification tests were successful (satisfying the specified requirements), the validation tests would still fail (stakeholder expectations not satisfied). Thus, it is essential that validation, as well as verification, be conducted on lower level products in the system structure so as to catch design failures or deficiencies as early as possible.

Product Validation Planning
Planning to conduct the product validation is a key first step. The type of validation to be used (e.g., analysis, demonstration, inspection, or test) should be established based on the form of the realized end product, the applicable life-cycle phase, cost, schedule, resources available, and the location of the product within the system structure. (See Appendix I for a sample verification and validation plan outline.)

An established set or subset of requirements to be validated should be identified, and the validation plan (an output of the Technical Planning Process, based on design solution outputs) reviewed for any specific procedures, constraints, success criteria, or other validation requirements. The conditions and environment under which the product is to be validated should be established and the validation planned based on the relevant life-cycle phase and the associated success criteria identified. The Decision Analysis Process should be used to help finalize the planning details.

It is important to review the validation plans with relevant stakeholders and to understand the relationship between the context of the validation and the context of use (human involvement). As part of the planning process, validation-enabling products should be identified and their scheduling and/or acquisition initiated.

Procedures should be prepared to conduct validation based on the type (e.g., analysis, inspection, demonstration, or test) planned. These procedures are typically developed during the design phase of the project life cycle and matured as the design matures. Operational and use-case scenarios are thought through so as to explore all possible validation activities to be performed.

Validation Plan and Methods
The validation plan is one of the work products of the Technical Planning Process and is generated during the Design Solution Process to validate the realized product against the baselined stakeholder expectations. This plan can take many forms. The plan describes the total Test and Evaluation (T&E) planning from development of lower end products through higher end products in the system structure and through operational T&E into production and acceptance. It may include the verification and validation plan. (See Appendix I for a sample verification and validation plan outline.)

The types of validation include test, demonstration, inspection, and analysis. While the name of each method is the same as the name of the corresponding verification method, the purpose and intent are quite different.

Types of Validation
• Analysis: The use of mathematical modeling and analytical techniques to predict the suitability of a design to stakeholder expectations based on calculated data or data derived from lower system structure end product validations. It is generally used when a prototype; engineering model; or fabricated, assembled, and integrated product is not available. Analysis includes the use of both modeling and simulation.
• Demonstration: The use of a realized end product to show that a set of stakeholder expectations can be achieved. It is generally used for a basic confirmation of performance capability and is differentiated from testing by the lack of detailed data gathering. Demonstration is done under realistic conditions for any end product within the system structure for the purpose of determining the effectiveness and suitability of the product for use in NASA missions or mission support by typical users and evaluating the results.
• Inspection: The visual examination of a realized end product. It is generally used to validate physical design features or specific manufacturer identification.
• Test: The use of a realized end product to obtain detailed data to validate performance or to provide sufficient information to validate performance through further analysis. Testing is the detailed quantifying method of both verification and validation, but it is required in order to validate final end products to be produced and deployed.
Validation is conducted by the user/operator or by the developer, as determined by NASA Center directives or the contract with the developers. Systems-level validation (e.g., customer T&E and some other types of validation) may be performed by an acquirer testing organization. For those portions of validation performed by the developer, appropriate agreements must be negotiated to ensure that validation proof-of-documentation is delivered with the realized product.

All realized end products, regardless of the source (buy, make, reuse, assemble and integrate) and position in the system structure, should be validated to demonstrate/confirm satisfaction of stakeholder expectations. Variations, anomalies, and out-of-compliance conditions, where detected, are documented along with the actions taken to resolve the discrepancies. Validation is typically carried out in the intended operational environment under simulated or actual operational conditions, not under the controlled conditions usually employed for the Product Verification Process.

Validation can be performed recursively throughout the project life cycle and on a wide variety of product forms. For example:
• Simulated (algorithmic models, virtual reality simulator);
• Mockup (plywood, brassboard, breadboard);
• Concept description (paper report);
• Prototype (product with partial functionality);
• Engineering unit (fully functional but may not be the same form/fit);
• Design validation test units (form, fit, and function may be the same, but they may not have flight parts);
• Qualification unit (identical to the flight unit but may be subjected to extreme environments); or
• Flight unit (end product that is flown).

Any of these product forms may be in any of these states:
• Produced (built, fabricated, manufactured, or coded);
• Reused (modified internal nondevelopmental products or off-the-shelf product); or
• Assembled and integrated (a composite of lower level products).

Note: The final, official validation of the end product should be for a controlled unit. Typically, attempting final validation against operational concepts on a prototype is not acceptable: it is usually completed on a qualification, flight, or other more final, controlled unit.

Outcomes of validation planning include the following:
• The validation type that is appropriate to confirm that the realized product or products conform to stakeholder expectations (based on the form of the realized end product) has been identified.
• Validation procedures are defined based on: (1) the needed procedures for each type of validation selected, (2) the purpose and objective of each procedure step, (3) any pre-test and post-test actions, and (4) the criteria for determining the success or failure of the procedure.
• A validation environment (e.g., facilities, equipment, tools, simulations, measuring devices, personnel, and operational conditions) in which the validation procedures will be implemented has been defined.

Note: In planning for validation, consideration should be given to the extent to which validation testing will be done. In many instances, both nominal and off-nominal operational scenarios should be utilized. Off-nominal testing offers insight into a system's total performance characteristics and often assists in the identification of design issues and of the human-machine interface, training, and procedural changes required to meet mission goals and objectives. Off-nominal testing, as well as nominal testing, should be included when planning for validation.

Product Validation Preparation
To prepare for performing product validation, the appropriate set of expectations against which the validation is to be made should be obtained. Also, the product to be validated (the output of implementation, or of integration and verification), as well as the validation-enabling products and support resources (requirements identified and acquisition initiated by design solution activities) with which validation will be conducted, should be collected.
Examples of Enabling Products and Support Resources for Preparing to Conduct Validation
One of the key tasks in the Product Validation Process, "prepare for conducting validation," is to obtain the necessary enabling products and support resources. Examples include:
• Measurement tools (scopes, electronic devices, probes);
• Embedded test software;
• Test wiring, measurement devices, and telemetry equipment;
• Recording equipment (to capture test results);
• End products in the loop (software, electronics, or mechanics) for hardware-in-the-loop simulations;
• External interfacing products of other systems;
• Actual external interfacing products of other systems (aircraft, vehicles, humans); and
• Facilities and skilled operators.

The validation environment is then prepared (setting up the equipment, sensors, recording devices, etc., that will be involved in the validation conduct) and the validation procedures reviewed to identify and resolve any issues impacting validation.

Outcomes of validation preparation include the following:
• Preparation for doing the planned validation is completed;
• The appropriate set of stakeholder expectations is available and on hand;
• Articles or models to be used for validation, along with the validation product and enabling products, are integrated within the validation environment according to plans and schedules;
• Resources are available according to validation plans and schedules; and
• The validation environment is evaluated for adequacy, completeness, readiness, and integration.

Conduct Planned Product Validation
The act of validating the end product is conducted as spelled out in the validation plans and procedures, and conformance is established to each specified validation requirement. The responsible engineer should ensure that the procedures were followed and performed as planned, that the validation-enabling products were calibrated correctly, and that the data were collected and recorded for the required validation measures.

When poor validation conduct, design, or conditions cause anomalies, the validation should be replanned as necessary, the environment preparation anomalies corrected, and the validation conducted again with improved or correct procedures and resources. The Decision Analysis Process should be used to make decisions for identified issues that require alternative choices to be evaluated and a selection made, or when changes to the validation plans, environment, and/or conduct are needed.

Outcomes of conducting validation include the following:
• A validated product is established, with supporting confirmation that the appropriate results were collected and evaluated to show completion of validation objectives.
• A determination is made as to whether the fabricated/manufactured or assembled and integrated products (including software or firmware builds, as applicable) comply with their respective stakeholder expectations.
• A determination is made that the validated product was appropriately integrated with the validation environment and that the selected stakeholder expectation set was properly validated.
• A determination is made that the product being validated functions together with interfacing products throughout their performance envelopes.

Analyze Product Validation Results
Once the validation activities have been completed, the results are collected and the data are analyzed to confirm that the end product provided will supply the customer's needed capabilities within the intended environments of use, that validation procedures were followed, and that enabling products and supporting resources functioned correctly. The data are also analyzed for quality, integrity, correctness, consistency, and validity, and any unsuitable products or product attributes are identified and reported.

It is important to compare the actual validation results to the expected results and to conduct any required system design and product realization process activities to resolve deficiencies. The deficiencies, along with recommended corrective actions and resolution results, should be recorded, and validation repeated, as required.
Outcomes of analyzing validation results include the following:
• Product deficiencies and/or issues are identified.
• Assurance is provided that appropriate replanning, redefinition of requirements, design, and revalidation have been accomplished for the resolution of anomalies, variations, or out-of-compliance conditions (for problems not caused by poor validation conduct).
• Discrepancy and corrective action reports are generated as needed.
• The validation report is completed.

Validation Notes
The types of validation used depend on the life-cycle phase; the product's location in the system structure; and the cost, schedule, and resources available. Validation of products within a single system model may be conducted together, e.g., an end product with its relevant enabling products, such as operational (a control center, or a radar with its display), maintenance (required tools work with the product), or logistical (launcher or transporter) products.

Each realized product in the system structure should be validated against stakeholder expectations before being integrated into a higher level product.

Reengineering
Based on the results of the Product Validation Process, it could become necessary to reengineer a deficient end product. Care should be taken that correcting a deficiency, or set of deficiencies, does not generate a new issue with a part or performance that had previously operated satisfactorily. Regression testing, a formal process of rerunning previously used acceptance tests that is primarily used for software, is one method to ensure that a change did not affect function or performance that was previously accepted.
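Regression testing, as just described, amounts to keeping the previously passed acceptance cases under configuration control and re-running them after every change. The sketch below is a minimal, hypothetical harness (the function and test cases are invented for illustration) showing that pattern.

```python
# Previously accepted test cases, kept under configuration control (values invented).
ACCEPTED_CASES = {
    (0.0,): 0.0,
    (10.0,): 5.0,
    (-10.0,): -5.0,
}

def scale_command_v1(x: float) -> float:
    """Original, accepted implementation."""
    return 0.5 * x

def scale_command_v2(x: float) -> float:
    """'Fixed' implementation that unintentionally changes accepted behavior."""
    return 0.5 * x + 1.0

def run_regression(func, cases, tol=1e-9):
    """Re-run all accepted cases against a modified implementation; return the failures."""
    return [(args, expected, func(*args))
            for args, expected in cases.items()
            if abs(func(*args) - expected) > tol]

if __name__ == "__main__":
    print("v1 regressions:", run_regression(scale_command_v1, ACCEPTED_CASES))  # none
    print("v2 regressions:", run_regression(scale_command_v2, ACCEPTED_CASES))  # all three cases fail
```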
Validation Deficiencies
Validation outcomes can be unsatisfactory for several reasons. One reason is poor conduct of the validation (e.g., enabling products and supporting resources missing or not functioning correctly, untrained operators, procedures not followed, equipment not calibrated, or improper validation environmental conditions) and failure to control other variables not involved in validating a set of stakeholder expectations. A second reason could be a shortfall in the verification process of the end product. This could create the need for:
• Reengineering end products lower in the system structure that make up the end product found to be deficient (i.e., that failed to satisfy validation requirements) and/or
• Reperforming any needed verification and validation processes.

Other reasons for validation deficiencies (particularly when M&S are involved) may be incorrect and/or inappropriate initial or boundary conditions; poor formulation of the modeled equations or behaviors; the impact of approximations within the modeled equations or behaviors; failure to provide the geometric and physics fidelities needed for credible simulations for the intended purpose; a referent for comparison of poor or unknown uncertainty quantification quality; and/or poor spatial, temporal, and, perhaps, statistical resolution of the physical phenomena used in M&S.

Note: Care should be exercised to ensure that the corrective actions identified to remove validation deficiencies do not conflict with the baselined stakeholder expectations without first coordinating such changes with the appropriate stakeholders.

Capture Product Validation Work Products
Validation work products (inputs to the Technical Data Management Process) take many forms and involve many sources of information. The capture and recording of validation-related data is a very important, but often underemphasized, step in the Product Validation Process. Validation results, deficiencies identified, and corrective actions taken should be captured, as should all relevant results from the application of the Product Validation Process (related decisions, rationale for the decisions made, assumptions, and lessons learned).
Outcomes of capturing validation work products include the following:
• Work products and related information generated while doing Product Validation Process activities and tasks are recorded, i.e., the type of validation conducted, the form of the end product used for validation, the validation procedures used, the validation environments, outcomes, decisions, assumptions, corrective actions, lessons learned, etc. (often captured in a matrix or other tool; see Appendix E).
• Deficiencies (e.g., variations, anomalies, and out-of-compliance conditions) are identified and documented, including the actions taken to resolve them.
• Proof is provided that the realized product is in conformance with the stakeholder expectation set used in the validation.
• A validation report is prepared, including:
  ▶ Recorded validation results/data;
  ▶ The version of the set of stakeholder expectations used;
  ▶ The version and form of the end product validated;
  ▶ The version or standard for tools and equipment used, together with applicable calibration data;
  ▶ The outcome of each validation, including pass or fail declarations; and
  ▶ Discrepancies between expected and actual results.

Note: For systems where only a single deliverable item is developed, the Product Validation Process normally completes acceptance testing of the system. However, for systems with several production units, it is important to understand that continuing verification and validation is not an appropriate approach to use for the items following the first deliverable. Instead, acceptance testing is the preferred means to ensure that subsequent deliverables comply with the baselined design.

5.4.1.3 Outputs
Key outputs of validation are:
• Validated product,
• Discrepancy reports and identified corrective actions, and
• Validation reports.

Success criteria for this process include: (1) objective evidence of performance and the results of each system-of-interest validation activity are documented, and (2) the validation process is not considered or designated as complete until all issues and actions are resolved.

5.4.2 Product Validation Guidance
The following is some generic guidance for the Product Validation Process.

5.4.2.1 Modeling and Simulation
As stressed in the verification process material, M&S is also an important validation tool. M&S usage considerations involve the verification, validation, and certification of the models and simulations.

Model Verification and Validation
• Model Verification: The degree to which a model accurately meets its specifications. Answers "Is it what I intended?"
• Model Validation: The process of determining the degree to which a model is an accurate representation of the real world from the perspective of the intended uses of the model.
• Model Certification: Certification of the model for use for a specific purpose. Answers "Should I endorse this model?"

5.4.2.2 Software
Software verification is a software engineering activity that demonstrates that the software products meet specified requirements. Methods of software verification include peer reviews/inspections of software engineering products for the discovery of defects, verification of requirements by the use of simulations, black box and white box testing techniques, analyses of requirement implementation, and software product demonstrations.

Software validation is a software engineering activity that demonstrates that the as-built software product or software product component satisfies its intended use in its intended environment. Methods of software validation include peer reviews/inspections of software product and component behavior in a simulated environment, acceptance testing against mathematical models, analyses, and operational environment demonstrations. The project's approach to software verification and validation is documented in the software development plan.
Specific Agency-level requirements for software verification and validation, peer reviews (see Appendix N), testing, and reporting are contained in NPR 7150.2, NASA Software Requirements.
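Of the verification methods listed above, black box testing exercises a component purely through its specified inputs and outputs. The fragment below is a generic, hypothetical illustration using Python's built-in unittest module; the requirement and the clamp function are invented for the example and are not drawn from any NASA project.

```python
import unittest

def clamp_command(value: float, lower: float, upper: float) -> float:
    """Component under test: limit a commanded value to its allowable range."""
    return max(lower, min(upper, value))

class BlackBoxClampTests(unittest.TestCase):
    """Black box tests: only the specified input/output behavior is exercised."""

    def test_within_range_passes_through(self):
        self.assertEqual(clamp_command(2.5, 0.0, 5.0), 2.5)

    def test_upper_bound_enforced(self):
        self.assertEqual(clamp_command(9.0, 0.0, 5.0), 5.0)

    def test_lower_bound_enforced(self):
        self.assertEqual(clamp_command(-3.0, 0.0, 5.0), 0.0)

if __name__ == "__main__":
    unittest.main()
```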
The rigor and techniques used to verify and validate software depend upon the software classification (which is different from project and payload classifications). A complex project will typically contain multiple systems and subsystems having different software classifications. It is important for the project to classify its software and to plan verification and validation approaches that appropriately address the risks associated with each class.

In some instances, NASA management may select a project for additional independent software verification and validation by the NASA Software Independent Verification and Validation (IV&V) Facility in Fairmont, West Virginia. In this case, a Memorandum of Understanding (MOU) and a separate software IV&V plan will be created and implemented.
5.5 Product Transition
The Product Transition Process is used to transition a verified and validated end product, generated by product implementation or product integration, to the customer at the next level in the system structure for integration into a higher level end product or, for the top-level end product, to the intended end user. The form of the product transitioned is a function of the product-line life-cycle phase success criteria and the location within the system structure of the WBS model in which the end product exists.

Product transition occurs during all phases of the life cycle. During the early phases, the technical team's products are documents, models, studies, and reports. As the project moves through the life cycle, these paper or soft products are transformed through the implementation and integration processes into hardware and software solutions that meet the stakeholder expectations. These processes are repeated with different degrees of rigor throughout the life cycle. The Product Transition Process includes product transitions from one level of the system architecture upward. It is the last of the product realization processes and is a bridge from one level of the system to the next higher level.

The Product Transition Process is the key bridge from one activity, subsystem, or element to the overall engineered system. As system development nears completion, the Product Transition Process is again applied for the end product, but with much more rigor, since the transition objective is now delivery of the system-level end product to the actual end user. Depending on the kind or category of system developed, this may involve a Center or the Agency and may impact thousands of individuals storing, handling, and transporting multiple end products; preparing user sites; training operators and maintenance personnel; and installing and sustaining the product, as applicable. Examples are transitioning the external tank, solid rocket boosters, and orbiter to Kennedy Space Center (KSC) for integration and flight.
5.5.1 Process Description
Figure 5.5-1 provides a typical flow diagram for the Product Transition Process and identifies typical inputs, outputs, and activities to consider in addressing product transition.

[Figure 5.5-1 Product Transition Process: flow diagram showing the inputs (end product to be transitioned, from the Product Validation Process; documentation to accompany the delivered end product, from the Technical Data Management Process; product-transition-enabling products, from existing resources or the Product Transition Process for enabling product realization), the activities (prepare to conduct product transition; evaluate the end product, personnel, and enabling product readiness for product transition; prepare the end product for transition; transition the end product to the customer with required documentation, based on the type of transition required; prepare sites, as required, where the end product will be stored, assembled, integrated, installed, used, and/or maintained; capture product transition work products), and the outputs (delivered end product with applicable documentation, to the end user or to the Product Integration Process in a recursive loop; product transition work products, to the Technical Data Management Process; realized enabling products, to the product implementation, integration, verification, validation, and transition processes).]
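The "evaluate readiness" activity in the flow above is often implemented as a simple gate checklist covering the product, its documentation, packaging and handling provisions, the receiving site, and personnel. The sketch below is a hypothetical illustration (item names are assumptions, not a NASA-prescribed format) of how such a gate might be recorded.

```python
def evaluate_transition_readiness(checklist: dict) -> list:
    """Return the checklist items that are not yet satisfied (hypothetical gate check)."""
    return [item for item, ready in checklist.items() if not ready]

if __name__ == "__main__":
    # Illustrative items only; a real project would derive these from its transition plan.
    checklist = {
        "end product verified and validated": True,
        "accompanying documentation complete": True,
        "packaging, containers, and handling equipment available": False,
        "receiving site prepared": True,
        "operations and maintenance personnel trained": False,
    }
    open_items = evaluate_transition_readiness(checklist)
    print("Ready for transition" if not open_items else f"Not ready; open items: {open_items}")
```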
5.5.1.1 Inputs
Inputs to the Product Transition Process depend primarily on the transition requirements, the product being transitioned, the form of the product transition taking place, and where the product is transitioning to. Typical inputs are shown in Figure 5.5-1 and described below.
• The End Product or Products To Be Transitioned (from the Product Validation Process): The product to be transitioned can take several forms. It can be a subsystem component, a system assembly, or a top-level end product. It can be hardware or software. It can be newly built, purchased, or reused. A product can transition from a lower system product to a higher one by being integrated with other transitioned products; this process may be repeated until the final end product is achieved. Each succeeding transition requires unique input considerations when preparing the validated product for transition to the next level. Early phase products can take the form of information or data generated from basic or applied research using analytical or physical models and often are in paper or electronic form; in fact, the end product for many NASA research projects or science activities is a report, paper, or even an oral presentation.

Special consideration must be given to safety, including clearly identifiable tags and markings that identify the use of hazardous materials, special handling instructions, and storage requirements.
• Product-Transition-Enabling Products, Including Packaging Materials; Containers; Handling Equipment; and Storage, Receiving, and Shipping Facilities (from Existing Resources or the Product Transition Process for Enabling Product Realization): Product-transition-enabling products may be required to facilitate the implementation, integration, evaluation, transition, training, operations, support, and/or retirement of the transitioned product at its next higher level or for the transition of the final end product.