NASA Systems Engineering Handbook
SP-6105
June 1995
Foreword

In an attempt to demonstrate the potential dangers of relying on purely "cookbook" logical thinking, the mathematician/philosopher Carl Hempel posed a paradox. If we want to prove the hypothesis "All ravens are black," we can look for many ravens and determine if they all meet our criteria. Hempel suggested changing the hypothesis to its logical contrapositive (a rewording with identical meaning) would be easier. The new hypothesis becomes: "All nonblack things are nonravens." This transformation, supported by the laws of logical thinking, makes it much easier to test, but unfortunately is ridiculous. Hempel's raven paradox points out the importance of common sense and proper background exploration, even in subjects as intricate as systems engineering.

In 1989, when the initial work on the NASA Systems Engineering Handbook was started, there were many who were concerned about the dangers of a document that purported to teach a generic NASA approach to systems engineering. Like Hempel's raven, there were concerns over the potential of producing a "cookbook" which offered the illusion of logic while ignoring experience and common sense. From the tremendous response to the initial (September 1992) draft of the handbook (in terms of both requests for copies and praise for the product), it seems early concerns were largely unfounded and that there is a strong need for this handbook.

The document you are holding represents what was deemed best in the original draft and updates information necessary in light of recommendations and changes within NASA. This handbook represents some of the best thinking from across NASA. Many experts influenced its outcome, and consideration was given to each idea and criticism. It truly represents a NASA-wide product and one which furnishes a good overview of NASA systems engineering.

The handbook is intended to be an educational guide written from a NASA perspective. Individuals who take systems engineering courses are the primary audience for this work. Working professionals who require a guidebook to NASA systems engineering represent a secondary audience. It was discovered during the review of the draft document that interest in this work goes far beyond NASA. Requests for translating this work have come from international sources, and we have been told that the draft handbook is being used in university courses on the subject. All of this may help explain why copies of the original draft handbook have been in short supply.

The main purposes of the NASA Systems Engineering Handbook are to provide: 1) useful information to system engineers and project managers, 2) a generic description of NASA systems engineering which can be supported by center-specific documents, 3) a common language and perspective of the systems engineering process, and 4) a reference work which is consistent with NMI 7120.4/NHB 7120.5. The handbook approaches systems engineering from a systems perspective, starting at mission needs and conceptual studies through operations and disposal.

While it would be impossible to thank all of the people directly involved, it is essential to note the efforts of Dr. Robert Shishko of the Jet Propulsion Laboratory. Bob was largely responsible for ensuring the completion of this effort. His technical expertise and nonstop determination were critical factors to ensure the success of this project.

Mihaly Csikszentmihalyi defined an optimal experience as one where there is "a sense of exhilaration, a deep sense of enjoyment that is long cherished and becomes a landmark in memory of what life should be like." I am not quite sure if the experience which produced this handbook can be described exactly this way, yet the sentiment seems reasonably close.

—Dr. Edward J. Hoffman
Program Manager, NASA Headquarters
Spring 1995
Foreword to the September 1992 Draft

When NASA began to sponsor agency-wide classes in systems engineering, it was to a doubting audience. Top management was quick to express concern. As a former Deputy Administrator stated, "How can you teach an agency-wide systems engineering class when we cannot even agree on how to define it?" Good question, and one I must admit caused us considerable concern at that time. The same doubt continued up until the publication of this handbook.

The initial systems engineering education conference was held in January 1989 at the Johnson Space Center. A number of representatives from other Centers attended this meeting and it was decided then that we needed to form a working group to support the development of appropriate and tailored systems engineering courses. At this meeting the representatives from Marshall Space Flight Center (MSFC) expressed a strong desire to document their own historic systems engineering process before any more of the key players left the Center. Other Centers also expressed a desire, if not as urgent as MSFC, to document their processes.

It was thought that the best way to reflect the totality of the NASA systems engineering process and to aid in developing the needed training was to prepare a top-level (Level 0) document that would contain a broad definition of systems engineering, a broad process outline, and typical tools and procedures. In general, we wanted a top-level overview of NASA systems engineering. To this document would be appended each Center's unique systems engineering manual. The group was well aware of the diversity each Center may have, but agreed that this approach would be quite acceptable.

The next step, and the most difficult in this arduous process, was to find someone to head this yet-to-be-formed working group. Fortunately for NASA, Donna [Pivirotto] Shirley of the Jet Propulsion Laboratory stepped up to the challenge. Today, through her efforts, those of the working group, and the skilled and dedicated authors, we have a unique and possibly a historic document.

During the development of the manual we decided to put in much more than may be appropriate for a Level 0 document, with the idea that we could always refine the document later. It was more important to capture the knowledge when we could in order to better position ourselves for later dissemination. If there is any criticism, it may be the level of detail contained in the manual, but this detail is necessary for young engineers. The present document does appear to serve as a good instructional guide, although it does go well beyond its original intent.

As such, this present document is to be considered a next-to-final draft. Your comments, corrections and suggestions are welcomed, valued and appreciated. Please send your remarks directly to Robert Shishko, NASA Systems Engineering Working Group, NASA/Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109-8099.

—Francis T. Hoban
Program Manager, NASA Headquarters
Preface

This handbook was written to bring the fundamental concepts and techniques of systems engineering to NASA personnel in a way that recognizes the nature of NASA systems and the NASA environment. The authors readily acknowledge that this goal will not be easily realized. One reason is that not everyone agrees on what systems engineering is, nor on how to do it. There are legitimate differences of opinion on basic definitions, content, and techniques. Systems engineering itself is a broad subject, with many different aspects. This initial handbook does not (and cannot) cover all of them.

The content and style of this handbook show a teaching orientation. This handbook was meant to accompany formal NASA training courses on systems engineering, not to be a stand-alone, comprehensive view of NASA systems engineering. Systems engineering, in the authors' opinions, cannot be learned simply by starting at a well-defined beginning and proceeding seamlessly from one topic to another. Rather, it is a field that draws from many engineering disciplines and other intellectual domains. The boundaries are not always clear, and there are many interesting intellectual offshoots. Consequently, this handbook was designed to be a top-level overview of systems engineering as a discipline; brevity of exposition and the provision of pointers to other books and documents for details were considered important guidelines.

The material for this handbook was drawn from many different sources, including field center systems engineering handbooks, NASA management instructions (NMIs) and NASA handbooks (NHBs), field center briefings on systems engineering processes, non-NASA systems engineering textbooks and guides, and three independent systems engineering courses taught to NASA audiences. The handbook uses this material to provide only top-level information and suggestions for good systems engineering practices; it is not intended in any way to be a directive.

By design, the handbook covers some topics that are also taught in Project Management/Program Control (PM/PC) courses, reflecting the unavoidable connectedness of these three domains. The material on the NASA project life cycle is drawn from the work of the NASA-wide Systems Engineering Working Group (SEWG), which met periodically in 1991 and 1992, and its successor, the Systems Engineering Process Improvement Task (SEPIT) team, which met in 1993 and 1994. This handbook's project life cycle is identical to that promulgated in the SEPIT report, NASA Systems Engineering Process for Programs and Projects, JSC-49040. The SEPIT project life cycle is intentionally consistent with that in NMI 7120.4/NHB 7120.5 (Management of Major System Programs and Projects), but provides more detail on its systems engineering aspects.

This handbook consists of five core chapters: (1) systems engineering's intellectual process, (2) the NASA project life cycle, (3) management issues in systems engineering, (4) systems analysis and modeling issues, and (5) engineering specialty integration. These core chapters are supplemented by appendices, which can be expanded to accommodate any number of templates and examples to illustrate topics in the core chapters. The handbook makes extensive use of sidebars to define, refine, illustrate, and extend concepts in the core chapters without diverting the reader from the main argument. There are no footnotes; sidebars are used instead. The structure of the handbook also allows for additional sections and chapters to be added at a later date.

Finally, the handbook should be considered only a starting point. Both NASA as a systems engineering organization, and systems engineering as a discipline, are undergoing rapid evolution. Over the next five years, many changes will no doubt occur, and some are already in progress. NASA, for instance, is moving toward implementation of the standards in the International Standards Organization (ISO) 9000 family, which will affect many aspects of systems engineering. In systems engineering as a discipline, efforts are underway to merge existing systems engineering standards into a common American National Standard on the Engineering of Systems, and then ultimately into an international standard. These factors should be kept in mind when using this handbook.
Acknowledgements

This work was conducted under the overall direction of Mr. Frank Hoban, and then Dr. Ed Hoffman, NASA Headquarters/Code FT, under the NASA Program/Project Management Initiative (PPMI). Dr. Shahid Habib, NASA Headquarters/Code QW, provided additional financial support to the NASA-wide Systems Engineering Working Group and Systems Engineering Process Improvement Task team. Many individuals, both in and out of NASA, contributed material, reviewed various drafts, or otherwise provided valuable input to this handbook. Unfortunately, the lists below cannot include everyone who shared their ideas and experience.

Principal Contributors:
Dr. Robert Shishko, NASA/Jet Propulsion Laboratory
Mr. Robert G. Chamberlain, P.E., NASA/Jet Propulsion Laboratory

Other Contributors:
Mr. Robert Aster, NASA/Jet Propulsion Laboratory
Ms. Beth Bain, Lockheed Missiles and Space Company
Mr. Guy Beutelschies, NASA/Jet Propulsion Laboratory
Dr. Vincent Bilardo, NASA/Ames Research Center
Ms. Renee I. Cox, NASA/Marshall Space Flight Center
Dr. Kevin Forsberg, Center for Systems Management
Dr. Walter E. Hammond, Sverdrup Technology, Inc.
Mr. Patrick McDuffee, NASA/Marshall Space Flight Center
Mr. Harold Mooz, Center for Systems Management
Ms. Mary Beth Murrill, NASA/Jet Propulsion Laboratory
Dr. Les Pieniazek, Lockheed Engineering and Sciences Co.
Mr. Lou Polaski, Center for Systems Management
Mr. Neil Rainwater, NASA/Marshall Space Flight Center
Mr. Tom Rowell, NASA/Marshall Space Flight Center
Ms. Donna Shirley, NASA/Jet Propulsion Laboratory
Mr. Mark Sluka, Lockheed Engineering and Sciences Co.
Mr. Ron Wade, Center for Systems Management

SEPIT team members (not already mentioned):
Mr. Randy Fleming, Lockheed Missiles and Space Co.
Dr. Frank Fogle, NASA/Marshall Space Flight Center
Mr. Tony Fragomeni, NASA/Goddard Space Flight Center
Dr. Shahid Habib, NASA Headquarters/Code QW
Mr. Henning Krome, NASA/Marshall Space Flight Center
Mr. Ray Lugo, NASA/Kennedy Space Center
Mr. William C. Morgan, NASA/Johnson Space Center
Dr. Mike Ryschkewitsch, NASA/Goddard Space Flight Center
Mr. Gerry Sadler, NASA/Lewis Research Center
Mr. Dick Smart, Sverdrup Technology, Inc.
Dr. James Wade, NASA/Johnson Space Center
Mr. Milam Walters, NASA/Langley Research Center
Mr. Don Woodruff, NASA/Marshall Space Flight Center

Other Sources:
Mr. Dave Austin, NASA Headquarters/Code DSS
Mr. Phillip R. Barela, NASA/Jet Propulsion Laboratory
Mr. J.W. Bott, NASA/Jet Propulsion Laboratory
Dr. Steven L. Cornford, NASA/Jet Propulsion Laboratory
Ms. Sandra Dawson, NASA/Jet Propulsion Laboratory
Dr. James W. Doane, NASA/Jet Propulsion Laboratory
Mr. William Edminston, NASA/Jet Propulsion Laboratory
Mr. Charles C. Gonzales, NASA/Jet Propulsion Laboratory
Dr. Jairus Hihn, NASA/Jet Propulsion Laboratory
Dr. Ed Jorgenson, NASA/Jet Propulsion Laboratory
Mr. Richard V. Morris, NASA/Jet Propulsion Laboratory
Mr. Tom Weber, Rockwell International/Space Systems

Reviewers:
Mr. Robert C. Baumann, NASA/Goddard Space Flight Center
Mr. Chris Carl, NASA/Jet Propulsion Laboratory
Dr. David S.C. Chu, Assistant Secretary of Defense/Program Analysis and Evaluation
Mr. M.J. Cork, NASA/Jet Propulsion Laboratory
Mr. James R. French, JRF Engineering Services
Mr. John L. Gasery, Jr., NASA/Stennis Space Center
Mr. Frederick D. Gregory, NASA Headquarters/Code Q
Mr. Don Hedgepeth, NASA/Langley Research Center
Mr. Jim Hines, Rockwell International/Space Systems
Mr. Keith L. Hudkins, NASA Headquarters/Code MZ
Dr. Jerry Lake, Defense Systems Management College
Mr. T.J. Lee, NASA/Marshall Space Flight Center
Mr. Jim Lloyd, NASA Headquarters/Code QS
Mr. Woody Lovelace, NASA/Langley Research Center
Dr. Brian Mar, Department of Civil Engineering, University of Washington
Dr. Ralph F. Miles, Jr., Center for Safety and System Management, University of Southern California
Dr. Richard A. Montgomery, Montgomery and Associates
Mr. Bernard G. Morais, Synergistic Applications, Inc.
Mr. Ron Moyer, NASA Headquarters/Code QW
Mr. Rudy Naranjo, NASA Headquarters/Code MZ
Mr. Raymond L. Nieder, NASA/Johnson Space Center
Mr. James E. Pearson, United Technologies Research Center
Mr. Leo Perez, NASA Headquarters/Code QP
Mr. David Pine, NASA Headquarters/Code B
Mr. Glen D. Ritter, NASA/Marshall Space Flight Center
Dr. Arnold Ruskin, NASA/Jet Propulsion Laboratory and University of California at Los Angeles
Mr. John G. Shirlaw, Massachusetts Institute of Technology
Mr. Richard E. Snyder, NASA/Langley Research Center
Mr. Don Sova, NASA Headquarters/Code AE
Mr. Marvin A. Stibich, USAF/Aeronautical Systems Center
Mr. Lanny Taliaferro, NASA/Marshall Space Flight Center

Editorial and graphics support:
Mr. Stephen Brewster, NASA/Jet Propulsion Laboratory
Mr. Randy Cassingham, NASA/Jet Propulsion Laboratory
Mr. John Matlock, NASA/Jet Propulsion Laboratory

—Dr. Robert Shishko
Task Manager, NASA/Jet Propulsion Laboratory
1 Introduction

1.1 Purpose

This handbook is intended to provide information on systems engineering that will be useful to NASA system engineers, especially new ones. Its primary objective is to provide a generic description of systems engineering as it should be applied throughout NASA. Field centers' handbooks are encouraged to provide center-specific details of implementation.

For NASA system engineers to choose to keep a copy of this handbook at their elbows, it must provide answers that cannot be easily found elsewhere. Consequently, it provides NASA-relevant perspectives and NASA-particular data. NASA management instructions (NMIs) are referenced when applicable.

This handbook's secondary objective is to serve as a useful companion to all of the various courses in systems engineering that are being offered under NASA's auspices.

1.2 Scope and Depth

The subject matter of systems engineering is very broad. The coverage in this handbook is limited to general concepts and generic descriptions of processes, tools, and techniques. It provides information on good systems engineering practices, and pitfalls to avoid. There are many textbooks that can be consulted for in-depth tutorials.

This handbook describes systems engineering as it should be applied to the development of major NASA systems. Systems engineering deals both with the system being developed (the product system) and the system that does the developing (the producing system). Consequently, the handbook's scope properly includes systems engineering functions regardless of whether they are performed by an in-house systems engineering organization, a program/project office, or a system contractor.

While many of the producing system's design features may be implied by the nature of the tools and techniques of systems engineering, it does not follow that institutional procedures for their application must be uniform from one NASA field center to another.

Selected Systems Engineering Reading
(See the Bibliography for full reference data and further reading suggestions.)

Fundamentals of Systems Engineering
Systems Engineering and Analysis (2nd ed.), B.S. Blanchard and W.J. Fabrycky
Systems Engineering, Andrew P. Sage
An Introduction to Systems Engineering, J.E. Armstrong and Andrew P. Sage

Management Issues in Systems Engineering
Systems Engineering, EIA/IS-632
IEEE Trial-Use Standard for Application and Management of the Systems Engineering Process, IEEE Std 1220-1994
Systems Engineering Management Guide, Defense Systems Management College
System Engineering Management, B.S. Blanchard
Systems Engineering Methods, Harold Chestnut
Systems Concepts, Ralph Miles, Jr. (editor)
Successful Systems Engineering for Engineers and Managers, Norman B. Reilly

Systems Analysis and Modeling
Systems Engineering Tools, Harold Chestnut
Systems Analysis for Engineers and Managers, R. de Neufville and J.H. Stafford
Cost Considerations in Systems Analysis, Gene H. Fisher

Space Systems Design and Operations
Space Vehicle Design, Michael D. Griffin and James R. French
Space Mission Analysis and Design (2nd ed.), Wiley J. Larson and James R. Wertz (editors)
Design of Geosynchronous Spacecraft, Brij N. Agrawal
2 Fundamentals of Systems Engineering

2.1 Systems, Supersystems, and Subsystems

A system is a set of interrelated components which interact with one another in an organized fashion toward a common purpose. The components of a system may be quite diverse, consisting of persons, organizations, procedures, software, equipment, and/or facilities. The purpose of a system may be as humble as distributing electrical power within a spacecraft or as grand as exploring the surface of Mars.

Every system exists in the context of a broader supersystem, i.e., a collection of related systems. It is in that context that the system must be judged. Thus, managers in the supersystem set system policies, establish system objectives, determine system constraints, and define what costs are relevant. They often have oversight authority over system design and operations decisions.

Most NASA systems are sufficiently complex that their components are subsystems, which must function in a coordinated way for the system to accomplish its goals. From the point of view of systems engineering, each subsystem is a system in its own right—that is, policies, requirements, objectives, and which costs are relevant are established at the next level up in the hierarchy. Spacecraft systems often have such subsystems as propulsion, attitude control, telecommunications, and power. In a large project, the subsystems are likely to be called "systems". The word system is also used within NASA generically, as defined in the first paragraph above. In this handbook, "system" is generally used in its generic form.

The NASA management instruction for the acquisition of "major" systems (NMI 7120.4) defines a program as "a related series of undertakings that continue over a period of time (normally years), which are designed to pursue, or are in support of, a focused scientific or technical goal, and which are characterized by: design, development, and operations of systems." Programs are managed by NASA Headquarters, and may encompass several projects.

In the NASA context, a project encompasses the design, development, and operation of one or more systems, and is generally managed by a NASA field center. Headquarters' management concerns include not only the engineering of the systems, but all of the other activities required to achieve the desired end. These other activities include explaining the value of programs and projects to Congress and enlisting international cooperation. The term mission is often used for a program or project's purpose; its connotations of fervor make it particularly suitable for such political activities, where the emotional content of the term is a desirable factor. In everyday conversation, the terms "project," "mission," and "system" are often used interchangeably; while imprecise, this rarely causes difficulty.

The Technical Sophistication Required to do Systems Engineering Depends on the Project
• The system's goals may be simple and easy to identify and measure—or they may be technically complicated, requiring a great deal of insight about the environment or technology within or with which the system must operate.
• The system may have a single goal—or multiple goals. There are techniques available for determining the relative values of multiple goals—but sometimes goals are truly incommensurate and unquantifiable.
• The system may have users representing factions with conflicting objectives. When there are conflicting objectives, negotiated compromises will be required.
• Alternative system design concepts may be abundant—or they may require creative genius to develop.
• A "back-of-the-envelope" computation may be satisfactory for prediction of how well the alternative design concepts would do in achievement of the goals—or credibility may depend upon construction and testing of hardware or software models.

A Hierarchical System Terminology
The following hierarchical sequence of terms for successively finer resolution was adopted by the NASA-wide Systems Engineering Working Group (SEWG) and its successor, the Systems Engineering Process Improvement Task (SEPIT) team:
System
Segment
Element
Subsystem
Assembly
Subassembly
Part
Particular projects may need a different sequence of layers—an instrument may not need as many layers, while a broad initiative may need to distinguish more layers. Projects should establish their own terminology.
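This layered terminology is, in effect, a tree. The short Python sketch below is not from the handbook; it simply illustrates one way such a product breakdown might be recorded, using hypothetical names (ProductNode and the example spacecraft decomposition) and allowing layers to be skipped or tailored, as the sidebar notes.

```python
# Illustrative sketch only: the SEWG/SEPIT layers represented as a recursive
# product breakdown. All instance names below are hypothetical.
from dataclasses import dataclass, field
from typing import List

LAYERS = ["System", "Segment", "Element", "Subsystem",
          "Assembly", "Subassembly", "Part"]

@dataclass
class ProductNode:
    name: str
    layer: str                      # one of LAYERS; projects may tailor this
    children: List["ProductNode"] = field(default_factory=list)

    def add(self, child: "ProductNode") -> "ProductNode":
        # A child should sit at the same or a finer layer than its parent;
        # intermediate layers may be skipped.
        assert LAYERS.index(child.layer) >= LAYERS.index(self.layer)
        self.children.append(child)
        return child

def print_tree(node: ProductNode, indent: int = 0) -> None:
    print("  " * indent + f"{node.layer}: {node.name}")
    for child in node.children:
        print_tree(child, indent + 1)

# Hypothetical decomposition of a spacecraft system
spacecraft = ProductNode("Spacecraft", "System")
bus = spacecraft.add(ProductNode("Spacecraft Bus", "Segment"))
power = bus.add(ProductNode("Electrical Power", "Subsystem"))
power.add(ProductNode("Solar Array", "Assembly"))

print_tree(spacecraft)
```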
2.2 Definition of Systems Engineering

Systems engineering is a robust approach to the design, creation, and operation of systems. In simple terms, the approach consists of identification and quantification of system goals, creation of alternative system design concepts, performance of design trades, selection and implementation of the best design, verification that the design is properly built and integrated, and post-implementation assessment of how well the system meets (or met) the goals. The approach is usually applied repeatedly and recursively, with several increases in the resolution of the system baselines (which contain requirements, design details, verification procedures and standards, cost and performance estimates, and so on).

Systems engineering is performed in concert with system management. A major part of the system engineer's role is to provide information that the system manager can use to make the right decisions. This includes identification of alternative design concepts and characterization of those concepts in ways that will help the system managers first discover their preferences, then be able to apply them astutely. An important aspect of this role is the creation of system models that facilitate assessment of the alternatives in various dimensions such as cost, performance, and risk.

Application of this approach includes performance of some delegated management duties, such as maintaining control of the developing configuration and overseeing the integration of subsystems.

Systems Engineering per EIA/IS-632
Systems engineering is "an interdisciplinary approach encompassing the entire technical effort to evolve and verify an integrated and life-cycle balanced set of system people, product, and process solutions that satisfy customer needs. Systems engineering encompasses (a) the technical efforts related to the development, manufacturing, verification, deployment, operations, support, disposal of, and user training for, system products and processes; (b) the definition and management of the system configuration; (c) the translation of the system definition into work breakdown structures; and (d) development of information for management decision making."

Cost
The cost of a system is the foregone value of the resources needed to design, build, and operate it. Because resources come in many forms—work performed by NASA personnel and contractors, materials, energy, and the use of facilities and equipment such as wind tunnels, factories, offices, and computers—it is often convenient to express these values in common terms by using monetary units (such as dollars).

Effectiveness
The effectiveness of a system is a quantitative measure of the degree to which the system's purpose is achieved. Effectiveness measures are usually very dependent upon system performance. For example, launch vehicle effectiveness depends on the probability of successfully injecting a payload onto a usable trajectory. The associated system performance attributes include the mass that can be put into a specified nominal orbit, the trade between injected mass and launch velocity, and launch availability.

Cost-Effectiveness
The cost-effectiveness of a system combines both the cost and the effectiveness of the system in the context of its objectives. While it may be necessary to measure either or both of those in terms of several numbers, it is cost-effectiveness that provides the basis for comparing alternative designs.

2.3 Objective of Systems Engineering

The objective of systems engineering is to see to it that the system is designed, built, and operated so that it accomplishes its purpose in the most cost-effective way possible, considering performance, cost, schedule, and risk.

A cost-effective system must provide a particular kind of balance between effectiveness and cost: the system must provide the most effectiveness for the resources expended or, equivalently, it must be the least expensive for the effectiveness it provides. This condition is a weak one because there are usually many designs that meet the condition. Think of each possible design as a point in the tradeoff space between effectiveness and cost. A graph plotting the maximum achievable effectiveness of designs available with current technology as a function of cost would in general yield a curved line such as the one shown in Figure 1. (In the figure, all the dimensions of effectiveness are represented by the ordinate and all the dimensions of cost by the abscissa.) In other words, the curved line represents the envelope of the currently available technology in terms of cost-effectiveness.

Points above the line cannot be achieved with currently available technology—that is, they do not represent feasible designs. (Some of those points may be feasible in the future when further technological advances have been made.) Points inside the envelope are feasible, but are dominated by designs whose combined cost and effectiveness lie on the envelope. Designs represented by points on the envelope are called cost-effective (or efficient or non-dominated) solutions.

Figure 1—The Enveloping Surface of Non-dominated Designs.
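The ideas of dominance and the cost-effectiveness envelope can be made concrete with a small computation. The following sketch (Python; the candidate designs and their numbers are hypothetical, not handbook data) filters a set of (cost, effectiveness) points down to the non-dominated set that would lie on an envelope like the one in Figure 1.

```python
# Minimal sketch: identify non-dominated (cost-effective) designs from a set of
# candidate (cost, effectiveness) points. Design names and values are hypothetical.
from typing import Dict, Tuple

designs: Dict[str, Tuple[float, float]] = {
    # name: (cost in $M, effectiveness on some agreed 0-to-1 scale)
    "A": (450.0, 0.72),
    "B": (300.0, 0.55),
    "C": (480.0, 0.70),   # dominated by A: costs more, achieves less
    "D": (250.0, 0.40),
    "E": (600.0, 0.85),
}

def dominates(p: Tuple[float, float], q: Tuple[float, float]) -> bool:
    """p dominates q if p is no more costly and no less effective,
    and strictly better in at least one of the two dimensions."""
    return (p[0] <= q[0] and p[1] >= q[1]) and (p[0] < q[0] or p[1] > q[1])

non_dominated = {
    name: point
    for name, point in designs.items()
    if not any(dominates(other, point)
               for other_name, other in designs.items() if other_name != name)
}

# Designs not listed are feasible but dominated; they lie inside the envelope.
for name, (cost, eff) in sorted(non_dominated.items(), key=lambda kv: kv[1][0]):
    print(f"{name}: cost={cost:.0f}, effectiveness={eff:.2f}")
```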
Design trade studies, an important part of the systems engineering process, often attempt to find designs that provide a better combination of the various dimensions of cost and effectiveness. When the starting point for a design trade study is inside the envelope, there are alternatives that reduce costs without decreasing any aspect of effectiveness, or increase some aspect of effectiveness without decreasing others and without increasing costs. Then, the system manager's or system engineer's decision is easy. Other than in the sizing of subsystems, such "win-win" design trades are uncommon, but by no means rare. When the alternatives in a design trade study, however, require trading cost for effectiveness, or even one dimension of effectiveness for another at the same cost, the decisions become harder.

The process of finding the most cost-effective design is further complicated by uncertainty, which is shown in Figure 2 as a modification of Figure 1. Exactly what outcomes will be realized by a particular system design cannot be known in advance with certainty, so the projected cost and effectiveness of a design are better described by a probability distribution than by a point. This distribution can be thought of as a cloud which is thickest at the most likely value and thinner farther away from the most likely point, as is shown for design concept A in the figure. Distributions resulting from designs which have little uncertainty are dense and highly compact, as is shown for concept B. Distributions associated with risky designs may have significant probabilities of producing highly undesirable outcomes, as is suggested by the presence of an additional low effectiveness/high cost cloud for concept C. (Of course, the envelope of such clouds cannot be a sharp line such as is shown in the figures, but must itself be rather fuzzy. The line can now be thought of as representing the envelope at some fixed confidence level—that is, a probability of x of achieving that effectiveness.)

Figure 2—Estimates of Outcomes to be Obtained from Several Design Concepts Including Uncertainty.

Both effectiveness and cost may require several descriptors. Even the Echo balloons obtained scientific data on the electromagnetic environment and atmospheric drag, in addition to their primary mission as communications satellites. Furthermore, Echo was the first satellite visible to the naked eye, an unquantified—but not unrecognized—aspect of its effectiveness. Costs, the expenditure of limited resources, may be measured in the several dimensions of funding, personnel, use of facilities, and so on. Schedule may appear as an attribute of effectiveness or cost, or as a constraint. Sputnik, for example, drew much of its effectiveness from the fact that it was a "first"; a mission to Mars that misses its launch window has to wait about two years for another opportunity—a clear schedule constraint. Risk results from uncertainties in realized effectiveness, costs, timeliness, and budgets.

Sometimes, the systems that provide the highest ratio of effectiveness to cost are the most desirable. However, this ratio is likely to be meaningless or—worse—misleading. To be useful and meaningful, that ratio must be uniquely determined and independent of the system cost. Further, there must be but a single measure of effectiveness and a single measure of cost. If the numerical values of those metrics are obscured by probability distributions, the ratios become uncertain as well; then any usefulness the simple, single ratio of two numbers might have had disappears.

In some contexts, it is appropriate to seek the most effectiveness possible within a fixed budget; in other contexts, it is more appropriate to seek the least cost possible with specified effectiveness. In these cases, there is the question of what level of effectiveness to specify or of what level of costs to fix. In practice, these may be mandated in the form of performance or cost requirements; it then becomes appropriate to ask whether a slight relaxation of requirements could produce a significantly cheaper system or whether a few more resources could produce a significantly more effective system.

Usually, the system manager must choose among designs that differ in terms of numerous attributes.
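One common way to make the uncertainty "clouds" of Figure 2 concrete is to propagate assumed probability distributions for the underlying cost and performance drivers into distributions of projected cost and effectiveness. The sketch below does this by simple Monte Carlo sampling; every distribution and parameter in it is a hypothetical illustration, not handbook data.

```python
# Minimal sketch: Monte Carlo "clouds" of projected cost and effectiveness for a
# single design concept, in the spirit of Figure 2. Numbers are hypothetical.
import random
import statistics

random.seed(1)

def sample_outcome() -> tuple:
    # Cost driver: a baseline cost scaled by an uncertain growth factor.
    cost = 400.0 * random.lognormvariate(0.0, 0.15)          # $M
    # Effectiveness driver: nominal performance with modest scatter.
    effectiveness = random.gauss(0.70, 0.05)
    # Rare, highly undesirable outcome (partial failure): low effectiveness,
    # higher cost -- the extra "cloud" described for concept C.
    if random.random() < 0.05:
        effectiveness *= 0.2
        cost *= 1.3
    return cost, max(0.0, min(1.0, effectiveness))

samples = [sample_outcome() for _ in range(10_000)]
costs = sorted(c for c, _ in samples)
effs = sorted(e for _, e in samples)

print(f"cost:          mean={statistics.mean(costs):.0f} $M, "
      f"90th percentile={costs[int(0.9 * len(costs))]:.0f} $M")
print(f"effectiveness: mean={statistics.mean(effs):.2f}, "
      f"10th percentile={effs[int(0.1 * len(effs))]:.2f}")
```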
A variety of methods have been developed that can be used to help managers uncover their preferences between attributes and to quantify their subjective assessments of relative value. When this can be done, trades between attributes can be assessed quantitatively. Often, however, the attributes seem to be truly incommensurate; managers must make their decisions in spite of this multiplicity.

The System Engineer's Dilemma
At each cost-effective solution:
• To reduce cost at constant risk, performance must be reduced.
• To reduce risk at constant cost, performance must be reduced.
• To reduce cost at constant performance, higher risks must be accepted.
• To reduce risk at constant performance, higher costs must be accepted.
In this context, time in the schedule is often a critical resource, so that schedule behaves like a kind of cost.

2.4 Disciplines Related to Systems Engineering

The definition of systems engineering given in Section 2.2 could apply to the design task facing a bridge designer, a radio engineer, or even a committee chair. The systems engineering process can be a part of all of these. It cannot be the whole of the job—the bridge designer must know the properties of concrete and steel, the radio engineer must apply Maxwell's equations, and a committee chair must understand the personalities of the members of the committee. In fact, the optimization of systems requires collaboration with experts in a variety of disciplines, some of which are compared to systems engineering in the remainder of this section.

The role of systems engineering differs from that of system management in that engineering is an analytical, advisory and planning function, while management is the decision-making function. Very often, the distinction is irrelevant, as the same individuals may perform both roles. When no factors enter the decision-making process other than those that are covered by the analyses, system management may delegate some of the management responsibility to the systems engineering function.

Systems engineering differs from what might be called design engineering in that systems engineering deals with the relationships of the thing being designed to its supersystem (environment) and subsystems, rather than with the internal details of how it is to accomplish its objectives. The systems viewpoint is broad, rather than deep: it encompasses the system functionally from end to end and temporally from conception to disposal.

System engineers must also rely on contributions from the specialty engineering disciplines, in addition to the traditional design disciplines, for functional expertise and specialized analytic methods. These specialty engineering areas typically include reliability, maintainability, logistics, test, production, transportation, human factors, quality assurance, and safety engineering. Specialty engineers contribute throughout the systems engineering process; part of the system engineer's job is to see that these functions are coherently integrated into the project at the right times and that they address the relevant issues. One of the objectives for Chapter 6 is to develop an understanding of how these specialty engineers contribute to the objective of systems engineering.

In both systems analysis and systems engineering, the amounts and kinds of resources to be made available for the creation of the system are assumed to be among the decisions to be made. Systems engineering concentrates on the creation of hardware and software architectures and on the development and management of the interfaces between subsystems, while relying on systems analysis to construct the mathematical models and analyze the data to evaluate alternative designs and to perform the actual design trade studies. Systems analysis often requires the use of tools from operations research, economics, or other decision sciences, and systems analysis curricula generally include extensive study of such topics as probability, statistics, decision theory, queueing theory, game theory, linear and non-linear programming, and so on. In practice, many system engineers' academic background is richer in the engineering disciplines than in the decision sciences. As a consequence, the system engineer is often a consumer of systems analysis products, rather than a producer of them. One of the major objectives for Chapter 5 is to develop an understanding and appreciation of the state of that art.

Operations research and operations engineering confine their attention to systems whose components are assumed to be more or less immutable. That is, it is assumed that the resources with which the system operates cannot be changed, but that the way in which they are used is amenable to optimization. Operations research techniques often provide powerful tools for the optimization of system designs.

Within NASA, terms such as mission analysis and engineering are often used to describe all study and design efforts that relate to determination of what the project's mission should be and how it should be carried out. Sometimes the scope is limited to the study of future projects. Sometimes the charters of organizations with such names include monitoring the capabilities of systems, ensuring that important considerations have not been overlooked, and overseeing trades between major systems—thereby encompassing operations research, systems analysis, and systems engineering activities.

Total quality management (TQM) is the application of systems engineering to the work environment.
That is, part of the total quality management paradigm is the realization that an operating organization is a particular kind of system and should be engineered as one. A variety of specialized tools have been developed for this application area; many of them can be recognized as established systems engineering tools, but with different names. The injunction to focus on the satisfaction of customer needs, for example, is even expressed in similar terms. The use of statistical process control is akin to the use of technical performance and earned value measurements. Another method, quality function deployment (QFD), is a technique of requirements analysis often used in systems engineering.

The systems approach is common to all of these related fields. Essential to the systems approach is the recognition that a system exists, that it is embedded in a supersystem on which it has an impact, that it may contain subsystems, and that the system's objectives must be understood, preferably explicitly identified.

2.5 The Doctrine of Successive Refinement

The realization of a system over its life cycle results from a succession of decisions among alternative courses of action. If the alternatives are precisely enough defined and thoroughly enough understood to be well differentiated in the cost-effectiveness space, then the system manager can make choices among them with confidence.

The systems engineering process can be thought of as the pursuit of definition and understanding of design alternatives to support those decisions, coupled with the overseeing of their implementation. To obtain assessments that are crisp enough to facilitate good decisions, it is often necessary to delve more deeply into the space of possible designs than has yet been done, as is illustrated in Figure 3.

It should be realized, however, that this spiral represents neither the project life cycle, which encompasses the system from inception through disposal, nor the product development process by which the system design is developed and implemented, which occurs in Phases C and D (see Chapter 3) of the project life cycle. Rather, as the intellectual process of systems engineering, it is inevitably reflected in both of them.

Figure 3—The Doctrine of Successive Refinement.

Figure 3 is really a double helix—each "create concepts" step at the level of design engineering initiates a capabilities definition spiral moving in the opposite direction. The concepts can never be created from whole cloth. Rather, they result from the synthesis of potential capabilities offered by the continually changing state of technology. This process of design concept development by the integration of lower-level elements is a part of the systems engineering process. In fact, there is always a danger that the top-down process cannot keep up with the bottom-up process.

There is often an early need to resolve the issues (such as the system architecture) enough so that the system can be modeled with sufficient realism to do reliable trade studies.

When resources are expended toward the implementation of one of several design options, the resources required to complete the implementation of that design decrease (of course), while there is usually little or no change in the resources that would be required by unselected alternatives. Selected alternatives thereby become even more attractive than those that were not selected.

Consequently, it is reasonable to expect the system to be defined with increasingly better resolution as time passes. This tendency is formalized at some point (in Phase B) by defining a baseline system definition. Usually, the goals, objectives, and constraints are baselined as the requirements portion of the baseline. The entire baseline is then subjected to configuration control in an attempt to ensure that successive changes are indeed improvements.

As the system is realized, its particulars become clearer—but also harder to change. As stated above, the purpose of systems engineering is to make sure that the development process happens in a way that leads to the most cost-effective final system. The basic idea is that before those decisions that are hard to undo are made, the alternatives should be carefully assessed.

The systems engineering process is applied again and again as the system is developed. As the system is realized, the issues addressed evolve and the particulars of the activity change.
Most of the major system decisions (goals, architecture, acceptable life-cycle cost, etc.) are made during the early phases of the project, so the turns of the spiral (that is, the successive refinements) do not correspond precisely to the phases of the system life cycle. Much of the system architecture can be "seen" even at the outset, so the turns of the spiral do not correspond exactly to development of the architectural hierarchy, either. Rather, they correspond to the successively greater resolution by which the system is defined. Each of the steps in the systems engineering process is discussed below.

As an Example of the Process of Successive Refinement, Consider the Choice of Altitude for a Space Station such as Alpha
• The first issue is selection of the general location. Alternatives include Earth orbit, one of the Earth-Moon Lagrange points, or a solar orbit. At the current state of technology, cost and risk considerations made selection of Earth orbit an easy choice for Alpha.
• Having chosen Earth orbit, it is necessary to select an orbit region. Alternatives include low Earth orbit (LEO), high Earth orbit, and geosynchronous orbit; orbital inclination and eccentricity must also be chosen. One of many criteria considered in choosing LEO for Alpha was the design complexity associated with passage through the Van Allen radiation belts.
• System design choices proceed to the selection of an altitude maintenance strategy—rules that implicitly determine when, where, and why to reboost, such as "maintain altitude such that there are always at least TBD days to reentry," "collision avoidance maneuvers shall always increase the altitude," "reboost only after resupply flights that have brought fuel," "rotate the crew every TBD days."
• A next step is to write altitude specifications. These choices might consist of replacing the TBDs (values to be determined) in the altitude strategy with explicit numbers.
• Monthly operations plans are eventually part of the complete system design. These would include scheduled reboost burns based on predictions of the accumulated effect of drag and the details of on-board microgravity experiments.
• Actual firing decisions are based on determinations of the orbit which results from the momentum actually added by previous firings, the atmospheric density variations actually encountered, and so on.
Note that decisions at every step require that the capabilities offered by available technology be considered—often at levels of design that are more detailed than seems necessary at first.

Recognize Need/Opportunity. This step is shown in Figure 3 only once, as it is not really part of the spiral but its first cause. It could be argued that recognition of the need or opportunity for a new system is an entrepreneurial activity, rather than an engineering one. The end result of this step is the discovery and delineation of the system's goals, which generally express the desires and requirements of the eventual users of the system. In the NASA context, the system's goals should also represent the long-term interests of the taxpaying public.

Identify and Quantify Goals. Before it is possible to compare the cost-effectiveness of alternative system design concepts, the mission to be performed by the system must be delineated. The goals that are developed should cover all relevant aspects of effectiveness, cost, schedule, and risk, and should be traceable to the goals of the supersystem. To make it easier to choose among alternatives, the goals should be stated in quantifiable, verifiable terms, insofar as that is possible and meaningful to do.

It is also desirable to assess the constraints that may apply. Some constraints are imposed by the state of technology at the time of creating or modifying system design concepts. Others may appear to be inviolate, but can be changed by higher levels of management. The assumptions and other relevant information that underlie constraints should always be recorded so that it is possible to estimate the benefits that could be obtained from their relaxation.

At each turn of the spiral, higher-level goals are analyzed. The analysis should identify the subordinate enabling goals in a way that makes them traceable to the next higher level. As the systems engineering process continues, these are documented as functional requirements (what must be done to achieve the next-higher-level goals) and performance requirements (quantitative descriptions of how well the functional requirements must be done). A clear operations concept often helps to focus the requirements analysis so that both functional and performance requirements are ultimately related to the original need or opportunity. In later turns of the spiral, further elaborations may become documented as detailed functional and performance specifications.

Create Alternative Design Concepts. Once it is understood what the system is to accomplish, it is possible to devise a variety of ways that those goals can be met. Sometimes, that comes about as a consequence of considering alternative functional allocations and integrating available subsystem design options. Ideally, as wide a range of plausible alternatives as is consistent with the design organization's charter should be defined, keeping in mind the current stage in the process of successive refinement. When the bottom-up process is operating, a problem for the system engineer is that the designers tend to become fond of the designs they create, so they lose their objectivity; the system
engineer often must stay an "outsider" so that there is more objectivity.

On the first turn of the spiral in Figure 3, the subject is often general approaches or strategies, sometimes architectural concepts. On the next, it is likely to be functional design, then detailed design, and so on.

The reason for avoiding a premature focus on a single design is to permit discovery of the truly best design. Part of the system engineer's job is to ensure that the design concepts to be compared take into account all interface requirements. "Did you include the cabling?" is a characteristic question. When possible, each design concept should be described in terms of controllable design parameters so that each represents as wide a class of designs as is reasonable. In doing so, the system engineer should keep in mind that the potentials for change may include organizational structure, schedules, procedures, and any of the other things that make up a system. When possible, constraints should also be described by parameters.

Owen Morris, former Manager of the Apollo Spacecraft Program and Manager of Space Shuttle Systems and Engineering, has pointed out that it is often useful to define design reference missions which stress all of the system's capabilities to a significant extent and which all designs will have to be able to accomplish. The purpose of such missions is to keep the design space open. Consequently, it can be very dangerous to write them into the system specifications, as they can have just the opposite effect.

Do Trade Studies. Trade studies begin with an assessment of how well each of the design alternatives meets the system goals (effectiveness, cost, schedule, and risk, both quantified and otherwise). The ability to perform these studies is enhanced by the development of system models that relate the design parameters to those assessments—but it does not depend upon them. Controlled modification and development of design concepts, together with such system models, often permits the use of formal optimization techniques to find regions of the design space that warrant further investigation—those that are closer to the optimum surface indicated in Figure 1.

Whether system models are used or not, the design concepts are developed, modified, reassessed, and compared against competing alternatives in a closed-loop process that seeks the best choices for further development. System and subsystem sizes are often determined during the trade studies. The end result is the determination of bounds on the relative cost-effectivenesses of the design alternatives, measured in terms of the quantified system goals. (Only bounds, rather than final values, are possible because determination of the final details of the design is intentionally deferred. The bounds, in turn, may be derived from the probability density functions.) Increasing detail associated with the continually improving resolution reduces the spread between upper and lower bounds as the process proceeds.

Select Concept. Selection among the alternative design concepts is a task for the system manager, who must take into account the subjective factors that the system engineer was unable to quantify, in addition to the estimates of how well the alternatives meet the quantified goals (and any effectiveness, cost, schedule, risk, or other constraints).

When it is possible, it is usually well worth the trouble to develop a mathematical expression, called an objective function, that expresses the values of combinations of possible outcomes as a single measure of cost-effectiveness, as is illustrated in Figure 4, even if both cost and effectiveness must be described by more than one measure. When achievement of the goals can be quantitatively expressed by such an objective function, designs can be compared in terms of their value. Risks associated with design concepts can cause these evaluations to be somewhat nebulous (because they are uncertain and are best described by probability distributions). In this illustration, the risks are relatively high for design concept A. There is little risk in either effectiveness or cost for concept B, while the risk of an expensive failure is high for concept C, as is shown by the cloud of probability near the x axis with a high cost and essentially no effectiveness. Schedule factors may affect the effectiveness values, the cost values, and the risk distributions.

Figure 4—A Quantitative Objective Function, Dependent on Life-Cycle Cost and All Aspects of Effectiveness.

The mission success criteria for systems differ significantly. In some cases, effectiveness goals may be much more important than all others. Other projects may demand low costs, have an immutable schedule, or require minimization of some kinds of risks.
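Where a single-valued objective function of the kind illustrated in Figure 4 can be constructed at all, concept selection reduces to evaluating it for each alternative. The sketch below shows the arithmetic with hypothetical weights, attribute scores, and concept names; the caveats that follow about combining incommensurate and uncertain factors still apply.

```python
# Minimal sketch: a single-valued objective function over life-cycle cost and
# several effectiveness attributes, used to compare design concepts A, B, and C.
# Weights, scales, and estimates are hypothetical illustrations.
from typing import Dict

# Relative value weights for effectiveness attributes (sum to 1.0 here).
WEIGHTS = {"payload_mass": 0.5, "availability": 0.3, "data_return": 0.2}
COST_WEIGHT = 0.001          # value units lost per $M of life-cycle cost

concepts: Dict[str, Dict[str, float]] = {
    # attribute scores normalized to [0, 1]; life_cycle_cost in $M
    "A": {"payload_mass": 0.90, "availability": 0.6, "data_return": 0.8,
          "life_cycle_cost": 520.0},
    "B": {"payload_mass": 0.70, "availability": 0.8, "data_return": 0.7,
          "life_cycle_cost": 410.0},
    "C": {"payload_mass": 0.95, "availability": 0.5, "data_return": 0.9,
          "life_cycle_cost": 650.0},
}

def objective(concept: Dict[str, float]) -> float:
    # Weighted effectiveness minus a penalty proportional to life-cycle cost.
    effectiveness = sum(WEIGHTS[attr] * concept[attr] for attr in WEIGHTS)
    return effectiveness - COST_WEIGHT * concept["life_cycle_cost"]

for name, attrs in sorted(concepts.items(), key=lambda kv: objective(kv[1]),
                          reverse=True):
    print(f"Concept {name}: objective = {objective(attrs):+.3f}")
```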
Rarely (if ever) is it possible to produce a combined quantitative measure that relates all of the important factors, even if it is expressed as a vector with several components. Even when that can be done, it is essential that the underlying factors and relationships be thoroughly revealed to and understood by the system manager. The system manager must weigh the importance of the unquantifiable factors along with the quantitative data provided by the system engineer.

Technical reviews of the data and analyses are an important part of the decision support packages prepared for the system manager. The decisions that are made are generally entered into the configuration management system as changes to (or elaborations of) the system baseline. The supporting trade studies are archived for future use. An essential feature of the systems engineering process is that trade studies are performed before decisions are made. They can then be baselined with much more confidence.

At this point in the systems engineering process, there is a logical branch point. For those issues for which the process of successive refinement has proceeded far enough, the next step is to implement the decisions at that level of resolution (that is, unwind the recursive process). For those issues that are still insufficiently resolved, the next step is to refine the development further.

Increase the Resolution of the Design. One of the first issues to be addressed is how the system should be subdivided into subsystems. (Once that has been done, the focus changes and the subsystems become systems—from the point of view of a system engineer. The partitioning process stops when the subsystems are simple enough to be managed holistically.) As noted by Morris, "the division of program activities to minimize the number and complexity of interfaces has a strong influence on the overall program cost and the ability of the program to meet schedules."

Charles Leising and Arnold Ruskin have (separately) pointed out that partitioning is more art than science, but that there are guidelines available: To make interfaces clean and simple, similar functions, designs and technologies should be grouped. Each portion of work should be verifiable. Pieces should map conveniently onto the organizational structure. Some of the functions that are needed throughout the design (such as electrical power) or throughout the organization (such as purchasing) can be centralized. Standardization—of such things as parts lists or reporting formats—is often desirable. The accounting system should follow (not lead) the system architecture. In terms of breadth, partitioning should be done essentially all at once. As with system design choices, alternative partitioning plans should be considered and compared before implementation.

Simple Interfaces are Preferred
According to Morris, NASA's former Acting Administrator George Low, in a 1971 paper titled "What Made Apollo a Success," noted that only 100 wires were needed to link the Apollo spacecraft to the Saturn launch vehicle. He emphasized the point that a single person could fully understand the interface and cope with all the effects of a change on either side of it.

If a requirements-driven design paradigm is used for the development of the system architecture, it must be applied with care, for the use of "shells" creates a tendency for the requirements to be treated as inviolable constraints rather than as agents of the objectives. A goal, objective or desire should never be made a requirement until its costs are understood and the buyer is willing to pay for it. The capability to compute the effects of lower-level decisions on the quantified goals should be maintained throughout the partitioning process. That is, there should be a goals flowdown embedded in the requirements allocation process.
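At bottom, a goals flowdown is a traceability structure: each functional or performance requirement points back to the higher-level goal it serves, so the effect of a lower-level decision can be followed upward. The sketch below (requirement identifiers and statements are hypothetical, not drawn from any NASA document) shows one minimal representation together with a check for requirements that lack a flowdown.

```python
# Minimal sketch: a goals flowdown recorded as parent/child traceability links,
# with a check that every lower-level requirement traces to a top-level goal.
# Identifiers and statements are hypothetical.
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Requirement:
    req_id: str
    statement: str
    kind: str                 # "goal", "functional", or "performance"
    parent_id: Optional[str]  # None only for top-level goals

reqs: Dict[str, Requirement] = {r.req_id: r for r in [
    Requirement("G-1", "Return science data from the target body", "goal", None),
    Requirement("F-1.1", "Acquire imaging data during flyby", "functional", "G-1"),
    Requirement("P-1.1.1", "Downlink at least 2 Gbit of data per week",
                "performance", "F-1.1"),
    Requirement("P-9.9", "Operate heater at 20 W", "performance", None),  # orphan
]}

def trace_to_goal(req: Requirement) -> Optional[Requirement]:
    """Walk parent links upward; return the top-level goal, or None if the
    chain does not end at a goal."""
    current = req
    while current.parent_id is not None:
        current = reqs[current.parent_id]
    return current if current.kind == "goal" else None

for r in reqs.values():
    goal = trace_to_goal(r)
    status = f"traces to {goal.req_id}" if goal else "NO FLOWDOWN (needs a parent goal)"
    print(f"{r.req_id:8s} {status}")
```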
The process continues with creation of a variety of alternative design concepts at the next level of resolution, construction of models that permit prediction of how well those alternatives will satisfy the quantified goals, and so on. It is imperative that plans for subsequent integration be laid throughout the partitioning. Integration plans include verification and validation activities as a matter of course.

Implement the Selected Design Decisions. When the process of successive refinement has proceeded far enough, the next step is to reverse the partitioning process. When applied to the system architecture, this "unwinding" of the process is called system integration. Conceptual system integration takes place in all phases of the project life cycle. That is, when a design approach has been selected, the approach is verified by "unwinding the process" to test whether the concept at each physical level meets the expectations and requirements. Physical integration is accomplished during Phase D. At the finer levels of resolution, pieces must be tested, assembled and/or integrated, and tested again. The system engineer's role includes the performance of the delegated management duties, such as configuration control and overseeing the integration, verification, and validation process.

The purpose of verification of subsystem integration is to ensure that the subsystems conform to what was designed and interface with each other as expected in all respects that are important: mechanical connections, effects on center of mass and products of inertia, electromagnetic interference, connector impedance and voltage, power consumption, data flow, and so on. Validation consists of ensuring that the interfaced subsystems achieve their intended results. While validation is even more important than verification, it is usually much more difficult to accomplish.

Perform the Mission. Eventually, the system is called upon to meet the need or seize the opportunity for which it was designed and built.
The system engineer continues to perform a variety of supporting functions, depending on the nature and duration of the mission. On a large project such as Space Station Alpha, some of these continuing functions include the validation of system effectiveness at the operational site, overseeing the maintenance of configuration and logistics documentation, overseeing sustaining engineering activities, compiling development and operations "lessons learned" documents, and, with the help of the specialty engineering disciplines, identifying product improvement opportunities. On smaller systems, such as a Spacelab payload, only the last two may be needed.
3 The Project Life Cycle for Major NASA Systems

One of the fundamental concepts used within NASA for the management of major systems is the program/project life cycle, which consists of a categorization of everything that should be done to accomplish a project into distinct phases, separated by control gates. Phase boundaries are defined so that they provide more-or-less natural points for go/no-go decisions. Decisions to proceed may be qualified by liens that must be removed within a reasonable time. A project that fails to pass a control gate and has enough resources may be allowed to "go back to the drawing board"—or it may be terminated.

All systems start with the recognition of a need or the discovery of an opportunity and proceed through various stages of development to a final disposition. While the most dramatic impacts of the analysis and optimization activities associated with systems engineering are obtained in the early stages, decisions that affect millions of dollars of value or cost continue to be amenable to the systems approach even as the end of the system lifetime approaches.

Decomposing the project life cycle into phases organizes the entire process into more manageable pieces. The project life cycle should provide managers with incremental visibility into the progress being made at points in time that fit with the management and budgetary environments. NASA documents governing the acquisition of major systems (NMI 7120.4 and NHB 7120.5) define the phases of the project life cycle as:

• Pre-Phase A—Advanced Studies ("find a suitable project")
• Phase A—Preliminary Analysis ("make sure the project is worthwhile")
• Phase B—Definition ("define the project and establish a preliminary design")
• Phase C—Design ("complete the system design")
• Phase D—Development ("build, integrate, and verify the system, and prepare for operations")
• Phase E—Operations ("operate the system and dispose of it properly").
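For illustration only, the phase sequence and its control gates can be caricatured as a small data structure with a gate check before each transition. The one-gate-per-phase pairing below is a simplification made for the sketch; the gate names are drawn from the phase descriptions later in this chapter, and real projects define several gates per phase.

# Illustrative sketch only: an ordered list of (phase, control gate) pairs and a
# check that refuses to advance past a gate that has not been passed.

PHASES = [
    ("Pre-Phase A: Advanced Studies", "Mission Concept Review"),
    ("Phase A: Preliminary Analysis", "Mission Definition Review"),
    ("Phase B: Definition", "Preliminary Design Review"),
    ("Phase C: Design", "Critical Design Review"),
    ("Phase D: Development", "Operational Readiness Review"),
    ("Phase E: Operations", "Decommissioning Review"),
]

def next_phase(current_index: int, gates_passed: set) -> int:
    """Advance only if the current phase's control gate has been passed."""
    phase, gate = PHASES[current_index]
    if gate not in gates_passed:
        raise RuntimeError(f"Cannot leave '{phase}': control gate '{gate}' not passed")
    return current_index + 1

passed = {"Mission Concept Review"}
index = next_phase(0, passed)              # moves into Phase A
print("Now in:", PHASES[index][0])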
Phase A efforts are conducted by NASA field centers; such efforts may rely, however, on pre-Phase A in-house and contracted advanced studies. The majority of Phase B efforts are normally accomplished by industry under NASA contract, but NASA field centers typically conduct parallel in-house studies in order to validate the contracted effort and remain an informed buyer. NASA usually chooses to contract with industry for Phases C and D, and often does so for Phase E. Phase C is nominally combined with Phase D, but when large production quantities are planned, these are treated separately.

Alternatives to the project phases described above can easily be found in industry and elsewhere in government. In general, the engineering development life cycle is dependent on the technical nature of what is being developed, and the project life cycle may need to be tailored accordingly. Barry W. Boehm described how several contemporary software development processes work; in some of these processes, the development and construction activities proceed in parallel, so that attempting to separate the associated phases on a time line is undesirable. Boehm describes a spiral, which reflects the doctrine of successive refinement depicted in Figure 3, but Boehm's spiral describes the software product development process in particular. His discussion applies as well to the development of hardware products as it does to software. Other examples of alternative processes are the rapid prototyping and rapid development approaches. Selection of a product development process paradigm must be a case-dependent decision, based on the system engineer's judgment and experience.

Sometimes, it is appropriate to perform some long-lead-time activities ahead of the time they would nominally be done. Long-lead-time activities might consist of technology developments, prototype construction and testing, or even fabrication of difficult components. Doing things out of their usual sequence increases risk in that those activities could wind up having been either unnecessary or improperly specified. On the other hand, overall risk can sometimes be reduced by removal of such activities from the critical path.

Figure 5 (foldout, next page) details the resulting management and major systems engineering products and control gates that characterize the phases in NMI 7120.4 and NHB 7120.5. Sections 3.1 to 3.6 contain narrative descriptions of the purposes, major activities, products, and control gates of the NASA project life cycle phases. Section 3.7 provides a more concentrated discussion of the role of systems engineering in the process. Section 3.8 describes the NASA budget cycle within which program/project managers and system engineers must operate.

3.1 Pre-Phase A—Advanced Studies

The purpose of this activity, which is usually performed more or less continually by "Advanced Projects" groups, is to uncover, invent, create, concoct and/or devise a broad spectrum of ideas and alternatives for missions from which new projects (programs) can be selected. Typically, this activity consists of loosely structured examinations of new ideas, usually without central control and mostly oriented toward small studies. Its major product is a stream of suggested projects, based on the identification of needs and the discovery of opportunities that are potentially consistent with NASA's mission, capabilities, priorities, and resources.
  • NASA Systems Engineering Handbook Page 12 Pre-Phase A—Advanced Studies system before seeking significant funding. According to NHB 7120.5, the major products of this phase are a Purpose: To produce a broad spectrum of ideas formal Mission Needs Statement (MNS) and one or and alternatives for missions from which new pro- more credible, feasible designs and operations grams/ projects can be selected. concepts. John Hodge describes this phase as "a Major Activities and their Products: Identify structured version of the previous phase." missions consistent with charter Identify and Phase A—Preliminary Analysis involve users Perform preliminary evaluations of possible Purpose: To determine the feasibility and missions Prepare program/project proposals, desirability of a suggested new major system and which include: its compatibility with NASAs strategic plans. • Mission justification and objectives Major Activities and their Products: • Possible operations concepts Prepare Mission Needs Statement • Possible system architectures Develop top-level requirements • Cost, schedule, and risk estimates. Develop corresponding evaluation criteria/metrics Develop master plans for existing program areas Identify alternative operations and logistics Information Baselined: concepts (nothing) Identify project constraints and system boundaries Control Gates: Consider alternative design concepts, including: Mission Concept Review feasibility and risk studies, cost and schedule Informal proposal reviews estimates, and advanced technology requirements In the NASA environment, demands for new Demonstrate that credible, feasible design(s) existsystems derive from several sources. A major one is Acquire systems engineering tools and modelsthe opportunity to solve terrestrial problems that may Initiate environmental impact studiesbe ad-dressed by putting instruments and other Prepare Project Definition Plan for Phase Bdevices into space. Two examples are weather Information Baselined:prediction and communications by satellite. General (nothing)improvements in technology for use in space will Control Gates:continue to open new possibilities. Such opportunities Mission Definition Revieware rapidly perceived as needs once the magnitude of Preliminary Non-Advocate Reviewtheir value is understood. Preliminary Program/Project Approval Review Technological progress makes possiblemissions that were previously impossible. Mannedtrips to the moon and the taking of high resolution In Phase A, a larger team, often associatedpictures of planets and other objects in the universe with an ad hoc program or project office, readdressesillustrate past responses to this kind of opportunity. the mission concept to ensure that the projectNew opportunities will continue to become available justification and practicality are sufficient to warrant aas our technological capabilities grow. place in NASAs budget. The teams effort focuses on Scientific progress also generates needs for analyzing mission requirements and establishing aNASA systems. As our understanding of the universe mission architecture. Activities become formal, andaround us continues to grow, we are able to ask new the emphasis shifts toward establishing optimalityand more precise questions. The ability to answer rather than feasibility. The effort addresses morethese questions often depends upon the changing depth and considers many alternatives. Goals andstate of technology. 
objectives are solidified, and the project develops Advanced studies may extend for several more definition in the system requirements, top-levelyears, and may be a sequence of papers that are only system architecture, and operations concept.loosely connected. These studies typically focus on Conceptual designs are developed and exhibit moreestablishing mission goals and formulating top-level engineering detail than in advanced studies.system requirements and operations concepts. Technical risks are identified in more detail andConceptual designs are often offered to demonstrate technology development needs become focused.feasibility and support programmatic estimates. The The Mission Needs Statement is not shownemphasis is on establishing feasibility and desirability in the sidebar as being baselined, as it is not underrather than optimality. Analyses and designs are configuration control by the project. It may be underaccordingly limited in both depth and number of configuration control at the program level, as may theoptions. program requirements documents and the Preliminary Program Plan.3.2 Phase A—Preliminary AnalysisThe purpose of this phase is to further examine thefeasibility and desirability of a suggested new major
Figure 6—Overruns are Very Likely if Phases A and B are Underfunded.

3.3 Phase B—Definition

The purpose of this phase is to establish an initial project baseline, which (according to NHB 7120.5) includes "a formal flowdown of the project-level performance requirements to a complete set of system and subsystem design specifications for both flight and ground elements" and "corresponding preliminary designs." The technical requirements should be sufficiently detailed to establish firm schedule and cost estimates for the project.

Phase B—Definition

Purpose: To define the project in enough detail to establish an initial baseline capable of meeting mission needs.
Major Activities and their Products:
Prepare a Systems Engineering Management Plan
Prepare a Risk Management Plan
Initiate configuration management
Prepare engineering specialty program plans
Develop system-level cost-effectiveness model
Restate mission needs as functional requirements
Identify science payloads
Establish the initial system requirements and verification requirements matrix
Perform and archive trade studies
Select a baseline design solution and a concept of operations
Define internal and external interface requirements
(Repeat the process of successive refinement to get "design-to" specifications and drawings, verification plans, and interface documents to lower levels as appropriate)
Define the work breakdown structure
Define verification approach and policies
Identify integrated logistics support requirements
Establish technical resource estimates and firm life-cycle cost estimates
Develop statement(s) of work
Initiate advanced technology developments
Revise and publish a Project Plan
Reaffirm the Mission Needs Statement
Prepare a Program Commitment Agreement
Information Baselined:
System requirements and verification requirements matrix
System architecture and work breakdown structure
Concept of operations
"Design-to" specifications at all levels
Project plans, including schedule, resources, acquisition strategies and risk management

A Credible, Feasible Design

A feasible system design is one that can be implemented as designed and can then accomplish the system's goals within the constraints imposed by the fiscal and operating environment. To be credible, a design must not depend on the occurrence of unforeseen breakthroughs in the state of the art. While a credible design may assume likely improvements in the state of the art, it is nonetheless riskier than

Actually, "the" Phase B baseline consists of a collection of evolving baselines covering technical and business aspects of the project: system (and subsystem) requirements and specifications, designs, verification and operations plans, and so on in the technical portion of the baseline, and schedules, cost projections, and management plans in the business portion. Establishment of baselines implies the implementation of configuration management procedures. (See Section 4.7.)

Early in Phase B, the effort focuses on allocating functions to particular items of hardware, software, personnel, etc. System functional and performance requirements along with architectures and designs become firm as system trades and subsystem trades iterate back and forth in the effort to seek out more cost-effective designs. (Trade studies should precede—rather than follow—system design decisions. Chamberlain, Fox, and Duquette describe a decentralized process for ensuring that such trades lead efficiently to an optimum system design.) Major products to this point include an accepted "functional" baseline and preliminary "design-to" baseline for the system and its major end items. The effort also produces various engineering and management plans to prepare for managing the project's downstream processes, such as verification and operations, and for implementing engineering specialty programs.

Along the way to these products, projects are subjected to a Non-Advocate Review, or NAR. This activity seeks to assess the state of project definition in terms of its clarity of objectives and the thoroughness of technical and management plans, technical documentation, alternatives explored, and trade studies performed.
The NAR also seeks to evaluate the cost and schedule estimates, and the contingency reserve in these estimates. The timing of this review is often driven by the Federal budget cycle, which requires at least 16 months between NASA's budget preparation for submission to the President's Office of Management and Budget, and the Congressional funding for a new project start. (See Section 3.8.) There is thus a natural tension between the desire to have maturity in the project at the time of the NAR and the desire to progress efficiently to final design and development.

Later in Phase B, the effort shifts to establishing a functionally complete design solution (i.e., a "design-to" baseline) that meets mission goals and objectives. Trade studies continue. Interfaces among the major end items are defined. Engineering test items may be developed and used to derive data for further design work, and project risks are reduced by successful technology developments and demonstrations. Phase B culminates in a series of preliminary design reviews (PDRs), containing the system-level PDR and PDRs for lower-level end items as appropriate. The PDRs reflect the successive refinement of requirements into designs. Design issues uncovered in the PDRs should be resolved so that final design can begin with unambiguous "design-to" specifications. From this point on, almost all changes to the baseline are expected to represent successive refinements, not fundamental changes. Prior to baselining, the system architecture, preliminary design, and operations concept must have been validated by enough technical analysis and design work to establish a credible, feasible design at a lower level of detail than was sufficient for Phase A.

3.4 Phase C—Design

The purpose of this phase is to establish a complete design ("build-to" baseline) that is ready to fabricate (or code), integrate, and verify. Trade studies continue. Engineering test units more closely resembling actual hardware are built and tested so as to establish confidence that the design will function in the expected environments. Engineering specialty analysis results are integrated into the design, and the manufacturing process and controls are defined and validated. Configuration management continues to track and control design changes as detailed interfaces are defined. At each step in the successive refinement of the final design, corresponding integration and verification activities are planned in greater detail. During this phase, technical parameters, schedules, and budgets are closely tracked to ensure that undesirable trends (such as an unexpected growth in spacecraft mass or increase in its cost) are recognized early enough to take corrective action. (See Section 4.9.)
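The kind of tracking described above can be illustrated with a small technical performance measurement sketch; the mass allocation, reserve policy, and monthly estimates are hypothetical values chosen for the example.

# Illustrative sketch only: flag an undesirable trend in a technical performance
# measure (here, spacecraft mass) before the allocation is actually exceeded.

ALLOCATION_KG = 400.0
REQUIRED_MARGIN = 0.10           # keep 10% of the allocation in reserve

monthly_estimates_kg = [352.0, 355.5, 361.0, 368.5, 377.0]   # current best estimates

for month, estimate in enumerate(monthly_estimates_kg, start=1):
    margin = (ALLOCATION_KG - estimate) / ALLOCATION_KG
    status = "OK" if margin >= REQUIRED_MARGIN else "ALERT: margin below reserve"
    print(f"Month {month}: estimate {estimate:.1f} kg, margin {margin:.1%} -> {status}")

In this made-up series the alert fires while the estimate is still under the allocation, which is the point of the handbook's guidance: trends should be caught early enough for corrective action.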
Phase C culminates in a series of critical design reviews (CDRs) containing the system-level CDR and CDRs corresponding to the different levels of the system hierarchy. The CDR is held prior to the start of fabrication/production of end items for hardware and prior to the start of coding of deliverable software products. Typically, the sequence of CDRs reflects the integration process that will occur in the next phase—that is, from lower-level CDRs to the system-level CDR. Projects, however, should tailor the sequencing of the reviews to meet their individual needs. The final product of this phase is a "build-to" baseline in sufficient detail that actual production can proceed.

Phase C—Design

Purpose: To complete the detailed design of the system (and its associated subsystems, including its operations systems).
Major Activities and their Products:
Add remaining lower-level design specifications to the system architecture
Refine requirements documents
Refine verification plans
Prepare interface documents
(Repeat the process of successive refinement to get "build-to" specifications and drawings, verification plans, and interface documents at all levels)
Augment baselined documents to reflect the growing maturity of the system: system architecture, verification requirements matrix, work breakdown structure, project plans
Monitor project progress against project plans
Develop the system integration plan and the system operation plan
Perform and archive trade studies
Complete manufacturing plan
Develop the end-to-end information system design
Refine Integrated Logistics Support Plan
Identify opportunities for pre-planned product improvement
Confirm science payload selection
Information Baselined:
All remaining lower-level requirements and

3.5 Phase D—Development

The purpose of this phase is to build and verify the system designed in the previous phase, deploy it, and prepare for operations. Activities include fabrication of hardware and coding of software, integration, and verification of the system. Other activities include the initial training of operating personnel and implementation of the Integrated Logistics Support Plan. For flight projects, the focus of activities then shifts to pre-launch integration and launch. For large flight projects, there may be an extended period of orbit insertion, assembly, and initial shake-down operations. The major product is a system that has been shown to be capable of accomplishing the purpose for which it was created.
  • NASA Systems Engineering Handbook Page 16 involve major changes to the system architecture; changes of that scope constitute new "needs," and the project life cycle starts over.3.6 Phase E—Operations Phase D—Development Phase E—Operations Purpose: To build the subsystems (including the Purpose: To actually meet the initially identified op-erations system) and integrate them to create need or to grasp the opportunity, then to the system, meanwhile developing confidence dispose of the system in a responsible manner. that it will be able to meet the system Major Activities and their Products: requirements, then to deploy the system and Train replacement operators and maintainers ensure that it is ready for operations. Conduct the mission(s) Major Activities and their Products Maintain and upgrade the system Fabricate (or code) the parts (i.e., the lowest-level Dispose of the system and supporting processes items in the system architecture) Document Lessons Learned Integrate those items according to the integration Information Baselined: plan and perform verifications, yielding verified Mission outcomes, such as: components and subsystems • Engineering data on system, subsystem and (Repeat the process of successive integration to materials get a verified system) performance Develop verification procedures at all levels Perform system qualification verification(s) • Science data returned Perform system acceptance verification(s) • High resolution photos from orbit Monitor project progress against project plans • Accomplishment records ("firsts") Archive documentation for verifications performed • Discovery of the Van Allen belts Audit "as-built" configurations • Discovery of volcanoes on lo. Document Lessons Learned Operations and maintenance logs Prepare operators manuals Problem/failure reports Prepare maintenance manuals Control Gates: Train initial system operators and maintainers Regular system operations readiness reviews Finalize and implement Integrated Logistics System upgrade reviews Support Plan Safety reviews Integrate with launch vehicle(s) and launch, Decommissioning Review perform orbit insertion, etc., to achieve a deployed system Perform operational verification(s) Phase E encompasses the problem of Information Baselined: dealing with the system when it has completed its "As-built" and "as-deployed" configuration data mission; the time at which this occurs depends on Integrated Logistics Support Plan many factors. For a flight system with a short mission Command sequences for end-to-end command duration, such as a Spacelab payload, disposal may and telemetry validation and ground data require little more than de-integration of the hardware processing and its return to its owner. On large flight projects of Operators manuals long duration, disposal may proceed according to Maintenance manuals long-established plans, or may begin as a result of Control Gates: unplanned events, such as accidents. Alternatively, Test Readiness Reviews (at all levels) technological advances may make it uneconomic to System Acceptance Review continue operating the system either in its current System functional and physical configuration configuration or an improved one. audits In addition to uncertainty as to when this part Flight Readiness Review(s) of the phase begins, the activities associated with Operational Readiness Review safely decommissioning and disposing of a system Safety reviews may be long and complex. 
Consequently, the costs and risks associated with different designs should be considered during the projects earlier phases.The purpose of this phase is to meet the initiallyidentified need or to grasp the initially identified 3.7 Role of Systems Engineering in theopportunity. The products of the phase are the results Project Life Cycleof the mission. This phase encompasses evolution ofthe system only insofar as that evolution does not
This section presents two "idealized" descriptions of the systems engineering activities within the project life cycle. The first is the Forsberg and Mooz "vee" chart, which is taught at the NASA program/project management course. The second is the NASA program/project life cycle process flow developed by the NASA-wide Systems Engineering Process Improvement Task team in 1993/94.

3.7.1 The "Vee" Chart

Forsberg and Mooz describe what they call "the technical aspect of the project cycle" by a vee-shaped chart, starting with user needs on the upper left and ending with a user-validated system on the upper right. Figure 7 provides a summary level overview of those activities. On the left side of the vee, decomposition and definition activities resolve the system architecture, creating the details of the design. Integration and verification flow up and to the right as successively higher levels of subsystems are verified, culminating at the system level. This summary chart follows the basic outline of the vee chart developed by NASA as part of the Software Management and Assurance Program. ("CIs" in the figure refer to the hardware and software configuration items, which are controlled by the configuration management system.)

Figure 7—Overview of the Technical Aspect of the NASA Project Cycle.

Decomposition and Definition. Although not shown in Figure 7, each box in the vee represents a number of parallel boxes suggesting that there may be many subsystems that make up the system at that level of decomposition. For the top left box, the various parallel boxes represent the alternative design concepts that are initially evaluated.

As product development progresses, a series of baselines is progressively established, each of which is put under formal configuration management at the time it is approved. Among the fundamental purposes of configuration management is to prevent requirements from "creeping."

The left side of the core of the vee is similar to the so-called "waterfall" or "requirements-driven design" model of the product development process. The control gates define significant decision points in the process. Work should not progress beyond a decision point until the project manager is ready to publish and control the documents containing the decisions that have been agreed upon at that point. However, there is no prohibition against doing detailed work early in the process. In fact, detailed hardware and/or software models may be required at the very earliest stages to clarify user needs or to establish credibility for the claim of feasibility. Early application of involved technical and support disciplines is an essential part of this process; this is in fact implementation of concurrent engineering.

At each level of the vee, systems engineering activities include off-core processes: system design, advanced technology development, trade studies, risk management, specialty engineering analysis and modeling. This is shown on the chart as an orthogonal process in Figure 7(b). These activities are performed at each level and may be repeated many times within a phase. While many kinds of studies and decisions are associated with the off-core activities, only decisions at the core level are put under configuration management at the various control gates. Off-core activities, analyses, and models are used to substantiate the core decisions and to ensure that the risks have been mitigated or
  • NASA Systems Engineering Handbook Page 18determined to be acceptable. The off-core work is not and performance. This is a common approach informally controlled, but the analyses, data and results software development.should be archived to facilitate replication at the The incremental development approach isappropriate times and levels of detail to support easy to describe in terms of the vee chart: allintroduction into the baseline. increments have a common heritage down to the first There can, and should, be sufficient iteration PDR. The balance of the product developmentdownward to establish feasibility and to identify and process has a series of displaced and overlappingquantify risks. Upward iteration with the requirements vees, one for each release.statements (and with the intermediate products aswell) is permitted, but should be kept to a minimum 3.7.2 The NASA Program/Project Life Cycleunless the user is still generating (or changing) Process Flowrequirements. In software projects, upwardconfirmation of solutions with the users is often Another idealized description of the technicalnecessary because user requirements cannot be activities that occur during the NASA project life cycleadequately defined at the inception of the project. is illustrated in Figure 8 (foldout, next page). In theEven for software projects, however, iteration with figure, the NASA project life cycle is partitioned intouser requirements should be stopped at the PDR, or ten process flow blocks, which are called stages incost and schedule are likely to get out of control. this handbook. The stages reflect the changing natureModification of user requirements after PDR should of the work that needs to be performed as the systembe held for the next model or release of the product. If matures. These stages are related both temporallysignificant changes to user requirements are made and logically. Successive stages mark increasingafter PDR, the project should be stopped and system refinement and maturity, and require therestarted with a new vee, reinitiating the entire products of previous stages as inputs. A transition toprocess. The repeat of the process may be quicker a new stage entails a major shift in the nature orbecause of the lessons learned the first time through, extent of technical activities. Control gates assess thebut all of the steps must be redone. wisdom of progressing from one stage to another. Time and project maturity flow from left to (See Section 4.8.3 for success criteria for specificright on the vee. Once a control gate is passed, reviews.) From the perspective of the systembackward iteration is not possible. Iteration with the engineer, who must oversee and monitor theuser requirements, for example, is possible only technical progress on the system, Figure 8 provides avertically, as is illustrated on the vee. more complete description of the actual work needed through the NASA project life cycle.Integration and Verification. Ascending the right In practice, the stages do not always occurside of the vee is the process of integration and sequentially. Unfolding events may invalidate orverification. At each level, there is a direct modify goals and assumptions. This may neccessitatecorrespondence between activities on the left and revisiting or modifying the results of a previous stage.right sides of the vee. This is deliberate. The method The end items comprising the system often haveof verification must be determined as the different development schedules and constraints. 
Thisrequirements are developed and documented at each is especially evident in Phases C and D where somelevel. This minimizes the chances that requirements subsystems may be in final design while others are inare specified in a way that cannot be measured or fabrication and integration.verified. The products of the technical activities Even at the highest levels, as user support the systems engineering effort (e.g.,requirements are translated into system requirements, requirements and specifications, trade studies,the system verification approach, which will prove that specialty engineering analyses, verification results),the system does what is required, must be and serve as inputs to the various control gates. For adetermined. The technical demands of the verification detailed systems engineering product database,process, represented as an orthagonal process in database dictionary, and maturity guidelines, seeFigure 7(c), can drive cost and schedule, and may in JSC-49040, NASA Systems Engineering Process forfact be a discriminator between alternative concepts. Programs and Projects.For example, if engineering models are to be used for Several topics suggested by Figures 7 and 8verification or validation, they must be specified and merit special emphasis. These are concurrentcosted, their characteristics must be defined, and their engineering, technology insertion, and the distinctiondevelopment time must be incorporated into the between verification and validation.schedule from the beginning. Concurrent Engineering. If the project passes earlyIncremental Development. If the user requirements control gates prematurely, it is likely to result in aare too vague to permit final definition at PDR, one need for significant iteration of requirements andapproach is to develop the project in predetermined designs late in the development process. One wayincremental releases. The first release is focused on this can happen is by failing to involve the appropriatemeeting a minimum set of user requirements, with technical experts at early stages, thereby resulting insubsequent releases providing added functionality the acceptance of requirements that cannot be met
  • NASA Systems Engineering Handbook Page 21and the selection of design concepts that cannot be approach that is not dependent on the development ofbuilt, tested, maintained, and/or operated. new technology must be carried unless high risk isConcurrent engineering is the simultaneous acceptable. The technology development activityconsideration of product and process downstream should be managed by the project manager andrequirements by multidisciplinary teams. Specialty system engineer as a critical activity.engineers from all disciplines (reliability,maintainability, human factors, safety, logistics, etc.) Verification vs. Validation. The distinction betweenwhose expertise will eventually be represented in the verification and validation is significant: verificationproduct have important contributions throughout the consists of proof of compliance with specifications,system life cycle. The system engineer is responsible and may be determined by test, analysis,for ensuring that these personnel are part of the demonstration, inspection, etc. (see Section 6.6).project team at each stage. In large projects, many Validation consists of proof that the systemintegrated product development teams (PDTs) may accomplishes (or, more weakly, can accomplish) itsbe required. Each of these, in turn, would be purpose. It is usually much more difficult (and muchrepresented on a PDT for the next higher level in the more important) to validate a system than to verify it.project. In small projects, however, a small team is Strictly speaking, validation can be accomplished onlyoften sufficient as long as the system engineer can at the system level, while verification must beaugment it as needed with experts in the required accomplished throughout the entire systemtechnical and business disciplines. architectural hierarchy. The informational requirements of doingconcurrent engineering are demanding. One wayconcurrent engineering experts believe it can bemade less burdensome is by an automatedenvironment. In such an environment, systemsengineering, design and analysis tools can easily Integrated Product Development Teams The detailed evaluation of product and process feasibility and the identification of significant uncertainties (system risks) must be done by experts from a variety of disciplines. An approach that has been found effective is to establish teams for the development of the product with representatives from all of the disciplines and processes that will eventually be involved. These integrated product development teams often have multidisciplinary (technical and business) members. Technical personnel are needed to ensure that issues such as producibility, verifiability, deployability, supportability, trainability, operability, and disposability are all considered in the design. In addition, business (e.g., procurement! representatives are added to the team as the need arises. Continuity of support from these specialty discipline organizations throughout the system life-cycle is highly desirable, though team composition and leadership can be expected to change as the system progresses from phase to phase. Figure 9—Typical NASA Budget Cycle.exchange data, computing environments areinteroperable, and product data are readily accessibleand accurate. For more on the characteristics of 3.8 Funding: The Budget Cycleautomated environments, see for example Carter andBaker, Concurrent Engineering, 1992. NASA operates with annual funding from Congress. 
This funding results, however, from a three-yearTechnology Insertion. Projects are sometimes rolling process of budget formulation, budgetinitiated with known technology shortfalls, or with enactment, and finally, budget execution. A highlyareas for which new technology will result in simplified representation of the typical budget cycle issubstantial product improvement. Technology shown in Figure 9.development can be done in parallel with the project NASA starts developing its budget eachevolution and inserted as late as the PDR. A parallel January with economic forecasts and general
guidelines being provided by the Executive Branch's Office of Management and Budget (OMB). In early May, NASA conducts its Program Operating Plan (POP) and Institutional Operating Plan (IOP) exercises in preparation for submittal of a preliminary NASA budget to the OMB. A final NASA budget is submitted to the OMB in September for incorporation into the President's budget transmittal to Congress, which generally occurs in January. This proposed budget is then subjected to Congressional review and approval, culminating in the passage of bills authorizing NASA to obligate funds in accordance with Congressional stipulations and appropriating those funds. The Congressional process generally lasts through the summer. In recent years, however, final bills have often been delayed past the start of the fiscal year on October 1. In those years, NASA has operated on continuing resolutions by Congress.

With annual funding, there is an implicit funding control gate at the beginning of every fiscal year. While these gates place planning requirements on the project and can make significant replanning necessary, they are not part of an orderly systems engineering process. Rather, they constitute one of the sources of uncertainty that affect project risks and should be considered in project planning.
  • NASA Systems Engineering Handbook Page 234 Management Issues in Systems work necessary to complete the project. Each task inEngineering the WBS should be traceable to one or more of the system requirements. Schedules, which are This chapter provides more specific structured as networks, describe the time-phasedinformation on the systems engineering products and activities that result in the product system in the WBS.approaches used in the project life cycle just The cost account structure needs to be directly linkeddescribed. These products and approaches are the to the work in the WBS and the schedules by whichsystem engineers contribution to project that work is done. (See Sections 4.3 through 4.5.)management, and are designed to foster structured The projects organization structureways of managing a complex set of activities. describes the clusters of personnel assigned to perform the work. These organizational structures are4.1 Harmony of Goals, Work Products, and usually trees. Sometimes they are represented as aOrganizations matrix of two interlaced trees, one for line responsibilities, the other for project responsibilities. In any case, the organizational structure should allow When applied to a system, the doctrine of identification of responsibility for each WBS task.successive refinement is a "divide-and-conquer" Project documentation is the product ofstrategy. Complex systems are successively divided particular WBS tasks. There are two fundamentalinto pieces that are less complex, until they are simple categories of project documentation: baselines andenough to be conquered. This decomposition results archives. Each category contains information aboutin several structures for describing the product system both the product system and the producing system.and the producing system ("the system that produces The baseline, once established, contains informationthe system"). These structures play important roles in describing the current state of the product system andsystems engineering and project management. Many producing system resulting from all decisions thatof the remaining sections in this chapter are devoted have been made. It is usually organized as ato describing some of these key structures. collection of hierarchical tree structures, and should Structures that describe the product system exhibit a significant amount of cross-reference linking.include, but are not limited to, the requirements tree, The archives contain all of the rest of the projectssystem architecture, and certain symbolic information information that is worth remembering, even if onlysuch as system drawings, schematics, and temporarily. The archives should contain alldatabases. The structures that describe the producing assumptions, data, and supporting analyses that aresystem include the projects work breakdown, relevant to past, present, and future decisions.schedules, cost accounts, and organization. These Inevitably, the structure (and control) of the archivesstructures provide different perspectives on their is much looser than that of the baseline, though crosscommon raison detre: the desired product system. references should be maintained where feasible. 
(SeeCreating a fundamental harmony among these Section 4.7.)structures is essential for successful systems The structure of reviews (and theirengineering and project management; this harmony associated control gates) reflect the time-phasedneeds to be established in some cases by one-to-one activities associated with the realization of the productcorrespondence between two structures, and in other system from its product breakdown. The statuscases, by traceable links across several structures. It reporting and assessment structure providesis useful, at this point, to give some illustrations of this information on the progress of those same activities.key principle. On the financial side, the status reporting and System requirements serve two purposes in assessment structure should be directly linked to thethe systems engineering process: first, they represent WBS, schedules, and cost accounts. On the technicala hierarchical description of the buyers desired side, it should be linked to the product breakdownproduct system as understood by the product and/or requirements tree. (See Sections 4.8 and 4.9.)development team (PDT). The interaction betweenthe buyer and system engineer to develop theserequirements is one way the "voice of the buyer" is 4.2 Managing the Systems Engineeringheard. Determining the right requirements— that is, Process:only those that the informed buyer is willing to pay The Systems Engineering Management Planfor—is an important part of the system engineers job. Systems engineering management is a technicalSecond, system requirements also communicate to function and discipline that ensures that systemsthe design engineers what to design and build (or engineering and all other technical functions arecode). As these requirements are allocated, they properly applied.become inexorably linked to the system architecture Each project should be managed inand product breakdown, which consists of the accordance with a project life cycle that is carefullyhierarchy of system, segments, elements, tailored to the projects risks. While the projectsubsystems, etc. (See the sidebar on system termi- manager concentrates on managing the overallnology on page 3.) project life cycle, the project-level or lead system The Work Breakdown Structure (WBS) is engineer concentrates on managing its technicalalso a tree-like structure that contains the pieces of aspect (see Figure 7 or 8). This requires that the
  • NASA Systems Engineering Handbook Page 24system engineer perform or cause to be performed Plan, depend on the SEMP and must comply with it.the necessary multiple layers of decomposition, The SEMP should be comprehensive and describedefinition? integration, verification and validation of how a fully integrated engineering effort will bethe system, while orchestrating and incorporating the managed and conducted.appropriate concurrent engineering. Each one ofthese systems engineering functions re-quires 4.2.2 Contents of the SEMPapplication of technical analysis skills and tech-niques. Since the SEMP describes the projects The techniques used in systems engineering technical management approach, which is driven bymanagement include work breakdown structures, the type of project, the phase in the project life cycle,network scheduling, risk management, requirements and the technical development risks. it must betraceability and reviews, baselines, configuration specifically written for each project to address thesemanagement, data management, specialty situations and issues. While the specific content of theengineering program planning, definition and SEMP is tailored to the project, the recommendedreadiness reviews, audits, design certification, and content is listed below.status reporting and assessment. The Project Plan defines how the project will Part I—Technical Project Planning and Control.be managed to achieve its goals and objectives within This section should identify organizationaldefined programmatic constraints. The Systems responsibilities and authority for systems engineeringEngineering Management Plan (SEMP) is the management, including control of contractedsubordinate document that defines to all project engineering; levels of control established forparticipants how the project will be technically performance and design requirements, and themanaged within the constraints established by the control method used; technical progress assuranceProject Plan. The SEMP communicates to all methods; plans and schedules for design andparticipants how they must respond to pre-established technical program/project reviews; and control ofmanagement practices. For instance, the SEMP documentation. This section should describe:should describe the means for both internal andexternal (to the project) interface control. The SEMP • The role of the project office The role of the useralso communicates how the systems engineering The role of the Contracting Office Technicalmanagement techniques noted above should be Representative (COTR)applied. • The role of systems engineering The role of design engineering4.2.1 Role of the SEMP • The role of specialty engineering • Applicable standards The SEMP is the rule book that describes to • Applicable procedures and trainingall participants how the project will be technically • Baseline control processmanaged. The responsible NASA field center should • Change control processhave a SEMP to describe how it will conduct its • Interface control processtechnical management, and each contractor should • Control of contracted (or subcontracted)have a SEMP to describe how it will manage in engineeringaccordance with both its contract and NASAstechnical management practices. 
Since the SEMP is • Data control process Make-or-buy controlproject- and contract-unique, it must be updated for processeach significant programmatic change or it will • Parts, materials, and process controlbecome outmoded and unused, and the project could • Quality controlslide into an uncontrolled state. The NASA field center • Safety controlshould have its SEMP developed before attempting to • Contamination controlprepare an initial cost estimate, since activities that • Electromagnetic interference andincur cost, such as tech-nical risk reduction, need to electromagnetic compatibility (EMI/EMC)be identified and described beforehand. The • Technical performance measurement processcontractor should have its SEMP developed during • Control gatesthe proposal process (prior to costing and pricing) • Internal technical reviewsbecause the SEMP describes the technical content of • Integration controlthe project, the potentially costly risk management • Verification controlactivities, and the verification and validation • Validation control.techniques to be used, all of which must be includedin the preparation of project cost estimates. Part II—Systems Engineering Process. This The project SEMP is the senior technical section should contain a detailed description of themanagement document for the project; all other process to be used, including the specific tailoring oftechnical control documents, such as the Interface the process to the requirements of the system andControl Plan, Change Control Plan. Make-or-Buy project; the procedures to be used in implementingControl Plan, Design Review Plan, Technical Audit the process; in-house documentation; the trade study
  • NASA Systems Engineering Handbook Page 25methodology; the types of mathematical and or aspect of the project life cycle, are developed. Thissimulation models to be used for system cost- becomes the keel of the project that ultimatelyeffectiveness evaluations; and the generation of determines the projects length and cost. Thespecifications. development of the programmatic and technicalThis section should describe the: management approaches requires that the key project personnel develop an understanding of the work to be• System decomposition process performed and the relationships among the various• System decomposition format parts of that work. (See Sections 4.3 and 4.4 on Work• System definition process Breakdown Structures and network schedules,• System analysis and design process respectively.) The SEMPs development requires• Requirements allocation process contributions from knowledgeable programmatic and• Trade study process technical experts from all areas of the project that can• System integration process significantly influence the projects outcome. The involvement of recognized experts is needed to• System verification process establish a SEMP that is credible to the project• System qualification process manager and to secure the full commitment of the• System acceptance process project team.• System validation process• Risk management process 4.2.4 Managing the Systems Engineering• Life-cycle cost management process Process: Summary• Specification and drawing structure• Configuration management process The systems engineering organization, and• Data management process specifically the project-level system engineer, is• Use of mathematical models responsible for managing the project through the• Use of simulations technical aspect of the project life cycle. This• Τools to be used. responsibility includes management of the decomposition and definition sequence, andPart III—Engineering Specialty Integration. This management of the integration, verification, andsec-tion of the SEMP should describe the integration validation sequence. Attendant with this managementand coordination of the efforts of the specialty is the requirement to control the technical baselines ofengineering disciplines into the systems engineering the project. Typically, these baselines are the:process during each iteration of that process. Where “functional," design to,” "build-to (or "code-to"), "as-there is potential for overlap of specialty efforts, the built" (or "as-coded"), and as-deployed." SystemsSEMP should define the relative responsibilities and engineering must ensure an efficient and logicalauthorities of each. This section should contain, as progression through these baselines.needed, the projects approach to: Systems engineering is responsible for system decomposition and design until the "design-to"• Concurrent engineering specifications of all lower-level configuration items• The activity phasing of specialty disciplines have been produced. Design engineering is then• The participation of specialty disciplines responsible for developing the build-to" and "code-to"• The involvement of specialty disciplines documentation that complies with the approved• The role and responsibility of specialty disciplines "design-to" baseline. 
Systems engineering audits the• The participation of specialty disciplines in SEMP Lessons Learned from DoD Experience system decomposition and definition • A well-managed project requires a coordinated• The role of specialty disciplines in verification and Systems Engineering Management Plan that is validation used through the project cycle.• Reliability • A SEMP is a living document that must be up-• Maintainability dated as the project changes and kept consistent• Quality assurance with the Project Plan. • A meaningful SEMP must• Integrated logistics be the product of ex-perts from all areas of the• Human engineering project.• Safety • Projects with little or insufficient systems engi-• Producibility neering discipline generally have major problems.• Survivability/vulnerability • Weak systems engineering, or systems engi-• Environmental assessment neering placed too low in the organization, cannot• Launch approval. perform the functions as required. • The systems engineering effort must be skillfully4.2.3 Development of the SEMP managed and well communicated to all projectThe SEMP must be developed concurrently with the participants.Project Plan. In developing the SEMP, the technical • The systems engineering effort must be respon-approach to the project, and hence the technical sive to both the customer and the contractor in- terests.
design and coding process and the design engineering solutions for compliance to all higher level baselines. In performing this responsibility, systems engineering must ensure and document requirements traceability.

Systems engineering is also responsible for the overall management of the integration, verification, and validation process. In this role, systems engineering conducts Test Readiness Reviews and ensures that only verified configuration items are integrated into the next higher assembly for further verification. Verification is continued to the system level, after which system validation is conducted to prove compliance with user requirements. Systems engineering also ensures that concurrent engineering is properly applied through the project life cycle by involving the required specialty engineering disciplines. The SEMP is the guiding document for these activities.

4.3 The Work Breakdown Structure

A Work Breakdown Structure (WBS) is a hierarchical breakdown of the work necessary to complete a project. The WBS should be a product-based, hierarchical division of deliverable items and associated services. As such, it should contain the project's Product Breakdown Structure (PBS), with the specified prime product(s) at the top, and the systems, segments, elements, subsystems, etc. at successively lower levels. At the lowest level are products such as hardware items, software items, and information items (documents, databases, etc.) for which there is a cognizant engineer or manager. Branch points in the hierarchy should show how the PBS elements are to be integrated. The WBS is built from the PBS by adding, at each branch point of the PBS, any necessary service elements such as management, systems engineering, integration and verification (I&V), and integrated logistics support (ILS). If several WBS elements require similar equipment or software, then a higher level WBS element might be defined to perform a block buy or a development activity (e.g., "System Support Equipment"). Figure 10 shows the relationship between a system, a PBS, and a WBS.

Figure 10—The Relationship Between a System, a Product Breakdown Structure, and a Work Breakdown Structure.

A project WBS should be carried down to the cost account level appropriate to the risks to be managed. The appropriate level of detail for a cost account is determined by management's desire to have visibility into costs, balanced against the cost of planning and reporting. Contractors may have a Contract WBS (CWBS), which is appropriate to the contractor's needs to control costs. A summary CWBS, consisting of the upper levels of the full CWBS, is usually included in the project WBS to report costs to the contracting organization.

WBS elements should be identified by title and by a numbering system that performs the following functions:

• Identifies the level of the WBS element
• Identifies the higher level element into which the WBS element will be integrated
• Shows the cost account number of the element.

A WBS should also have a companion WBS dictionary that contains each element's title, identification number, objective, description, and any dependencies (e.g., receivables) on other WBS elements. This dictionary provides a structured project description that is valuable for orienting project members and other interested parties. It fully describes the products and/or services expected from each WBS element. This section provides some techniques for developing a WBS, and points out some mistakes to avoid.
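The construction of a WBS from a PBS by adding service elements at each branch point can be sketched as follows; the numbering convention, service element set, and product names are assumptions of the example rather than a mandated scheme.

# Illustrative sketch only: derive WBS entries from a PBS by inserting service
# elements at each branch point, as described in the text above.

SERVICE_ELEMENTS = ["Management", "Systems Engineering",
                    "Integration and Verification", "Integrated Logistics Support"]

def pbs_to_wbs(pbs_node, prefix="1"):
    """Return WBS entries (number, title) for a PBS node and its children."""
    entries = [(prefix, pbs_node["name"])]
    children = pbs_node.get("children", [])
    if children:                                   # branch point: add services first
        offset = 1
        for service in SERVICE_ELEMENTS:
            entries.append((f"{prefix}.{offset}", f"{pbs_node['name']} {service}"))
            offset += 1
        for child in children:
            entries.extend(pbs_to_wbs(child, f"{prefix}.{offset}"))
            offset += 1
    return entries

pbs = {"name": "Airborne Telescope System", "children": [
    {"name": "Telescope Element"},
    {"name": "Aircraft Element"},
]}

for number, title in pbs_to_wbs(pbs):
    print(number, title)

The numbering produced this way also satisfies the first two functions listed above: the number of dot-separated fields gives the element's level, and the prefix identifies the higher-level element into which it will be integrated.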
  • NASA Systems Engineering Handbook Page 27Appendix B.2 provides an example of a WBS for an Developing a successful project WBS isairborne telescope that follows the principles of likely to require several iterations through the projectproduct-based WBS development. life cycle since it is not always obvious at the outset what the full extent of the work may be. Prior to4.3.1 Role of the WBS developing a preliminary WBS, there should be some development of the system architecture to the point A product-based WBS is the organizing where a preliminary PBS can be created. The PBSstructure for: and associated WBS can then be developed level by level from the top down. In this approach, a project-• Project and technical planning and scheduling level system engineer finalizes the PBS at the project• Cost estimation and budget formulation. (In level, and provides a draft PBS for the next lower particular, costs collected in a product-based level. The WBS is then derived by adding appropriate WBS can be compared to historical data. This is services such as management and systems identified as a primary objective by DoD engineering to that lower level. This process is standards for WBSs.) repeated recursively until a WBS exists down to the• Defining the scope of statements of work and desired cost account level. specifications for contract efforts An alternative approach is to define all levels• Project status reporting, including schedule, cost, of a complete PBS in one design activity, and then workforce, technical performance, and integrated develop the complete WBS. When this approach is cost/schedule data (such as Earned Value and taken, it is necessary to take great care to develop the estimated cost at completion) PBS so that all products are included, and all assembly/integration and verification branches are• Plans, such as the SEMP, and other correct. The involvement of people who will be documentation products, such as specifications responsible for the lower level WBS elements is and drawings. recommended. It provides a logical outline and vocabulary that A WBS for a Multiple Delivery Project. There aredescribes the entire project, and integrates several terms for projects that provide multipleinformation in a consistent way. If there is a schedule deliveries, such as: rapid development, rapidslip in one element of a WBS, an observer can prototyping, and incremental delivery. Such projectsdetermine which other WBS elements are most likely should also have a product-based WBS, but there willto be affected. Cost impacts are more accurately be one extra level in the WBS hierarchy, immediatelyestimated. If there is a design change in one element under the final prime product(s), which identifies eachof the WBS, an observer can determine which other delivery. At any one point in time there will be bothWBS elements will most likely be affected, and these active and inactive elements in the WBS.elements can be consulted for potential adverseimpacts. A WBS for an Operational Facility. A WBS for managing an operational facility such as a flight4.3.2 Techniques for Developing the WBS operations center is analogous to a WBS for developing a system. The difference is that the
  • NASA Systems Engineering Handbook Page 28products in the PBS are not necessarily completed Scheduling is an essential component ofonce and then integrated, but are produced on a planning and managing the activities of a project. Theroutine basis. A PBS for an operational facility might process of creating a network schedule can lead to aconsist largely of information products or service much better understanding of what needs to be done,products provided to external customers. However, how long it will take, and how each element of thethe general concept of a hierarchical breakdown of project WBS might affect other elements. A completeproducts and/or services would still apply. network schedule can be used to calculate how long it The rules that apply to a development WBS will take to complete a project, which activitiesalso apply to a WBS for an operational facility. The determine that duration (i.e., critical path activities),techniques for developing a WBS for an operational and how much spare time (i.e., float) exists for all thefacility are the same, except that services such as other activities of the project. (See sidebar on criticalmaintenance and user support are added to the PBS, path and float calculation). An understanding of theand services such as systems engineering, projects schedule is a prerequisite for accurateintegration, and verification may not be needed. project budgeting. Keeping track of schedule progress is an4.3.3 Common Errors in Developing a WBS essential part of controlling the project, because cost and technical problems often show up first asThere are three common errors found in WBSs: schedule problems. Because network schedules show how each activity affects other activities, they• ·Error 1: The WBS describes functions, not prod- are essential for predicting the consequences of ucts. This makes the project manager the only schedule slips or accelerations of an activity on the one formally responsible for products. entire project. Network scheduling systems also help• ·Error 2: The WBS has branch points that are not managers accurately assess the impact of both consistent with how the WBS elements will be technical and resource changes on the cost and integrated. For instance, in a flight operations schedule of a project. system with a distributed architecture, there is typically software associated with hardware items 4.4.2 Network Schedule Data and Graphical that will be integrated and verified at lower levels Formats of a WBS. It would then be inappropriate to separate hardware and software as if they were Network schedule data consist of: separate systems to be integrated at the system • Activities level. This would make it difficult to assign • Dependencies between activities (e.g., where an accountability for integration and to identify the activity depends upon another activity for a costs of integrating and testing components of a receivable) system. • Products or milestones that occur as a result of• ·Error 3: The WBS is inconsistent with the PBS. one or more activities This makes it possible that the PBS will not be • Duration of each activity. fully implemented, and generally complicates the management process. A work flow diagram (WFD) is a graphical display of Some examples of these errors are shown in the first three data items above. A network scheduleFigure 11. Each one prevents the WBS from contains all four data items. 
When creating a networksuccessfully performing its roles in project planning schedule, graphical formats of these data are veryand organizing. These errors are avoided by using the useful. Two general types of graphical formats, shownWBS development techniques described above. in Figure 12, are used. One has activities-on-arrows, with products and dependencies at the beginning and4.4 Scheduling end of the arrow. This is the typical format of the Program Evaluation and Review Technique (PERT) Products described in the WBS are the result chart. The second, called precedence diagrams, hasof activities that take time to complete. An orderly and boxes that represent activities; dependencies are thenefficient systems engineering process requires that shown by arrows. Due to its simpler visual format andthese activities take place in a way that respects the reduced requirements on computer resources, theunderlying time precedence relationships among precedence diagram has become more common inthem. This is accomplished by creating a network recent years.schedule, which explicitly take s into account the The precedence diagram format allows fordependencies of each activity on other activities and simple depiction of the following logical relationships:receivables from outside sources. This section • Activity B begins when Activity A begins (Start-discusses the role of scheduling and the techniques Start, or SS)for building a complete network schedule. • Activity B begins only after Activity A ends (Fin- ish- Start, or FS)4.4.1 Role of Scheduling • Activity B ends when Activity A ends (Finish- Finish, or FF).
Each of these three activity relationships may be modified by attaching a lag (+ or -) to the relationship, as shown in Figure 12.

Figure 12 -- Activity-on-Arrow and Precedence Diagrams for Network Schedules.

It is possible to summarize a number of low-level activities in a precedence diagram with a single activity. This is commonly referred to as hammocking. One takes the initial low-level activity, and attaches a summary activity to it using the first relationship described above. The summary activity is then attached to the final low-level activity using the third relationship described above. Unless one is hammocking, the most common relationship used in precedence diagrams is the second one mentioned above. The activity-on-arrow format can represent the identical time-precedence logic as a precedence diagram by creating artificial events and activities as needed.

Critical Path and Float Calculation

The critical path is the sequence of activities that will take the longest to accomplish. Activities that are not on the critical path have a certain amount of time that they can be delayed until they, too, are on a critical path. This time is called float. There are two types of float, path float and free float.

Path float is where a sequence of activities collectively have float. If there is a delay in an activity in this sequence, then the path float for all subsequent activities is reduced by that amount. Free float exists when a delay in an activity will have no effect on any other activity. For example, if activity A can be finished in 2 days, activity B requires 5 days, and activity C requires completion of both A and B, then A would have 3 days of free float.

Float is valuable. Path float should be conserved where possible, so that a reserve exists for future activities. Conservation is much less important for free float.

To determine the critical path, there is first a "forward pass" where the earliest start time of each activity is calculated. The time when the last activity can be completed becomes the end point for that schedule. Then there is a "backward pass", where the latest possible start point of each activity is calculated, assuming that the last activity ends at the end point previously calculated. Float is the time difference between the earliest start time and the latest start time of an activity. Whenever this is zero, that activity is on a critical path.
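The forward- and backward-pass arithmetic described in the sidebar can be sketched in a few lines. The network below is hypothetical (the activity names, durations, and one-day lag are invented), only finish-to-start relationships are modeled, and activities are assumed to be listed in precedence order; activities A, B, and C reproduce the sidebar's example, so A ends up with 3 days of float.

```python
from typing import Dict, List, Tuple

# A hypothetical work package: {activity: (duration_days, [(predecessor, lag_days), ...])}
# Only finish-to-start (FS) relationships are modeled; SS and FF links are omitted for brevity.
# Activities must appear after all of their predecessors (precedence order).
network: Dict[str, Tuple[int, List[Tuple[str, int]]]] = {
    "A": (2, []),
    "B": (5, []),
    "C": (4, [("A", 0), ("B", 0)]),   # C requires completion of both A and B
    "D": (3, [("C", 1)]),             # starts one day (lag) after C finishes
}

# Forward pass: earliest start (ES) and earliest finish (EF) of each activity
es, ef = {}, {}
for act, (dur, preds) in network.items():
    es[act] = max((ef[p] + lag for p, lag in preds), default=0)
    ef[act] = es[act] + dur

end_point = max(ef.values())          # completion time of the whole work package

# Backward pass: latest finish (LF) and latest start (LS), assuming the end point is fixed
lf = {act: end_point for act in network}
for act in reversed(list(network)):
    dur, preds = network[act]
    for p, lag in preds:
        lf[p] = min(lf[p], lf[act] - dur - lag)
ls = {act: lf[act] - network[act][0] for act in network}

for act in network:
    total_float = ls[act] - es[act]
    tag = "critical" if total_float == 0 else f"float {total_float}d"
    print(f"{act}: ES={es[act]}  LS={ls[act]}  ({tag})")
# With these numbers, B-C-D is the critical path and A has 3 days of float,
# matching the sidebar's example.
```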
4.4.3 Establishing a Network Schedule

Scheduling begins with project-level schedule objectives for delivering the products described in the upper levels of the WBS. To develop network schedules that are consistent with the project's objectives, the following six steps are applied to each cost account at the lowest available level of the WBS.

Step 1: Identify activities and dependencies needed to complete each WBS element. Enough activities should be identified to show exact schedule dependencies between activities and other WBS elements. It is not uncommon to have about 100 activities identified for the first year of a WBS element that will require 10 work-years per year. Typically, there is more schedule detail for the current year, and much less detail for subsequent years. Each year, schedules are updated with additional detail for the current year. This first step is most easily accomplished by:

• Ensuring that the cost account WBS is extended downward to describe all significant products, including documents, reports, hardware and software items
• For each product, listing the steps required for its generation and drawing the process as a work flow diagram
• Indicating the dependencies among the products, and any integration and verification steps within the work package.

Step 2: Identify and negotiate external dependencies. External dependencies are any receivables from outside of the cost account, and any deliverables that go outside of the cost account. Informal negotiations should occur to ensure that there is agreement with respect to the content, format, and labeling of products that move across cost account boundaries. This step is designed to ensure that lower level schedules can be integrated.

Step 3: Estimate durations of all activities. Assumptions behind these estimates (workforce, availability of facilities, etc.) should be written down for future reference.

Step 4: Enter the schedule data for the WBS element into a suitable computer program to obtain a network schedule and an estimate of the critical path for that element. (There are many commercially available software packages for this function.) This step enables the cognizant engineer, team leader, and/or system engineer to review the schedule logic. It is not unusual at this point for some iteration of
  • NASA Systems Engineering Handbook Page 30steps 1 to 4 to be required in order to obtain a Good scheduling systems providesatisfactory schedule. Often too, reserve will be capabilities to show resource requirements over time,added to critical path activities, often in the form of a and to make adjustments so that the schedule isdummy activity, to ensure that schedule commitments feasible with respect to resource constraints overcan be met for this WBS element. time. Resources may include workforce level, funding Step 5: Integrate schedules of lower level profiles, important facilities, etc. Figure 14 shows anWBS elements, using suitable software, so that all example of an unleveled resource profile. Thedependencies between WBS elements are correctly objective is to move the start dates of tasks that haveincluded in a project network. It is important to include float to points where the resource profile is feasible. Ifthe impacts of holidays, weekends, etc. by this point. that is not sufficient, then the assumed task durationsThe critical path for the project is discovered at this for resource-intensive activities should be reexaminedstep in the process. and, accordingly, the resource levels changed. Step 6: Review the workforce level andfunding profile over time, and make a final set of 4.5 Budgeting and Resource Planningadjustments to logic and durations so that workforcelevels and funding levels are reasonable. Adjustments Budgeting and resource planning involvesto the logic and the durations of activities may be the establishment of a reasonable project baselineneeded to converge to the schedule targets budget, and the capability to analyze changes to thatestablished at the project level. This may include baseline resulting from technical and/or scheduleadding more activities to some WBS element, deleting changes. The projects WBS, baseline schedule, andredundant activities, increasing the workforce for budget should be viewed by the system engineer assome activities that are on the critical path, or finding mutually dependent, reflecting the technical content,ways to do more activities in parallel, rather than in time, and cost of meeting the projects goals andseries. If necessary, the project level targets may objectives.need to be adjusted, or the scope of the project may The budgeting process needs to take intoneed to be reviewed. Again, it is good practice to account whether a fixed cost cap or cost profile exists.have some schedule reserve, or float, as part of a risk When no such cap or profile exists, a baseline budgetmitigation strategy. is developed from the WBS and network schedule. The product of these last steps is a feasible This specifically involves combining the projectsbaseline schedule for each WBS element that is workforce and other resource needs with theconsistent with the activities of all other WBS appropriate workforce rates and other financial andelements, and the sum of all these schedules is programmatic factors to obtain cost elementconsistent with both the technical scope and the estimates. These elements of cost include:schedule goals for the project. There should beenough float in this integrated master schedule sothat schedule and associated cost risk are acceptableto the project and to the projects customer. Evenwhen this is done, time estimates for many WBSelements will have been underestimated, or work onsome WBS elements will not start as early as hadbeen originally assumed due to late arrival ofreceivables. 
Consequently, replanning is almost always needed to meet the project's goals.

4.4.4 Reporting Techniques

Summary data about a schedule is usually described in Gantt charts. A good example of a Gantt chart is shown in Figure 13. (See sidebar on Gantt chart features.) Another type of output format is a table that shows the float and recent changes in float of key activities. For example, a project manager may wish to know precisely how much schedule reserve has been consumed by critical path activities, and whether reserves are being consumed or are being preserved in the latest reporting period. This table provides information on the rate of change of schedule reserve.

Figure 16 -- Characterizing Risks by Likelihood and Severity.

4.4.5 Resource Leveling
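The leveling idea in Section 4.4.5 (delay activities that have float until the workforce profile becomes feasible) can be illustrated with a deliberately simple sketch. The activities, staffing figures, workforce limit, and the greedy one-pass placement below are all assumptions made for illustration; production scheduling tools use far more capable algorithms.

```python
# Each activity: (earliest_start, duration_days, float_days, staff_required) -- all assumed
activities = {
    "A": (0, 3, 0, 4),   # on the critical path, cannot be moved
    "B": (0, 2, 5, 3),
    "C": (2, 3, 4, 3),
}
LIMIT = 6                # workforce available on any one day (assumed)
HORIZON = 12             # planning horizon in days

def add(profile, start, dur, staff):
    """Return a copy of the daily workforce profile with one activity added."""
    new = profile[:]
    for day in range(start, start + dur):
        new[day] += staff
    return new

# Greedy leveling: place each activity at the smallest shift within its float
# for which the running workforce profile stays within the limit.
profile = [0] * HORIZON
leveled = {}
for name, (start, dur, flt, staff) in activities.items():
    for shift in range(flt + 1):
        trial = add(profile, start + shift, dur, staff)
        if max(trial) <= LIMIT:
            profile, leveled[name] = trial, start + shift
            break
    else:
        leveled[name] = start            # no feasible shift: flag for replanning
        profile = add(profile, start, dur, staff)

print("start days:", leveled)            # here: {'A': 0, 'B': 3, 'C': 3}
print("daily workforce:", profile)
```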
  • NASA Systems Engineering Handbook Page 31 Desirable Features in Gantt ChartsThe Gantt chart shown in Figure 13 (below) illustrates the following desirable features:• A heading that describes the WBS element, the responsible manager, the date of the baseline used, and the date that status was reported.• A milestone section in the main body (lines 1 and 2)• An activity section in the main body. Activity data shown includes: a. WBS elements (lines 3, 5, 8, 12, 16, and 20) b. Activities (indented from WBS elements) c. Current plan (shown as thick bars) d. Baseline plan (same as current plan, or if different, represented by thin bars under the thick bars) e. Status line at the appropriate date f. Slack for each activity (dashed lines above the current plan bars) g. Schedule slips from the baseline (dashed lines below the milestone on line 12)• A note section, where the symbols in the main body can be explained.•This Gantt chart shows only 23 lines, which is a summary of the activities currently being worked for this WBSelement. It is appropriate to tailor the amount of detail reported to those items most pertinent at the time of statusreporting. Figure 13 – An Example of a Gantt Chart.
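For readers who want to generate a quick summary display of this kind, the fragment below draws a bare-bones Gantt chart with matplotlib. The activities and dates are invented, and only a few of the features listed in the sidebar (current-plan bars and a status line) are shown; it is a sketch, not a reporting tool.

```python
import matplotlib.pyplot as plt

# Assumed summary data: activity -> (start_day, duration_days)
plan = {"Define requirements": (0, 10),
        "Preliminary design":  (8, 15),
        "Build breadboard":    (20, 12)}
status_day = 18                                   # date of status reporting

fig, ax = plt.subplots(figsize=(7, 2.5))
for row, (name, (start, dur)) in enumerate(plan.items()):
    ax.broken_barh([(start, dur)], (row - 0.3, 0.6))   # thick bar = current plan
ax.axvline(status_day, linestyle="--")            # status line at the reporting date
ax.set_yticks(range(len(plan)))
ax.set_yticklabels(list(plan))
ax.set_xlabel("Working days from baseline start")
ax.invert_yaxis()                                 # first activity at the top, as in Figure 13
plt.tight_layout()
plt.show()
```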
• Direct labor costs
• Overhead costs
• Other direct costs (travel, data processing, etc.)
• Subcontract costs
• Material costs
• General and administrative costs
• Cost of money (i.e., interest payments, if applicable)
• Fee (if applicable)
• Contingency.

When there is a cost cap or a fixed cost profile, there are additional logic gates that must be satisfied before the system engineer can complete the budgeting and planning process. A determination needs to be made whether the WBS and network schedule are feasible with respect to mandated cost caps and/or cost profiles. If not, the system engineer needs to recommend the best approaches for either stretching out a project (usually at an increase in the total cost), or descoping the project's goals and objectives, requirements, design, and/or implementation approach. (See sidebar on schedule slippage.)

Whether a cost cap or fixed cost profile exists, it is important to control costs after they have been baselined. An important aspect of cost control is project cost and schedule status reporting and assessment, methods for which are discussed in Section 4.9.1 of this handbook. Another is cost and schedule risk planning, such as developing risk avoidance and work-around strategies. At the project level, budgeting and resource planning must also ensure that an adequate level of contingency funds are included to deal with unforeseen events. Some risk management methods are discussed in Section 4.6.

Assessing the Effect of Schedule Slippage

Certain elements of cost, called fixed costs, are mainly time related, while others, called variable costs, are mainly product related. If a project's schedule is slipped, then the fixed costs of completing it increase. The variable costs remain the same in total (excluding inflation adjustments), but are deferred downstream, as in the figure below. To quickly assess the effect of a simple schedule slippage:

• Convert baseline budget plan from nominal (real-year) dollars to constant dollars
• Divide baseline budget plan into fixed and variable costs
• Enter schedule slip
• Compute new variable costs including any work-free disruption costs
• Repeat last two steps until an acceptable implementation is achieved
• Compute new fixed costs
• Sum new fixed and variable costs
• Convert from constant dollars to nominal (real-year) dollars.
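The sidebar's quick-look procedure reduces to simple arithmetic. The sketch below is a hypothetical illustration only: it assumes a uniform inflation rate for the real-year/constant-dollar conversion, treats a fixed fraction of the baseline as time-related cost, and ignores work-free disruption costs and the year-by-year rephasing of the deferred variable costs.

```python
# Assumed baseline plan in real-year (nominal) $M per year
baseline_nominal = {2024: 40.0, 2025: 60.0, 2026: 30.0}
BASE_YEAR = 2024
INFLATION = 0.03        # assumed uniform annual inflation rate
FIXED_FRACTION = 0.35   # assumed share of cost that is time related (fixed)
SLIP_MONTHS = 6
BASELINE_MONTHS = 36    # assumed baseline project duration

def to_constant(year, nominal_value):
    """Convert real-year dollars to constant (base-year) dollars."""
    return nominal_value / (1 + INFLATION) ** (year - BASE_YEAR)

# Steps 1-2: convert to constant dollars and split into fixed and variable parts
constant_total = sum(to_constant(y, v) for y, v in baseline_nominal.items())
fixed_total = FIXED_FRACTION * constant_total
variable_total = constant_total - fixed_total

# Steps 3-6: enter the slip; variable costs are unchanged in total (merely deferred),
# while fixed (time-related) costs grow in proportion to the added duration
new_fixed_total = fixed_total * (BASELINE_MONTHS + SLIP_MONTHS) / BASELINE_MONTHS

# Step 7: sum and compare with the baseline, still in constant dollars
growth = (new_fixed_total + variable_total) - constant_total
print(f"A {SLIP_MONTHS}-month slip adds roughly ${growth:.1f}M (constant dollars)")
```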
4.6 Risk Management

Risk management comprises purposeful thought to the sources, magnitude, and mitigation of risk, and actions directed toward its balanced reduction. As such, risk management is an integral part of project management, and contributes directly to the objectives of systems engineering. NASA policy objectives with regard to project risks are expressed in NMI 8070.4A, Risk Management Policy. These are to:

• Provide a disciplined and documented approach to risk management throughout the project life cycle
• Support management decision making by providing integrated risk assessments (i.e., taking into account cost, schedule, performance, and safety concerns)
• Communicate to NASA management the significance of assessed risk levels and the decisions made with respect to them.

There are a number of actions the system engineer can take to effect these objectives. Principal among them is planning and completing a well-conceived risk management program. Such a program encompasses several related activities during the systems engineering process. The structure of these activities is shown in Figure 15.

Risk

The term risk has different meanings depending on the context. Sometimes it simply indicates the degree of variability in the outcome or result of a particular action. In the context of risk management during the systems engineering process, the term denotes a combination of both the likelihood of various outcomes and their distinct consequences. The focus, moreover, is generally on undesired or unfavorable outcomes such as the risk of a technical failure, or the risk of exceeding a cost target.

The first is planning the risk management program, which should be documented in a risk management program plan. That plan, which elaborates on the SEMP, contains:

• The project's overall risk policy and objectives
• The programmatic aspects of the risk management activities (i.e., responsibilities, resources, schedules and milestones, etc.)
• A description of the methodologies, processes, and tools to be used for risk identification and
  • NASA Systems Engineering Handbook Page 33 Figure 15 -- Risk Management Structure qualitative; hence in systems engineering literature, Diagram. this step is sometimes called qualitative risk assessment. The output of this step is a list of significant risks (by phase) to be given specific characterization, risk analysis, and risk mitigation management attention. and tracking In some projects, qualitative methods are• A description of the role of risk management with adequate for making risk management decisions; in respect to reliability analyses, formal reviews, others, these methods are not precise enough to and status reporting and assessment understand the magnitude of the problem, or to• Documentation requirements for each risk allocate scarce risk reduction resources. Risk analysis management product and action. is the process of quantifying both the likelihood of occurrence and consequences of potential future The level of risk management activities events (or "states of nature" in some texts). Theshould be consistent with the projects overall risk system engineer needs to decide whether riskpolicy established in conjunction with its NASA identification and characterization are adequate, orHeadquarters program office. At present, formal whether the increased precision of risk analysis isguidelines for the classification of projects with needed for some uncertainties. In making thatrespect to overall risk policy do not exist; such determination, the system engineer needs to balanceguidelines exist only for NASA payloads. These are the (usually) higher cost of risk analysis against thepromulgated in NMI 8010.1A, Classification of NASA value of the additional information.Pay-loads, Attachment A, which is reproduced asAppendix B.3. Risk mitigation is the formulation, selection, With the addition of data tables containing and execution of strategies designed to economicallythe results of the risk management activities, the risk reduce risk. When a specific risk is believed to bemanagement program plan grows into the projects intolerable, risk analysis and mitigation are oftenRisk Management Plan (RMP). These data tables performed iteratively, so that the effects of alternativeshould contain the projects identified significant risks. mitigation strategies can be actively explored beforeFor each such risk, these data tables should also one is chosen. Tracking the effectivity of thesecontain the relevant characterization and analysis strategies is closely allied with risk mitigation. Riskresults, and descriptions of the related mitigation andtracking plans (including any descope options and/orrequired technology developments). A sample RMPoutline is shown as Appendix B.4. The technical portion of risk managementbegins with the process of identifying andcharacterizing the projects risks. The objective of thisstep is to understand what uncertainties the projectfaces, and which among them should be givengreater attention. This is accomplished bycategorizing (in a consistent manner) uncertainties bytheir likelihood of occurrence (e.g., high, medium, orlow), and separately, according to the severity of theirconsequences. This categorization forms the basis forranking uncertainties by their relative riskiness.Uncertainties with both high likelihood and severelyadverse consequences are ranked higher than thosewithout these characteristics, as Figure 16 suggests.The primary methods used in this process are
  • NASA Systems Engineering Handbook Page 34 Management Guide wraps "funding, schedule, contract relations, and political risks" into the broader category of programmatic risks. While these terms are useful in informal discussions, there appears to be no formal taxonomy free of ambiguities. One reason, mentioned above, is that often one type of risk can be exchanged for another. A second reason is that some of these categories move together, as for example, cost risk and political risk (e.g., the risk of project cancellation). Another way some have categorized risk is by the degree of mathematical predictability in its underlying uncertainty. The distinction has been made between an uncertainty that has a known probabilitymitigation is often a challenge because efforts and distribution, with known or estimated parameters, andexpenditures to reduce one type of risk may increase one in which the underlying probability distribution isanother type. (Some have called this the systems either not known, or its parameters cannot beengineering equivalent of the Heisenberg Uncertainty objectively quantified.Principle in quantum mechanics.) The ability (or An example of the first kind of uncertaintynecessity) to trade one type of risk for another means occurs in the unpredictability of the spares upmassthat the project manager and the system engineer requirement for alternative Space Station Alphaneed to understand the system-wide effects of various designs. While the requirement is stochastic in anystrategies in order to make a rational allocation of particular logistics cycle, the probability distributionresources. can be estimated for each design from reliability Several techniques have been developed for theory and empirical data. Examples of the secondeach of these risk management activities. The kind of uncertainty occur in trying to predict whether aprincipal ones, which are shown in Table 1, are Shuttle accident will make resupply of Alphadiscussed in Sections 4.6.2 through 4.6.4. The impossible for a period of time greater than x months,system engineer needs to choose the techniques that or whether life on Mars exists. Modem subjectivistbest fit the unique requirements of each project. (also known as Bayesian) probability theory holds that A risk management program is needed the probability of an event is the degree of belief thatthroughout the project life cycle. In keeping with the a person has that it will occur, given his/her state ofdoctrine of successive refinement, its focus, however, information. As that information improves (e.g.,moves from the "big picture" in the early phases of the through the acquisition of data or experience), theproject life cycle (Phases A and B) to more specific subjectivists estimate of a probability shouldissues during design and development (Phases C and converge to that estimated as if the probabilityD). During operations (Phase E), the focus changes distribution were known. In the examples of theagain. A good risk management program is always previous paragraph, the only difference is theforward-looking. In other words, a risk management probability estimators perceived state of information.program should address the projects on-going risk Consequently, subjectivists find the distinctionissues and future uncertainties. As such, it is a natural between the two kinds of uncertainty of little or nopart of concurrent engineering. The RMP should be practical significance. The implication of theupdated throughout the project life cycle. 
subjectivists view for risk management is that, even with little or no data, the system engineers subjective4.6.1 Types of Risks probability estimates form a valid basis for risk decision making. There are several ways to describe thevarious types of risk a project manager/system 4.6.2 Risk Identification and Characterizationengineer faces. Traditionally, project managers and Techniquessystem engineers have attempted to divide risks intothree or four broad categories — namely, cost, A variety of techniques are available for riskschedule, technical, and, sometimes, safety (and/or identification and characterization. The thoroughnesshazard) risks. More recently, others have entered the with which this step is accomplished is an importantlexicon, including the categories of organizational, determinant of the risk management programsmanagement, acquisition, supportability, political, and success.programmatic risks. These newer categories reflectthe expanded set of concerns of project managers Expert Interviews. When properly conducted, expertand system engineers who must operate in the in-terviews can be a major source of insight andcurrent NASA environment. Some of these newer information on the projects risks in the experts areacategories also represent supersets of other of knowledge One key to a successful interview is incategories. For example, the Defense Systems identifying an ex pert who is close enough to a riskManagement College (DSMC) Systems Engineering issue to understand it thoroughly, and at the same
  • NASA Systems Engineering Handbook Page 35time, able (and willing) to step back and take an FMECAs, FMEAs, Digraphs, and Fault Trees.objective view of the probabilities and consequences. Failure Modes, Effects, and Criticality AnalysisA second key to success is advanced preparation on (FMECA), Failure Modes and Effects Analysisthe part of the interviewer. This means having a list of (FMEA), digraphs, and fault trees are specializedrisk issues to be covered in the interview, developing techniques for safety (and/or hazard) riska working knowledge of these issues as they apply to identification and characterization. These techniquesthe project, and developing methods for capturing the focus on the hardware components that make up theinformation acquired during the interview. system. According to MIL-STD-1629A, FMECA is "an Initial interviews may yield only qualitative ongoing procedure by which each potential failure in ainfor-mation, which should be verified in follow-up system is analyzed to determine the results or effectsrounds. Expert interviews are also used to solicit thereof on the system, and to classify each potentialquantitative data and information for those risk issues failure mode according to its severity." Failures arethat qualitatively rank high. These interviews are often generally classified into four seventy categories:the major source of inputs to risk analysis models builtusing the techniques described in Section 4.6.3. • Category I—Catastrophic failure (possible death or system loss)Independent Assessment. This technique can take • Category II—Critical failure (possible major in-several forms. In one form, it can be a review of jury or system damage)project documentation, such as Statements of Work, • Category III—Major failure (possible minor injuryacquisition plans, verification plans, manufacturing or mission effectiveness degradation)plans, and the SEMP. In another form, it can be an • Category IV — Minor failure (requires systemevaluation of the WBS for completeness and maintenance, but does not pose a hazard toconsistency with the projects schedules. In a third personnel or mission effectiveness).form, an independent assessment can be anindependent cost (and/or schedule) estimate from an A complete FMECA also includes anoutside organization. estimate of the probability of each potential failure. These prob-abilities are usually based, at first, onRisk Templates. This technique consists of subjective judgment or experience factors from similarexamining and then applying a series of previously kinds of hardware components, but may be refineddeveloped risk templates to a current project. Each from reliability data as the system developmenttemplate generally covers a particular risk issue, and progresses. An FMEA is similar to an FMECA, butthen describes methods for avoiding or reducing that typically there is less emphasis on the severityrisk. The most-widely recognized series of templates classification portion of the analysis. Digraph analysisappears in DoD 4245.7-M, Transition from is an aid in determining fault tolerance, propagation,Development to Production ...Solving the Risk and reliability in large, interconnected systems.Equation. Many of the risks and risk responses Digraphs exhibit a network structure and resemble adescribed are based on lessons reamed from DoD schematic diagram. The digraph technique permitsprograms, but are general enough to be useful to the integration of data from a number of individualNASA projects. 
As a general caution, risk templates FMECAs/FMEAs, and can be translated into faultcannot provide an exhaustive list of risk issues for trees, described in Section 6.2, if quantitativeevery project, but they are a useful input to risk probability estimates am needed.identification. 4.6.3 Risk Analysis TechniquesLessons Learned. A review of the lessons learnedfiles, data, and reports from previous similar projects The tools and techniques of risk analysis relycan produce insights and information for risk heavily on the concept and "laws" (actually, axiomsidentification on a new project. For technical risk and theorems) of probability. The system engineeridentification, as an example, it makes sense to needs to be familiar with these in order to appreciateexamine previous projects of similar function, the full power and limitations of these techniques. Thearchitecture, or technological approach. The lessons products of risk analyses are generally quantitativelearned from the Infrared Astronomical Satellite probability and consequence estimates for various(IRAS) project might be useful to the Space Infrared outcomes, more detailed understanding of theTelescope Facility (SIRTF) project, even though the dominant risks, and improved capability for allocatinglatters degree of complexity is significantly greater. risk reduction resources.The key to ap-plying this technique is in recognizingwhat aspects are analogous in two projects, and what Decision Analysis. Decision analysis is onedata are relevant to the new project. Even if the technique to help the individual decision maker dealdocumented lessons learned from previous projects with a complex set of uncertainties. Using the divide-are not applicable at the system level, there may be and-conquer approach common to much of systemsvaluable data applicable at the subsystem or engineering, a complex uncertainty is decomposedcomponent level. into simpler ones, which are then treated separately. The decomposition continues until it reaches a level
at which either hard information can be brought to bear, or intuition can function effectively. The decomposition can be graphically represented as a decision tree. The branch points, called nodes, in a decision tree represent either decision points or chance events. Endpoints of the tree are the potential outcomes. (See the sidebar on a decision tree example for Mars exploration.)

In most applications of decision analysis, these outcomes are generally assigned dollar values. From the probabilities assigned at each chance node and the dollar value of each outcome, the distribution of dollar values (i.e., consequences) can be derived for each set of decisions. Even large complex decision trees can be represented in currently available decision analysis software. This software can also calculate a variety of risk measures.

In brief, decision analysis is a technique that allows:

• A systematic enumeration of uncertainties and encoding of their probabilities and outcomes
• An explicit characterization of the decision maker's attitude toward risk, expressed in terms of his/her risk aversion
• A calculation of the value of "perfect information," thus setting a normative upper bound on information-gathering expenditures
• Sensitivity testing on probability estimates and outcome dollar values.
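A toy example shows the mechanics of rolling back a decision tree. Everything below is invented (two design options, one chance node each, notional probabilities and dollar consequences), and the roll-up uses expected monetary value only, without modeling the decision maker's risk aversion or the value of perfect information.

```python
# Roll back a small, hypothetical decision tree by expected monetary value (EMV).
# Each decision branch leads to a chance node: a list of (probability, cost_in_$M) outcomes.
options = {
    "Use existing design": [(0.9, 80.0), (0.1, 140.0)],   # 10% chance of costly rework
    "New development":     [(0.6, 70.0), (0.4, 150.0)],   # cheaper if it works, risk of overrun
}

def expected_cost(branches):
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9   # probabilities must sum to 1
    return sum(p * cost for p, cost in branches)

for name, branches in options.items():
    print(f"{name}: expected cost ${expected_cost(branches):.1f}M")

best = min(options, key=lambda n: expected_cost(options[n]))
print("Lowest expected cost:", best)
```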
Probabilistic Risk Assessment (PRA). A PRA seeks to measure the risk inherent in a system's design and operation by quantifying both the likelihood of various possible accident sequences and their consequences. A typical PRA application is to determine the risk associated with a specific nuclear power plant. Within NASA, PRAs are used to demonstrate, for example, the relative safety of launching spacecraft containing RTGs (Radioisotope Thermoelectric Generators).

The search for accident sequences is facilitated by event trees, which depict initiating events and combinations of system successes and failures, and fault trees, which depict ways in which the system failures represented in an event tree can occur. When integrated, an event tree and its associated fault tree(s) can be used to calculate the probability of each accident sequence. The structure and mathematics of these trees is similar to that for decision trees. The consequences of each accident sequence are generally measured both in terms of direct economic losses and in public health effects. (See sidebar on PRA pitfalls.)

Doing a PRA is itself a major effort, requiring a number of specialized skills other than those provided by reliability engineers and human factors engineers. PRAs also require large amounts of system design data at the component level, and operational procedures data. For additional information on PRAs, the system engineer can reference the PRA Procedures Guide (1983) by the American Nuclear Society and Institute of Electrical and Electronic Engineers (IEEE).

Probabilistic Risk Assessment Pitfalls

Risk is generally defined in a probabilistic risk assessment (PRA) as the expected value of a consequence function -- that is:

    R = Σs Ps Cs

where Ps is the probability of outcome s, and Cs is the consequence of outcome s. To attach probabilities to outcomes, event trees and fault trees are developed. These techniques have been used since 1953, but by the late 1970s, they were under attack by PRA practitioners. The reasons include the following:

• Fault trees are limiting because a complete set of failures is not definable.
• Common cause failures could not be captured properly. An example of a common cause failure is one where all the valves in a system have a defect so that their failures are not truly independent.
• PRA results are sometimes sensitive to simple changes in event tree assumptions.
• Stated criteria for accepting different kinds of risks are often inconsistent, and therefore not appropriate for allocating risk reduction resources.
• Many risk-related decisions are driven by perceptions, not necessarily objective risk as defined by the above equation. Perceptions of consequences tend to grow faster than the consequences themselves -- that is, several small accidents are not perceived as strongly as one large one, even if fatalities are identical.
• There are difficulties in dealing with incommensurables, as for example, lives vs. dollars.

Probabilistic Network Schedules. Probabilistic network schedules, such as PERT (Program Evaluation and Review Technique), permit the duration of each activity to be treated as a random variable. By supplying PERT with the minimum, maximum, and most likely duration for each activity, a probability distribution can be computed for project completion time. This can then be used to determine, for example, the chances that a project (or any set of tasks in the network) will be completed by a given date. In this probabilistic setting, however, a unique critical path may not exist. Some practitioners have also cited difficulties in obtaining meaningful input data for probabilistic network schedules. A simpler alternative to a full probabilistic network schedule is to perform a Monte Carlo simulation of activity durations along the project's critical path. (See Section 5.4.2.)
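The simpler alternative just mentioned can be sketched directly: sample a triangular (minimum, most likely, maximum) duration for each activity on an assumed critical path and count how often the total beats a target date. The activities, estimates, and target below are invented for illustration.

```python
import random

# Assumed critical-path activities: (minimum, most likely, maximum) durations in weeks
critical_path = {
    "Detailed design":      (10, 12, 18),
    "Fabrication":          (14, 16, 26),
    "Integration and test": (8, 10, 16),
}
TARGET_WEEKS = 42
TRIALS = 100_000

random.seed(1)
completions = []
for _ in range(TRIALS):
    total = sum(random.triangular(lo, hi, mode)   # note the argument order: low, high, mode
                for lo, mode, hi in critical_path.values())
    completions.append(total)

prob_on_time = sum(t <= TARGET_WEEKS for t in completions) / TRIALS
completions.sort()
print(f"P(finish within {TARGET_WEEKS} weeks) = {prob_on_time:.2f}")
print(f"Median completion: {completions[TRIALS // 2]:.1f} weeks; "
      f"80th percentile: {completions[int(0.8 * TRIALS)]:.1f} weeks")
```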
  • NASA Systems Engineering Handbook Page 37Probabilistic Cost and Effectiveness Models. further risk information gathering and assessments.)These models offer a probabilistic view of a projects Second, a risk can sometimes be shared with a co-cost and effectiveness outcomes. (Recall Figure 2.) participant—that is, with a international partner or aThis approach explicitly recognizes that single point contractor. In this situation, the goal is to reducevalues for these variables do not adequately NASAs risk independent of what happens to totalrepresent the risk conditions inherent in a project. risk, which may go up or down. There are many waysThese kinds of models are discussed more to share risks, particularly cost risks, with contractors.completely in Section 5.4. These include various incentive contracts and warranties. The third and fourth responses require4.6.4 Risk Mitigation and Tracking that additional specific planning and actions beTechniques undertaken. Typical technical risk mitigation actions Risk identification and characterization and include additional (and usually costly) testing ofrisk analysis provide a list of significant project risks subsystems and systems, designing in redundancy,that re-quire further management attention and/or and building a full engineering model. Typical cost riskaction. Because risk mitigation actions are generally mitigation actions include using off-the-shelf hardwarenot costless, the system engineer, in making and, according to Figure 6, providing sufficientrecommendations to the project manager, must funding during Phases A and B. Major supportabilitybalance the cost (in resources and time) of such risk mitigation actions include providing sufficientactions against their value to the project. Four initial spares to meet the systems availability goal andresponses to a specific risk are usually available: (1) a robust resupply capability (when transportation is adeliberately do nothing, and accept the risk, (2) share significant factor). For those risks that cannot bethe risk with a co-participant, (3) take preventive mitigated by a design or management approach, theaction to avoid or reduce the risk, and (4) plan for system engineer should recommend thecontingent action. establishment of reasonable financial and schedule The first response is to accept a specific risk contingencies, and technical margins.consciously. (This response can be accompanied by Whatever strategy is selected for a specific risk, it and its underlying rationale should be documented in a
  • NASA Systems Engineering Handbook Page 38risk mitigation plan, and its effectivity should be combined effects of likelihood and consequencestracked through the project life cycle, as required by are significant, should be given specificNMI 8070.4A. The techniques for choosing a management attention. Reviews conducted(preferred) risk mitigation strategy are discussed in throughout in the project life cycle should help toChapter 5, which deals with the larger role of trade force out risk issues.studies and system modeling in general. Some • Apply qualitative and quantitative techniques totechniques for planning and tracking are briefly understand the dominant risks and to improve thementioned here. allocation of risk reduction resources; this may include the development of project-specific riskWatchlists and Milestones. A watchlist is a analysis models such as decision trees andcompilation of specific risks, their projected PRAs.consequences, and early indicators of the start of the • Formulate and execute a strategy to handle eachproblem. The risks on the watchlist are those that risk, including establishment, where appropriate,were selected for management attention as a result of of reasonable financial and schedulecompleted risk management activities. A typical contingencies and technical marginswatchlist also shows for each specific risk a triggering • Track the effectivity of each risk mitigation strat-event or missed milestone (for example, a delay in the egy.delivery of long lead items), the related area of impact(production schedule), and the risk mitigation Good risk management requires a team effort -strategy, to be used in response. The watchlist is that is, system engineers and managers at all levelsperiodically reevaluated and items are added, of the project need to be involved. However, riskmodified, or deleted as appropriate. Should the management responsibilities must be assigned totriggering event occur, the projected consequences specific individuals. Successful risk managementshould be updated and the risk mitigation strategy practices often evolve into in stitutional policy.revised as needed. 4.7 Configuration ManagementContingency Planning, Descope Planning, andParallel Development. These techniques are Configuration management is the disciplinegenerally used in conjunction with a watchlist. The of identifying and formalizing the functional andfocus is on developing credible hedges and work- physical characteristics of a configuration item atarounds, which are activated upon a triggering event. discrete points in the product evolution for theTo be credible, hedges often require that additional purpose of maintaining the integrity of the productresources be expended, which provide a return only if system and controlling changes to the baseline. Thethe triggering event occurs. In this sense, these baseline for a project contains all of the technicaltechniques and resources act as a form of project requirements and related cost and scheduleinsurance. (The term contingency here should not be requirements that are sufficiently mature to beconfused with the use within NASA of the same term accepted and placed under change control by thefor project-held reserves.) NASA project manager. The project baseline consists of two parts: the technical baseline and the businessCritical Items/Issues Lists. A Critical Items/Issues baseline. 
The system engineer is responsible forList (CIL) is similar to a watchlist, and has been managing the technical baseline and ensuring that itextensively used on the Shuttle program to track is consistent with the costs and schedules in theitems with significant system safety consequences. business baseline. Typically, the project control officeAn example is shown as Appendix B.5. manages the business baseline. Configuration management requires the formal agreement of bothC/SCS and TPM Tracking. Two very important risk the buyer and the seller to proceed according to thetracking techniques—cost and schedule control up-to-date, documented project requirements (as theysystems (C/SCS) and Technical Performance exist at that phase in the project life cycle), and toMeasure (TPM) tracking—are discussed in Sections change the baseline requirements only by a formal4.9.1 and 4.9.2, respectively. configuration control process. The buyer might be a NASA pro gram office or an external funding agency.4.6.5 Risk Management: Summary For example, the buyer for the GOES project is NOAA, and the seller is the NASA GOES projectUncertainty is a fact of life in systems engineering. To office. management must be enforced at all levels; indeal with it effectively, the risk manager needs a the next level for this same example, the NASAdisciplined approach. In a project setting, a good- GOES project office is the buyer and the seller is thepractice approach includes efforts to: contractor, the Loral GOES project office. Configuration management is established through• Plan, document, and complete a risk program/project requirements documentation and, management program where applicable, through the contract Statement of• Identify and characterize risks for each phase of Work. the project; high risks, those for which the
  • NASA Systems Engineering Handbook Page 39 Configuration management is essential to sound. These early detailed studies and tests must beconduct an orderly development process, to enable documented and retained in the project archives, butthe modification of an existing design, and to provide they are not part of the technical baseline.for later replication of an existing design.Configuration management often provides the 4.7.2 Techniques of Configurationinformation needed to track the technical progress of Managementthe project since it manages the projectsconfiguration documentation. (See Section 4.9.2 on The techniques of configuration management includeTechnical Performance Measures.) The projects configuration (or baseline) identification, configurationapproach to configuration management and the control, configuration verification, and configurationmethods to be used should be documented in the accounting (see Figure 18).projects Configuration Management Plan. A sampleoutline for this plan is illustrated in Appendix B.6. Theplan should be tailored to each projects specificneeds and resources, and kept current for the entireproject life cycle.4.7.1 Baseline Evolution The project-level system engineer isresponsible for ensuring the completeness andtechnical integrity of the technical baseline. Thetechnical baseline includes:• Functional and performance requirements (or specifications) for hardware, software, information items, and processes• Interface requirements• Specialty engineering requirements •• Verification requirements• Data packages, documentation, and drawing trees Applicable engineering standards. The project baseline evolves in discrete steps through the project life cycle. An initial baseline may be established when the top-level user requirements expressed in the Mission Needs Statement are placed under configuration control. At each interphase control gate, increased technical detail is added to the maturing baseline. For a typical project, there are five sequential technical baselines:• Functional baseline at System Requirements Configuration Identification. Configuration Review (SRR) identification of a baseline is accomplished by• "Design-to" baseline at Preliminary Design creating and formally releasing documentation that Review (PDR) describes the baseline to be used, and how changes• "Build-to" (or "code-to") baseline at the Critical to that baseline will be accounted for, controlled, and Design Review (CDR) • "As-built" (or as-coded,,) released. Such documentation includes requirements baseline at the System Acceptance Review (product, process, and material), specifications, (SAR) drawings, and code listings. Configuration• "As-deployed" baseline at Operational Readiness documentation is not formally considered part of the Review (ORR). technical baseline until approved by control gate action of the buyer. The evolution of the five baselines is An important part of configurationillustrated in Figure 17. As discussed in Section 3.7.1, identification is the physical identification of individualonly decisions made along the core of the "vee" in configuration items using part numbers, serialFigure 7 are put under configuration control and numbers, lot numbers, version numbers, documentincluded in the approved baseline. Systems analysis, control numbers, etc.risk management, and development test activities (offthe core of the vee) must begin early and continue Configuration Control. 
… throughout the decomposition process of the project life cycle to prove that the core-level decisions are …

Configuration control is the process of controlling changes to any approved baseline by formal action of a configuration control board (CCB). This area of configuration management is usually the most visible to the system engineer. In large programs/projects, configuration control is accomplished by a hierarchy of configuration control boards, reflecting multiple levels of control. Each configuration control board has its own areas of control and responsibilities, which are specified in the Configuration Management Plan.

Typically, a configuration control board meets to consider change requests to the business or technical baseline of the program/project. The program/project manager is usually the board chair, who is the sole decision maker. The configuration manager acts as the board secretary, who skillfully guides the process and records the official events of the process. In a configuration control board forum, a number of issues should be addressed:

• What is the proposed change?
• What is the reason for the change?
• What is the design impact?
• What is the effectiveness or performance impact?
• What is the schedule impact?
• What is the risk of making the change?
• What is the impact on operations?
• What is the impact to support equipment and services?
• What is the impact on spares requirements?
• What is the effectivity of the change?
• What documentation is affected by the change?
• Is the buyer supportive of the change?

A review of this information should lead to a well-informed decision. When this information is not available to the configuration control board, unfounded decisions are made, often with negative consequences to the program or project.

Once a baseline is placed under configuration control, any change requires the approval of the configuration control board. The project manager chairs the configuration control board, while the system engineer or configuration manager is responsible for reviewing all material for completeness before it is presented to the board, and for ensuring that all affected organizations are represented in the configuration control board forum. The system engineer should also ensure that the active approved baseline is communicated in a timely manner to all those relying on it. This communication keeps project teams apprised as to the distinction between what is frozen under formal change control and what can still be decided without configuration control board approval.

Configuration control is essential at both the contractor and NASA field center levels. Changes determined to be Class 1 by the contractor must be referred to the NASA project manager for resolution. This process is described in Figure 19. The use of a preliminary Engineering Change Proposal (ECP) to forewarn of an impending change provides the project manager with sufficient preliminary information to determine whether the contractor should spend NASA contract funds on a formal ECP. This technique is designed to save significant contract dollars.

Class 1 changes affect the approved baseline and hence the product version identification. Class 2 changes are editorial changes or internal changes not "visible" to the external interfaces. Class 2 changes are dispositioned by the contractor's CCB and do not require the NASA project manager's approval.

Configuration Control Board Conduct

Objective: To review evaluations, and then approve or disapprove proposed changes to the project's technical or business baselines.
Participants: Project manager (chair), project-level system engineer, managers of each affected organization, configuration manager (secretary), presenters.
Format: Presenter covers the recommended change and discusses related system impact. The presentation is reviewed by the system engineer for completeness prior to presentation.
Decision: The CCB members discuss the Change Request (CR) and formulate a decision. The project manager agrees or overrides. The secretary prepares a CCB directive, which records and directs the CR's disposition.

Overly formalized systems can become so burdensome that members of the project team may try to circumvent the process. It is essential that the formality of the change process be appropriately tailored to the needs of each project. However, there must always be effective configuration control on every project.

For software projects, it is routine to use version control for both pre-release and post-release deliverable systems. It is equally important to maintain version control for hardware-only systems.

Approved changes on a development project that has only one deliverable obviously are applicable only to that one deliverable item. However, for projects that have multiple deliverables of "identical" design, changes may become effective on the second or subsequent production articles. In such a situation, the configuration control board must decide the effectivity of the change, and the configuration control system must maintain version control and identification of the "as-built" configuration for each article. Incremental implementation of changes is common in projects that have a deliberate policy of introducing product or process improvements. As an example, the original 1972 plan held that each of the Space Shuttle orbiters would be identical. In reality, each of the orbiters is different, driven primarily by the desire to achieve the original payload requirement of 65,000 pounds. Proper version control documentation has been essential to the sparing, fielding, and maintenance of the operational fleet.

Configuration Verification. Configuration verification is the process of verifying that resulting products (e.g., hardware and software items) conform to the intentions of the designers and to the standards established by preceding approved baselines, and that baseline documentation is current and accurate. Configuration verification is accomplished by two types of control gate activity: audits and technical reviews. (See Section 4.8.4 for additional information on two important examples: the Physical Configuration Audit and the Design Certification Review.) Each of these serves to review and challenge the data presented for conformance to the previously approved baseline.

Configuration Accounting. Configuration accounting (sometimes called configuration status accounting) is the task of maintaining, correlating, releasing, reporting, and storing configuration data. Essentially a data management function, configuration accounting ensures that official baseline data is retained, available, and distribution-controlled for project use. It also performs the important function of tracking the status of each change from inception through implementation. A project's change status system should be capable of identifying each change by its unique change identification number (e.g., ECRs, CRs, RIDs, waivers, deviations, modification kits) and reporting its current status.
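To make the bookkeeping concrete, a change status system of the kind described above can be pictured as a small ledger keyed by change identification number. The sketch below is illustrative only; the record fields, status values, and class coding are assumptions made for the example, not a format prescribed by this handbook.

```python
from dataclasses import dataclass

# Hypothetical status values; a real project would define its own in the
# Configuration Management Plan.
OPEN, APPROVED, IMPLEMENTED, WITHDRAWN = "open", "approved", "implemented", "withdrawn"

@dataclass
class ChangeRecord:
    change_id: str          # unique identifier, e.g., an ECR or CR number
    change_class: int       # 1 = affects an approved baseline, 2 = editorial/internal
    description: str
    affected_baseline: str  # which approved baseline the change touches
    status: str = OPEN

class ChangeStatusSystem:
    """Minimal configuration status accounting ledger (illustrative sketch)."""

    def __init__(self):
        self._records = {}  # change_id -> ChangeRecord

    def register(self, record: ChangeRecord) -> None:
        self._records[record.change_id] = record

    def set_status(self, change_id: str, status: str) -> None:
        self._records[change_id].status = status

    def report(self, change_id: str) -> ChangeRecord:
        # Report the current status of a single change by its identifier.
        return self._records[change_id]

    def open_changes(self):
        # Changes still awaiting disposition or implementation.
        return [r for r in self._records.values() if r.status in (OPEN, APPROVED)]
```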
The Role of the Configuration Manager. The configuration manager is responsible for the application of these techniques. In doing so, the configuration manager performs the following functions:

• Conceives and manages the configuration management system, and documents it in the Configuration Management Plan
• Acts as secretary of the configuration control board (controls the change approval process)
• Controls changes to baseline documentation
• Controls release of baseline documentation
• Initiates configuration verification audits.

4.7.3 Data Management

For any project, proper data management is essential for successful configuration management. Before a project team can produce a tangible product, it must produce descriptions of the system using words, drawings, schematics, and numbers (i.e., symbolic information). There are several vital characteristics the symbolic information must have. First, the information must be shareable. Whether it is in electronic or paper form, the data must be readily available, in the most recently approved version, to all members of the project team.

Second, symbolic information must be durable. This means that it must be recalled accurately every time and represent the most current version of the baseline. The baseline information cannot change or degrade with repeated access of the database or paper files, and cannot degrade with time. This is a non-trivial statement, since poor data management practices (e.g., allowing someone to borrow the only copy of a document or drawing) can allow controlled information to become lost. Also, the material must be retained for the life of the program/project (and possibly beyond), and a complete set of documentation for each baseline change must be retained.

Third, the symbolic information must be traceable upward and downward. A database must be developed and maintained to show the parentage of any requirement. The database must also be able to display all children derived from a given requirement. Finally, traceability must be provided to reports that document trade study results and other decisions that played a key role in the flowdown of requirements. The data management function therefore encompasses managing and archiving supporting analyses and trade study data, and keeping them convenient for configuration management and general project use.
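The upward and downward traceability described above amounts to maintaining a parent/child relationship for every requirement, plus links to the trade study or decision reports behind it. The sketch below is a minimal illustration under that assumption; the class name, identifiers, and methods are hypothetical rather than a prescribed database design.

```python
from collections import defaultdict

class RequirementsTraceabilityDB:
    """Illustrative parent/child trace structure for requirements flowdown."""

    def __init__(self):
        self._parents = {}                    # child requirement id -> parent requirement id
        self._children = defaultdict(list)    # parent requirement id -> child requirement ids
        self._supporting = defaultdict(list)  # requirement id -> trade study / decision reports

    def add_flowdown(self, parent_id: str, child_id: str) -> None:
        self._parents[child_id] = parent_id
        self._children[parent_id].append(child_id)

    def link_report(self, req_id: str, report: str) -> None:
        # Trace a requirement to the trade study or decision report behind it.
        self._supporting[req_id].append(report)

    def parentage(self, req_id: str) -> list:
        # Walk upward to show the parentage of a requirement.
        chain = []
        while req_id in self._parents:
            req_id = self._parents[req_id]
            chain.append(req_id)
        return chain

    def derived_children(self, req_id: str) -> list:
        # Walk downward to display all children derived from a requirement.
        result, stack = [], list(self._children[req_id])
        while stack:
            child = stack.pop()
            result.append(child)
            stack.extend(self._children[child])
        return result
```

For example, parentage("SS-1.2.3") would return the chain of higher-level requirements from which the hypothetical requirement SS-1.2.3 was flowed down, and derived_children("MIS-4") would list everything derived from the hypothetical mission requirement MIS-4.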
4.8 Reviews, Audits, and Control Gates

The intent and policy for reviews, audits, and control gates should be developed during Phase A and defined in the Program/Project Plan. The specific implementation of these activities should be consistent with the types of reviews and audits described in this section, and with the NASA Program/Project Life Cycle chart (see Figure 5) and the NASA Program/Project Life Cycle Process Flow chart (see Figure 8). However, the timing of reviews, audits, and control gates should be tailored to each specific project.

4.8.1 Purpose and Definitions

The purpose of a review is to furnish the forum and process to provide NASA management and their contractors assurance that the most satisfactory approach, plan, or design has been selected, that a configuration item has been produced to meet the specified requirements, or that a configuration item is ready. Reviews (technical or management) are scheduled to communicate an approach, demonstrate an ability to meet requirements, or establish status. Reviews help to develop a better understanding among task or project participants, open communication channels, alert participants and management to problems, and open avenues for solutions.

The purpose of an audit is to provide NASA management and its contractors a thorough examination of adherence to program/project policies, plans, requirements, and specifications. Audits are the systematic examination of tangible evidence to determine adequacy, validity, and effectiveness of the activity or documentation under review. An audit may examine documentation of policies and procedures, as well as verify adherence to them.

The purpose of a control gate is to provide a scheduled event (either a review or an audit) that NASA management will use to make program or project go/no-go decisions. A control gate is a management event in the project life cycle that is of sufficient importance to be identified, defined, and included in the project schedule. It requires formal examination to evaluate project status and to obtain approval to proceed to the next management event according to the Program/Project Plan.

Project Termination

It should be noted that project termination, while usually disappointing to project personnel, may be a proper reaction to changes in external conditions or to an improved understanding of the system's projected cost-effectiveness.

4.8.2 General Principles for Reviews

Review Boards. The convening authority, which supervises the manager of the activity being reviewed, normally appoints the review board chair. Unless there are compelling technical reasons to the contrary, the chair should not be directly associated with the project or task under review. The convening authority also names the review board members. The majority of the members should not be directly associated with the program or project under review.

Internal Reviews. During the course of a project or task, it is necessary to conduct internal reviews that present technical approaches, trade studies, analyses, and problem areas to a peer group for evaluation and comment. The timing, participants, and content of these reviews are normally defined by the project manager or the manager of the performing organization. Internal reviews are also held prior to participation in a formal control gate review.

Internal reviews provide an excellent means for controlling the technical progress of the project. They also should be used to ensure that all interested parties are involved in the design and development early on and throughout the process. Thus, representatives from areas such as manufacturing
  • NASA Systems Engineering Handbook Page 43and quality assurance should attend the internal chair submits, on a timely basis, a written report,reviews as active participants. They can then, for including recommendations for action, to theexample, ensure that the design is producible and convening authority with copies to the cognizantthat quality is managed through the project life cycle. managers. In addition, some organizations utilize a RedTeam. This is an internal, independent, peer-level Standing Review Boards. Standing review boardsreview conducted to identify any deficiencies in are selected for projects or tasks that have a highrequests for proposals, proposal responses, level of activity, visibility, and/or resourcedocumentation, or presentation material prior to its requirements. Selection of board members by therelease. The project or task manager is responsible convening authority is generally made from seniorfor establishing the Red Team membership and for field center technical and management staff.deciding which of their recommendations are to be Supporting members or advisors may be added to theimplemented. board as required by circumstances. If the review board is to function over the life of a project, it isReview Presentation Material. Presentations using advisable to select extra board members and rotateexisting documentation such as specifications, active assignments to cover needs.drawings, analyses, and reports may be adequate.Copies of any prepared materials (such as 4.8.3 Major Control Gatesviewgraphs) should be provided to the review boardand meeting attendees. Background information and This section describes the purpose, timing,review presentation material of use to board members objectives, success criteria, and results of the majorshould be distributed to the members early enough to control gates in the NASA project life cycle. Thisenable them to examine it prior to the review. For information is intended to provide guidance to projectmajor reviews, this time may be as long as 30 managers and system engineers, and to illustrate thecalendar days. progressive maturation of review activities and systems engineering products. The checklistsReview Conduct. All reviews should consist of oral provided below aid in the preparation of specificpresentations of the applicable project requirements review entry and exit criteria, but do not take theirand the approaches, plans, or designs that satisfy place. To minimize extra work, review material shouldthose requirements. These presentations normally are be keyed to project documentation.given by the cognizant design engineer or his/herimmediate supervisor. Mission Concept Review. It is highly recommended that in addition to Purpose—The Mission Concept Reviewthe review board, the review audience include project (MCR) affirms the mission need, and examines thepersonnel (NASA and contractor) not directly proposed missions objectives and the concept forassociated with the design being reviewed. This is meeting those objectives. It is an internal review thatrequired to utilize their cross-discipline expertise to usually occurs at the cognizant NASA field center.identify any design shortfalls or recommend design Timing—Near the completion of a missionimprovements. 
The review audience should also feasibility study.include non-project specialists in the area under Objectives—The objectives of the reviewreview, and specialists in production/fabrication, are to:testing, quality assurance, reliability, and safety.Some reviews may also require the presence of both • Demonstrate that mission objectives arethe contractors and NASAs contracting officers. complete and understandable Prior to and during the review, board • Confirm that the mission concepts demonstratemembers and review attendees may submit requests technical and programmatic feasibility of meetingfor action or engineering change requests (ECRs) that the mission objectivesdocument a concern, deficiency, or recommended • Confirm that the customers mission need is clearimprovement in the presented approach, plan, or and achievabledesign. Following the review, these are screened by • Ensure that prioritized evaluation criteria arethe review board to consolidate them, and to ensure provided for subsequent mission analysis.that the chair and cognizant manager(s) understandthe intent of the requests. It is the responsibility of the Criteria for Successful Completion —Thereview board to ensure that adequate closure following items compose a checklist to aid inresponses for each of the action requests are determining readiness of MCR product preparation:obtained. • Are the mission objectives clearly defined andPost Review Report. The review board chair has the stated? Are they unambiguous and internallyresponsibility to develop, where necessary, a consistent?consensus of the findings of the board, including an • Will satisfaction of the preliminary set ofassessment of the risks associated with problem requirements provide a system which will meetareas, and develop recommendations for action. The mission objectives?
  • NASA Systems Engineering Handbook Page 44• Is the mission feasible? Has there been a the remaining trades been identified to select the solution identified which is technically feasible? Is final system design? the rough cost estimate within an acceptable cost • Are the upper levels of the system PBS range? completely defined?• Have the concept evaluation criteria to be used in • Are the decisions made as a result of the trades candidate system evaluation been identified and consistent with the evaluation criteria established prioritized? at the MCR?• Has the need for the mission been clearly identi- • Has an optimal final design converged to a few fied? alternatives?• Are the cost and schedule estimates credible? • Have technology risks been identified and have• Was a technology search done to identify existing mitigation plans been developed? assets or products that could satisfy the mission or parts of the mission? Results of Review—A successful MDR supports the decision to further develop the system Results of Review—A successful MCR supports architecture/design and any technology needed tothe determination that the proposed mission meets accomplish the mission. The results reinforce thethe customer need, and has sufficient quality and missions merit and provide a basis for the systemmerit to support a field center management decision acquisition strategy.to propose further study to the cognizant NASAProgram Associate Administrator (PAA) as a System Definition Review.candidate Phase A effort. Purpose—The System Definition Review (SDR) examines the proposed systemMission Definition Review. architecture/design and the flowdown to all functional Purpose—The Mission Definition Review (MDR) elements of the system.examines the functional and performance Timing—Near the completion of the systemrequirements defined for the system and the definition stage. It represents the culmination ofpreliminary program/project plan, and assures that the efforts in system requirements analysis andrequirements and the selected architecture/design will allocation.satisfy the mission. Objectives—The objectives of the SDR are Timing—Near the completion of the mission to:definition stage. Objectives—The objectives of the review are to: • Demonstrate that the architecture/design is acceptable. that requirements allocation is• Establish that the allocation of the functional complete, and that a system that fulfills the system requirements is optimal for mission mission objectives can be built within the satisfaction with respect to requirements trades constraints posed and evaluation criteria that were internally • Ensure that a verification concept and preliminary established at MCR verification program are defined• Validate that system requirements meet mission • Establish end item acceptance criteria objectives • Identify technology risks and the • Ensure that adequate detailed information exists plans to mitigate those risks to support initiation of further development or• Present refined cost, schedule, and personnel acquisition efforts. resource estimates. 
Criteria for Successful Completion —The fol- Criteria for Successful Completion —The fol- lowing items compose a checklist to aid inlowing items compose a checklist to aid in determining readiness of SDR project preparation:determining readiness of MDR product preparation: • Will the top-level system design selected meet• Do the defined system requirements meet the the system requirements, satisfy the mission mission objectives expressed at the start of the objectives, and address operational needs? program/project? • Can the top-level system design selected be built• Are the system-level requirements complete, within cost constraints and in a timely manner? consistent, and verifiable? Have preliminary Are the cost and schedule estimates valid in view allocations been made to lower levels? of the system requirements and selected• Have the requirements trades converged on an architecture? Have all the system-level optimal set of system requirements? Do the requirements been allocated to one or more trades address program/project cost and lower levels? schedule constraints as wel1 as mission • Have the major design issues for the elements technical needs? Do the trades cover a broad and subsystems been identified? Have major risk spectrum of options? Have the trades identified areas been identified with mitigation plans? for this set of activities been completed? Have
  • NASA Systems Engineering Handbook Page 45• Have plans to control the development and • Are all Cl "design-to" specifications complete and design process been completed? ready for formal approval and release?• Is a development verification/test plan in place to • Has an acceptable operations concept been provide data for making informed design developed? decisions? Is the minimum end item product • Does the proposed design satisfy requirements performance documented in the acceptance critical to human safety and mission success? criteria? • Do the human factors considerations of the pro-• Is there sufficient information to support proposal posed design support the intended end users efforts? Is there a complete validated set of ability to operate the system and perform the requirements with sufficient system definition to mission effectively? support the cost and schedule estimates? • Have the production, verification, operations, and other specialty engineering organizations Results of Review—As a result of successful reviewed the design?completion of the SDR, the system and its operation • Is the proposed design producible? Have longare well enough understood to warrant design and lead items been considered?acquisition of the end items. Approved specifications • Do the specialty engineering program plans andfor the system, its segments, and preliminary design specifications provide sufficient guidance,specifications for the design of appropriate functional constraints, and system requirements for theelements may be released. A configuration design engineers to execute the design?management plan is established to control design and • Is the reliability analysis based on a soundrequirement changes. Plans to control and integrate methodology, and does it allow for realisticthe expanded technical process are in place. logistics planning and life-cycle cost analysis? • Are sufficient project reserves and schedule slackPreliminary Design Review. The Preliminary Design available to proceed further?Review (PDR) is not a single review but a number ofreviews that includes the system PDR and PDRs Results of Review — As a result ofconducted on specific Configuration Items ( CIs). successful completion of the PDR, the "design-to" Purpose—The PDR demonstrates that the baseline is ap-proved. It also authorizes the project topreliminary design meets all system requirements proceed to final design.with acceptable risk. It shows that the correct designoption has been selected, interfaces identified, and Critical Design Review. The Critical Design Reviewverification methods have been satisfactorily (CDR) is not a single review but a number of reviewsdescribed. It also establishes the basis for proceeding that start with specific Cls and end with the systemwith detailed design. CDR. Timing—After completing a full functional Purpose—The CDR discloses the completeimplementation. sys-tem design in full detail, ascertains that technical Objectives—The objectives of the PDR are problems and design anomalies have been resolved,to: and ensures that the design maturity justifies the• Ensure that all system requirements have been decision to initiate fabrication/manufacturing, allocated, the requirements are complete, and integration, and verification of mission hardware and the flowdown is adequate to verify system software. performance Timing—Near the completion of the final• Show that the proposed design is expected to design stage. 
meet the functional and performance Objectives—The objectives of the CDR are requirements at the Cl level to:• Show sufficient maturity in the proposed design approach to proceed to final design • Ensure that the "build-to" baseline contains de-• Show that the design is verifiable and that the tailed hardware and software specifications that risks have been identified, characterized, and can meet functional and performance mitigated where appropriate. requirements • Ensure that the design has been satisfactorily Criteria for Successful Completion —The audited by production, verification, operations,fol-lowing items compose a checklist to aid in and other specialty engineering organizationsdetermining readiness of PDR product preparation: • Ensure that the production processes and controls are sufficient to proceed to the• Can the proposed preliminary design be fabrication stage expected to meet all the requirements within the • Establish that planned Quality Assurance (QA) planned cost and schedule? activities will establish perceptive verification and• Have all external interfaces been identified? screening processes for producing a quality• Have all the system and segment requirements product been allocated down to the CI level?
• Verify that the final design fulfills the specifications established at PDR.

Criteria for Successful Completion—The following items compose a checklist to aid in determining readiness of CDR product preparation:

• Can the proposed final design be expected to meet all the requirements within the planned cost and schedule?
• Is the design complete? Are drawings ready to begin production? Is software product definition sufficiently mature to start coding?
• Is the "build-to" baseline sufficiently traceable to assure that no orphan requirements exist?
• Do the design qualification results from software prototyping and engineering item testing, simulation, and analysis support the conclusion that the system will meet requirements?
• Are all internal interfaces completely defined and compatible? Are external interfaces current?
• Are integrated safety analyses complete? Do they show that identified hazards have been controlled, or have those remaining risks which cannot be controlled been waived by the appropriate authority?
• Are production plans in place and reasonable?
• Are there adequate quality checks in the production process?
• Are the logistics support analyses adequate to identify integrated logistics support resource requirements?
• Are comprehensive system integration and verification plans complete?

Results of Review—As a result of successful completion of the CDR, the "build-to" baseline, production, and verification plans are approved. Approved drawings are released and authorized for fabrication. It also authorizes coding of deliverable software (according to the "build-to" baseline and coding standards presented in the review), and system qualification testing and integration. All open issues should be resolved with closure actions and schedules.

System Acceptance Review.
Purpose—The System Acceptance Review (SAR) examines the system, its end items and documentation, and test data and analyses that support verification. It also ensures that the system has sufficient technical maturity to authorize its shipment to and installation at the launch site or the intended operational facility.
Timing—Near the completion of the system fabrication and integration stage.
Objectives—The objectives of the SAR are to:

• Establish that the system is ready to be delivered and accepted under DD-250
• Ensure that the system meets acceptance criteria that were established at SDR
• Establish that the system meets requirements and will function properly in the expected operational environments as reflected in the test data, demonstrations, and analyses
• Establish an understanding of the capabilities and operational constraints of the as-built system, and that the documentation delivered with the system is complete and current.

Criteria for Successful Completion—The following items compose a checklist to aid in determining readiness of SAR product preparation:

• Are tests and analyses complete? Do they indicate that the system will function properly in the expected operational environments?
• Does the system meet the criteria described in the acceptance plans?
• Is the system ready to be delivered (flight items to the launch site and non-flight items to the intended operational facility for installation)?
• Is the system documentation complete and accurate?
• Is it clear what is being bought?

Results of Review—As a result of successful completion of the SAR, the system is accepted by the buyer, and authorization is given to ship the hardware to the launch site or operational facility, and to install software and hardware for operational use.

Flight Readiness Review.
Purpose—The Flight Readiness Review (FRR) examines tests, demonstrations, analyses, and audits that determine the system's readiness for a safe and successful launch and for subsequent flight operations. It also ensures that all flight and ground hardware, software, personnel, and procedures are operationally ready.
Timing—After the system has been configured for launch.
Objectives—The objectives of the FRR are to:

• Receive certification that flight operations can safely proceed with acceptable risk
• Confirm that the system and support elements are properly configured and ready for launch
• Establish that all interfaces are compatible and function as expected
• Establish that the system state supports a launch "go" decision based on go/no-go criteria.

Criteria for Successful Completion—The following items compose a checklist to aid in determining readiness of FRR product preparation:

• Is the launch vehicle ready for launch?
• Is the space vehicle hardware ready for safe launch and subsequent flight with a high probability for achieving mission success?
• Are all flight and ground software elements ready to support launch and flight operations?
• Are all interfaces checked out and found to be functional?
• Have all open items and waivers been examined and found to be acceptable?
• Are the launch and recovery environmental factors within constraints?

Results of Review—As a result of successful FRR completion, technical and procedural maturity exists for system launch and flight
  • NASA Systems Engineering Handbook Page 48authorization. and in some cases initiation of system Timing—When major items within theoperations. system are no longer needed to complete the mission.Operational Readiness Review. Objectives—The objectives of the DR are Purpose — The Operational Readiness to:Review (ORR) examines the actual systemcharacteristics and the procedures used in its • Establish that the state of the mission and oroperation, and ensures that all flight and ground system requires decommissioning/disposal.hardware, software, personnel, procedures, and user Possibilities include no further mission need,documentation reflect the deployed state of the broken degraded system elements, or phase outsystem accurately. of existing system assets due to a pending Timing—When the system and its upgradeoperational and support equipment and personnel are • Demonstrate that the plans for decommissioning,ready to undertake the mission. disposal, and any transition are correct, current Objectives—The objectives of the ORR are and appropriate for current environmentalto: constraints and system configuration • Establish that resources are in place to support• Establish that the system is ready to transition disposal plans into an operational mode through examination of • Ensure that archival plans have been completed available ground and flight test results, analyses, for essential mission and project data. and operational demonstrations• Confirm that the system is operationally and logistically supported in a satisfactory manner Criteria for Successful Completion —The fol- considering all modes of operation and support lowing items compose a checklist to aid in (normal, contingency, and unplanned) determining readiness of DR product preparation:• Establish that operational documentation is complete and represents the system • Are reasons for decommissioning/disposal well configuration and its planned modes of operation documented?• Establish that the training function is in place and • Is the disposal plan completed and compliant has demonstrated capability to support all with local, state, and federal environmental aspects of system maintenance, preparation, regulations? operation, and recovery. • Does the disposal plan address the disposition of existing hardware, software, facilities, and Criteria for Successful Completion —The processes?following items compose a checklist to aid in • Have disposal risks been addressed?determining readiness of ORR product preparation: • Have data archival plans been defined?• Are the system hardware, software, personnel, • Are sufficient resources available to complete the and procedures in place to support operation? disposal plan?• Have all anomalies detected during prelaunch, • Is a personnel transition plan in place? launch, and orbital flight been resolved, docu- mented, and incorporated into existing Results of Review—A successful DR operational support data? completion assures that the decommissioning and• Are the changes necessary to transition the disposal of system items and processes are system from flight test to an operational appropriate and effective. configuration ready to be made?• Are all waivers closed? 4.8.4 Interim Reviews• Are the resources in place, or financially planned and approved to support the system during its Interim reviews are driven by programmatic operational lifetime? and/or NASA Headquarters milestones that are not necessarily supported by the major reviews. 
They are Results of Review — As a result of often multiple review processes that provide importantsuccessful ORR completion, the system is ready to information for major NASA reviews, programmaticassume normal operations and any potential hazards decisions, and commitments. Program/projectdue to launch or flight operations have been resolved tailoring dictates the need for and scheduling of thesethrough use of redundant design or changes in reviews.operational procedures. Requirements Reviews. Prior to the PDR, theDecommissioning Review. mission and system requirements must be thoroughly Purpose — The Decommissioning Review analyzed, allocated, and validated to assure that the(DR) confirms that the reasons for decommissioning project can effectively understand and satisfy theare valid and appropriate, and examines the currentsystem status and plans for disposal.
  • NASA Systems Engineering Handbook Page 49mission need. Specifically, these interim requirements Objectives—The objectives of the reviewreviews confirm whether: are to: • Confirm that the mission concept satisfies the• The proposed project supports a specific NASA customers needs program deficiency • Confirm that the mission requirements support• In-house or industry-initiated efforts should be identification of external and long-lead support employed in the program realization requirements (e.g., DoD, international, facility• The proposed requirements meet objectives resources)• The requirements will lead to a reasonable • Determine the adequacy of the analysis products solution to support development of the preliminary Phase• The conceptual approach and architecture are B approval package. credibly feasible and affordable. Criteria for Successful Completion—TheThese issues, as well as requirements ambiguities, following items compose a checklist to aid inare resolved or resolution actions are assigned. determining readiness of MRR product preparation:Interim requirements reviews alleviate the risk of Are the top-level mission requirements sufficientlyexcess design and analysis burdens too far into the defined to describe objectives in measurablelife cycle. parameters? Are assumptions and constraints defined and quantified?Safety Reviews. Safety reviews are conducted to • Is the mission and operations concept adequateensure compliance with NHB 1700.1B, NASA Safety to support preliminary program/projectPolicy and Requirements Document, and are documentation development, including theapproved by the program/project manager at the Engineering Master Plan/Schedule, Phase Brecommendation of the system safety manager. Their Project Definition Plan, technology assessment,purpose, objectives, and general schedule are initial Phase B/C/D resource requirements, andcontained in appropriate safety management plans. acquisition strategy development? Are evaluationSafety reviews address possible hazards associated criteria sufficiently defined?with system assembly, test, operation, and support. • Are measures of effectiveness established?Special consideration is given to possible operational • Are development and life-cycle cost estimatesand environmental hazards related to the use of realistic?nuclear and other toxic materials. (See Section 6.8.) • Have specific requirements been identified thatEarly reviews with field center safety personnel are high risk/high cost drivers, and have optionsshould be held to identify and understand any been described to relieve or mitigate them?problems areas, and to specify the requirements tocontrol them. Results of Review—Successful completion of the MRR provides confidence to submit informationSoftware Reviews. Software reviews are scheduled for the Preliminary Non-Advocate Review andby the program/project manager for the purpose of subsequent submission of the Mission Needsensuring that software specifications and associated Statement for approval.products are well understood by both program/projectand user personnel. 
Throughout the development System Requirements Review.cycle, the pedigree, maturity, limitations, and Purpose — The System Requirementsschedules of delivered preproduction items, as well as Review (SRR) demonstrates that the productthe Computer Software Configuration Items (CSCI), development team understands the mission (i.e.,are of critical importance to the projects engineering, project-level) and system-level requirements.operations, and verification organizations. Timing—Occurs (as required) following the formation of the team.Readiness Reviews. Readiness reviews are Objectives—The objectives of the reviewconducted prior to commencement of major events are to:that commit and expose critical program/projectresources to risk. These reviews define the risk • Confirm that the system-level requirements meetenvironment and address the capability to the mission objectivessatisfactorily operate in that environment. • Confirm that the system-level specifications of the system are sufficient to meet the projectMission Requirements Review. objectives. Purpose — The Mission RequirementsReview (MRR) examines and substantiates top-level Criteria for Successful Completion —Therequirements analysis products and assesses their following items compose a checklist to aid inreadiness for external review. determining readiness of SRR project preparation: Timing—Occurs (as required) following thematuration of the mission requirements in the missiondefinition stage.
  • NASA Systems Engineering Handbook Page 50• Are the allocations contained in the system Purpose — The Software Specification specifications sufficient to meet mission Review (SoSR) ensures that the software objectives? specification set is sufficiently mature to support• Are the evaluation criteria established and preliminary design efforts. realistic? Timing—Occurs shortly after the start of• Are measures of effectiveness established and preliminary design. realistic? Objectives—The review objectives are to:• Are cost estimates established and realistic?• Has a system verification concept been • Verify that all software requirements from the identified? system specification have been allocated to• Are appropriate plans being initiated to support CSCls and documented in the appropriate projected system development milestones? software specifications• Have the technology development issues been • Verify that a complete set of functional, identified along with approaches to their solution? performance, interface, and verification requirements for each CSCI has been developed Results of Review—Successful completion • Ensure that the software requirement set is bothof the SRR freezes program/project requirements and complete and understandable.leads to a formal decision by the cognizant ProgramAssociate Administrator (PAA) to proceed with Criteria for Successful Completion —Theproposal request preparations for project fol-lowing items comprise a checklist to aid inimplementation. determining the readiness of SoSR product preparation:System Safety Review. Purpose—System Safety Review(s) (SSR) • Are functional CSCI descriptions complete andpro-vides early identification of safety hazards, and clear?ensures that measures to eliminate, reduce, or control • Are the software requirements traceable to thethe risk associated with the hazard are identified and system specification?executed in a timely, cost-effective manner. • Are CSCI performance requirements complete Timing—Occurs (as needed) in multiple and unambiguous? Are execution time andphases of the project cycle. storage requirements realistic? Objectives—The objectives of the reviews • Is control and data flow between CSCIs defined?are to: • Are all software-to-software and software-to- hardware interfaces defined?• Identify those items considered as critical from a • Are the mission requirements of the system and safety viewpoint associated operational and support environments• Assess alternatives and recommendations to defined? Are milestone schedules and special mitigate or eliminate risks and hazards delivery requirements negotiated and complete?• Ensure that mitigation/elimination methods can • Are the CSCI specifications complete with be verified. respect to design constraints, standards, quality assurance, testability, and delivery preparation? Criteria for Successful Completion —Thefollowing items comprise a checklist to aid in Results of Review—Successful completiondetermining readiness of SSR product preparation: of the SoSR results in release of the software specifications based upon their development• Have the risks been identified, characterized, and requirements and guidelines, and the start of quantified if needed? preliminary design activities.• Have design/procedural options been analyzed, and quantified if needed to mitigate significant Test Readiness Review. Purpose—The Test risks? 
Readiness Review (TRR) ensures that the test article• Have verification methods been identified for hardware/software, test facility, ground support candidate options? personnel, and test procedures are ready for testing, and data acquisition, reduction, and control. Result of Review—A successful SSR Timing—Held prior to the start of a formalresults in the identification of hazards and their test. The TRR establishes a decision point to proceedcauses in the pro-posed design and operational with planned verification (qualification and/ormodes, and specific means of eliminating, reducing, acceptance) testing of CIs, subsystems, and/oror controlling the hazards. The methods of safety systems.verification will also be identified prior to PDR. At Objectives—The objectives of the reviewCDR, a safety baseline is developed. are to:Software Specification Review.
  • NASA Systems Engineering Handbook Page 51• Confirm that in-place test plans meet verification • Is the design certified? Have incomplete design requirements and specifications elements been identified?• Confirm that sufficient resources are allocated to • Have risks been identified and characterized. and the test effort mitigation efforts defined?• Examine detailed test procedures for • Has the bill of materials been reviewed and completeness and safety during test operations critical parts been identified?• Determine that critical test personnel are test-and • Have delivery schedules been verified? safety-certified • Have altemative sources been identified?• Confirm that test support software is adequate, • Have adequate spares been planned and pertinent, and verified. budgeted? • Are the facilities and tools sufficient for end item Criteria for Successful Completion —The production? Are special tools and test equipmentfol-lowing items comprise a checklist to aid in specified in proper quantities?determining the readiness of TRR product • Are personnel qualified?preparation: • Are drawings certified? • Is production engineering and planning mature• Have the test cases been reviewed and analyzed for cost-effective production? for expected results? Are results consistent with • Are production processes and methods test plans and objectives? consistent with quality requirements? Are they• Have the test procedures been "dry run"? Do compliant with occupational safety, they indicate satisfactory operation? environmental, and energy conservation• Have test personnel received training in test regulations? operations and safety procedures? Are they certified? Results of Review—A successful ProRR• Are resources available to adequately support results in certification of production readiness by the the planned tests as well as contingencies, project manager and involved specialty engineering including failed hardware replacement? organizations. All open issues should be resolved with• Has the test support software been demonstrated closure actions and schedules. to handle test configuration assignments, and data acquisition, reduction, control, and Design Certification Review. archiving? Purpose — The Design Certification Review (DCR) ensures that the qualification verifications Results of Review—A successful TRR demonstrated design compliance with functional andsignifies that test and safety engineers have certified performance requirements.that preparations are complete, and that the project Timing — Follows the system CDR, andmanager has authorized formal test initiation. after qualification tests and all modifications needed to implement qualification-caused corrective actionsProduction Readiness Review. have been completed. Purpose — The Production Readiness Objectives—The objectives of the reviewReview (ProRR) ensures that production plans, are to:facilities, and personnel are in place and ready tobegin production. • Confirm that the verification results met functional Timing—After design certification and prior and performance requirements, and that testto the start of production. 
plans and procedures were executed correctly in Objectives—The objectives of the review the specified environmentsare to: • Certify that traceability between test article and production article is correct, including name,• Ascertain that all significant production identification number, and current listing of all engineering problems encountered during waivers development are resolved • Identify any incremental tests required or• Ensure that the design documentation is conducted due to design or requirements adequate to support manufacturing/fabrication changes made since test initiation, and resolve• Ensure that production plans and preparations issues regarding their results. are adequate to begin manufacturing/fabrication• Establish that adequate resources have been Criteria for Successful Completion —The allocated to support end item production. fol-lowing items comprise a checklist to aid in determining the readiness of DCR product Criteria for Successful Completion —The preparation:following items comprise a checklist to aid indetermining the readiness of ProRR product • Are the pedigrees of the test articles directlypreparation: traceable to the production units?
• Is the verification plan used for this article current and approved?
• Do the test procedures and environments used comply with those specified in the plan?
• Are there any changes in the test article configuration or design resulting from the as-run tests? Do they require design or specification changes, and/or retests?
• Have design and specification documents been audited?
• Do the verification results satisfy functional and performance requirements?
• Do the verification, design, and specification documentation correlate?

Results of Review—As a result of a successful DCR, the end item design is approved for production. All open issues should be resolved with closure actions and schedules.

Functional and Physical Configuration Audits. The Physical Configuration Audit (also known as a configuration inspection) verifies that the physical configuration of the product corresponds to the "build-to" (or "code-to") documentation previously approved at the CDR. The Functional Configuration Audit verifies that the acceptance test results are consistent with the test requirements previously approved at the PDR and CDR. It ensures that the test results indicate performance requirements were met, and test plans and procedures were executed correctly. It should also document differences between the test unit and production unit, including any waivers.

4.9 Status Reporting and Assessment

An important part of systems engineering planning is determining what is needed in time, resources, and people to realize the system that meets the desired goals and objectives. Planning functions, such as WBS preparation, scheduling, and fiscal resource requirements planning, were discussed in Sections 4.3 through 4.5. Project management, however, does not end with planning; project managers need visibility into the progress of those plans in order to exercise proper management control. This is the purpose of the status reporting and assessing processes. Status reporting is the process of determining where the project stands in dimensions of interest such as cost, schedule, and technical performance. Assessing is the analytical process that converts the output of the reporting process into a more useful form for the project manager—namely, what are the future implications of current trends? Lastly, the manager must decide whether that future is acceptable, and what changes, if any, in current plans are needed. Planning, status reporting, and assessing are systems engineering and/or program control functions; decision making is a management one.

These processes together form the feedback loop depicted in Figure 20; the loop takes place on a continual basis throughout the project life cycle and is applicable at each level of the project hierarchy. Planning data, status reporting data, and assessments flow up the hierarchy with appropriate aggregation at each level; decisions cause actions to be taken down the hierarchy. Managers at each level determine (consistent with policies established at the next higher level of the project hierarchy) how often, and in what form, reporting data and assessments should be made. In establishing these status reporting and assessment requirements, some principles of good practice are:

• Use an agreed-upon set of well-defined status reporting variables
• Report these core variables in a consistent format at all project levels
• Maintain historical data for both trend identification and cross-project analyses
• Encourage a logical process of rolling up status reporting variables (e.g., use the WBS for obligations/costs status reporting and the PBS for mass status reporting)
• Support assessments with quantitative risk measures
• Summarize the condition of the project by using color-coded (red, yellow, and green) alert zones for all core reporting variables.

Regular, periodic (e.g., monthly) tracking of the core status reporting variables is recommended, though some status reporting variables should be tracked more often when there is rapid change or cause for concern. Key reviews, such as PDRs and CDRs, are points at which status reporting measures and their trends should be carefully scrutinized for early warning signs of potential problems. Should there be indications that existing trends, if allowed to continue, will yield an unfavorable outcome, replanning should begin as soon as practical.

This section provides additional information on status reporting and assessment techniques for costs and schedules, technical performance, and systems engineering process metrics.

4.9.1 Cost and Schedule Control Measures

Status reporting and assessment on costs and schedules provides the project manager and system engineer visibility into how well the project is tracking against its planned cost and schedule targets. From a management point of view, achieving these targets is on a par with meeting the technical performance requirements of the system. It is useful to think of cost and schedule status reporting and assessment as measuring the performance of the "system that produces the system."
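As a minimal sketch of the color-coded alert zones recommended above, a reporting variable's relative variance might be mapped to green, yellow, or red as follows. The thresholds are invented for illustration only, since each project sets its own in its status reporting requirements.

```python
def alert_zone(variance_fraction: float,
               yellow_threshold: float = 0.05,
               red_threshold: float = 0.10) -> str:
    """Classify a status reporting variable's relative variance into a
    green/yellow/red alert zone. Thresholds are illustrative assumptions."""
    magnitude = abs(variance_fraction)
    if magnitude >= red_threshold:
        return "red"
    if magnitude >= yellow_threshold:
        return "yellow"
    return "green"

# Example: a WBS element whose cost variance is -7% of its budget to date
# falls in the yellow zone under these assumed thresholds.
print(alert_zone(-0.07))  # -> "yellow"
```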
  • NASA Systems Engineering Handbook Page 53 NHB 9501.2B, Procedures for Contractor difference, BCWPt -BCWSt, is called the scheduleReporting of Correlated Cost and Performance Data, variance at time t.provides specific requirements for cost and schedule The Actual Cost of Work Performed (ACWPt)status reporting and assessment based on a projects is a third statistic representing the funds that havedollar value and period of performance Generally, the been expended up to time t on those WBS elements.NASA Form 533 series of reports is applicable to The difference between the budgeted and actualNASA cost-type (i.e., cost reimbursement and fixed- costs, BCWPt ACWPt, is called the cost variance atprice incentive) contracts. However, on larger time t. Such variances may indicate that the costcontracts (>$25M), which require Form 533P, NHB Estimate at Completion (EACt) of the project is9501.2B allows contractors to use their own reporting different from the budgeted cost. These types ofsystems in lieu of 533P reporting. The project variances enable a program analyst to estimate themanager/system engineer may choose to evaluate EAC at any point in the project life cycle. (See sidebarthe completeness and quality of these reporting on computing EAC.)systems against criteria established by the project If the cost and schedule baselines and themanager/system engineers own field center, or technical scope of the work are not fully integrated,against the DoDs Cost/Schedule Cost System then cost and schedule variances can still beCriteria (C/SCSC). The latter are widely accepted by calculated, but the incomplete linkage between costindustry and government, and a variety of tools exist data and schedule data makes it very difficult (orfor their implementation. impossible) to estimate the current cost EAC of the project. Control of Variances and the Role of the System Engineer. When negative variances are large enough to represent a significant erosion of reserves, then management attention is needed to either correct the variance, or to replan the project. It is important to establish levels of variance at which action is to be taken. These levels are generally lower when cost and schedule baselines do not support Earned Value calculations. The first action taken to control an excessive negative variance is to have the cognizant manager or system engineer investigate the problem, determine its cause, and recommend a solution. Computing the Estimate at Completion EAC can be estimated at any point in the project. The appropriate formula depends upon the reasons associated for any variances that may exist. If a variance exists due to a one-time event, such as an accident, then EAC = BUDGET +Assessment Methods. The traditional method of ACWP -BCWP where BUDGET is originalcost and schedule control is to compare baselined planned cost at completion. If a variance exists forcost and schedule plans against their actual values. In systemic reasons, such as a generalprogram control terminology, a difference between underestimate of schedule durations, or a steadyactual performance and planned costs or schedule redefinition of requirements, then the variance isstatus is called a variance. assumed to continue to grow over time, and the Figure 21 illustrates two kinds of variances equation is: EAC = BUDGET x (ACWP / BCWP).and some related concepts. 
Computing the Estimate at Completion

The EAC can be estimated at any point in the project. The appropriate formula depends upon the reasons for any variances that exist. If a variance exists due to a one-time event, such as an accident, then

EAC = BUDGET + ACWP - BCWP

where BUDGET is the original planned cost at completion. If a variance exists for systemic reasons, such as a general underestimate of schedule durations or a steady redefinition of requirements, then the variance is assumed to continue to grow over time, and the appropriate equation is

EAC = BUDGET x (ACWP / BCWP).

If there is a growing number of liens, action items, or significant problems that will increase the difficulty of future work, the EAC might grow at a greater rate than estimated by the above equations. Such factors could be addressed using the risk management methods described in Section 4.6.

In a large project, a good EAC is the result of a variance analysis that may use a combination of these estimation methods on different parts of the WBS. A rote formula should not be used as a substitute for understanding the underlying causes of variances.
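The quantities defined above lend themselves to a simple computation. The following Python sketch is illustrative only; the budget, BCWS, BCWP, and ACWP values are hypothetical, and it simply applies the variance definitions and the two EAC formulas from the sidebar.

# Earned Value status sketch -- all dollar values are hypothetical.
# BCWS: budgeted cost of work scheduled, BCWP: budgeted cost of work performed
# (earned value), ACWP: actual cost of work performed, all cumulative at time t.

BUDGET = 12_000_000.0   # original planned cost at completion

bcws = 4_500_000.0      # work scheduled to be complete by now
bcwp = 4_000_000.0      # work actually earned
acwp = 4_600_000.0      # funds actually expended

schedule_variance = bcwp - bcws          # negative: behind schedule
cost_variance = bcwp - acwp              # negative: over cost

# EAC if the variance stems from a one-time event (e.g., an accident):
eac_one_time = BUDGET + acwp - bcwp

# EAC if the variance is systemic and assumed to grow proportionally:
eac_systemic = BUDGET * (acwp / bcwp)

print(f"Schedule variance: {schedule_variance:,.0f}")
print(f"Cost variance:     {cost_variance:,.0f}")
print(f"EAC (one-time):    {eac_one_time:,.0f}")
print(f"EAC (systemic):    {eac_systemic:,.0f}")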
There are a number of possible reasons why variance problems occur:

• A receivable was late or was unsatisfactory for some reason
• A task is technically very difficult and requires more resources than originally planned
• Unforeseeable (and unlikely to repeat) events occurred, such as illness, fire, or other calamity.

Although the identification of variances is largely a program control function, there is an important systems engineering role in their control. That role arises because the correct assessment of why a negative variance is occurring greatly increases the chances of successful control actions. This assessment often requires an understanding of the cost, schedule, and technical situation that can only be provided by the system engineer.

4.9.2 Technical Performance Measures

Status reporting and assessment of the system's technical performance measures (TPMs) complements cost and schedule control. By tracking the system's TPMs, the project manager gains visibility into whether the delivered system will actually meet its performance specifications (requirements). Beyond that, tracking TPMs ties together a number of basic systems engineering activities—that is, a TPM tracking program forges a relationship among systems analysis, functional and performance requirements definition, and verification and validation activities:

• Systems analysis activities identify the key performance or technical attributes that determine system effectiveness; trade studies performed in systems analysis help quantify the system's performance requirements.
• Functional and performance requirements definition activities help identify verification and validation requirements.
• Verification and validation activities result in quantitative evaluation of TPMs.
• "Out-of-bounds" TPMs are signals to replan fiscal, schedule, and people resources; sometimes new systems analysis activities need to be initiated.

Tracking TPMs can begin as soon as a baseline design has been established, which can occur early in Phase B. A TPM tracking program should begin not later than the start of Phase C. Data to support the full set of selected TPMs may, however, not be available until later in the project life cycle.

Selecting TPMs. In general, TPMs can be generic (attributes that are meaningful to each Product Breakdown Structure (PBS) element, like mass or reliability) or unique (attributes that are meaningful only to specific PBS elements). The system engineer needs to decide which generic and unique TPMs are worth tracking at each level of the PBS. The system engineer should track the measure of system effectiveness (when the project maintains such a measure) and the principal performance or technical attributes that determine it, as top-level TPMs. At lower levels of the PBS, TPMs worth tracking can be identified through the functional and performance requirements levied on each individual system, segment, etc. (See sidebar on high-level TPMs.)

In selecting TPMs, the system engineer should focus on those that can be objectively measured during the project life cycle. This measurement can be done directly by testing, or indirectly by a combination of testing and analysis. Analyses are often the only means available to determine some high-level TPMs, such as system reliability, but the data used in such analyses should be based on demonstrated values to the maximum practical extent. These analyses can be performed using the same measurement methods or models used during trade studies. In TPM tracking, however, instead of using estimated (or desired) performance or technical attributes, the models are exercised using demonstrated values. As the project life cycle proceeds through Phases C and D, the measurement of TPMs should become increasingly more accurate because of the availability of more "actual" data about the system.
Examples of High-Level TPMs for Planetary Spacecraft and Launch Vehicles

High-level technical performance measures (TPMs) for planetary spacecraft include:

• End-of-mission (EOM) dry mass
• Injected mass (includes EOM dry mass, baseline mission plus reserve propellant, other consumables, and upper stage adaptor mass)
• Consumables at EOM
• Power demand (relative to supply)
• Onboard data processing memory demand
• Onboard data processing throughput time
• Onboard data bus capacity
• Total pointing error.

Mass and power demands by spacecraft subsystems and science instruments may be tracked separately as well.

For launch vehicles, high-level TPMs include:

• Total vehicle mass at launch
• Payload mass (at nominal altitude or orbit)
• Payload volume
• Injection accuracy
• Launch reliability
• In-flight reliability
• For reusable vehicles, percent of value recovered; for expendable vehicles, unit production cost at the nth unit. (See sidebar on Learning Curve Theory.)
Lastly, the system engineer should select those TPMs that must fall within well-defined (quantitative) limits for reasons of system effectiveness or mission feasibility. Usually these limits represent either a firm upper or lower bound constraint. A typical example of such a TPM for a spacecraft is its injected mass, which must not exceed the capability of the selected launch vehicle. Tracking injected mass as a high-level TPM is meant to ensure that this does not happen.

Assessment Methods. The traditional method of assessing a TPM is to establish a time-phased planned profile for it, and then to compare the demonstrated value against that profile. The planned profile represents a nominal "trajectory" for that TPM taking into account a number of factors. These factors include the technological maturity of the system, the planned schedule of tests and demonstrations, and any historical experience with similar or related systems. As an example, spacecraft dry mass tends to grow during Phases C and D by as much as 25 to 30 percent. A planned profile for spacecraft dry mass may try to compensate for this growth with a lower initial value. The final value in the planned profile usually either intersects or is asymptotic to an allocated requirement (or specification). The planned profile method is the technical performance measurement counterpart to the Earned Value method for cost and schedule control described earlier.

A closely related method of assessing a TPM relies on establishing a time-phased margin requirement for it, and comparing the actual margin against that requirement. The margin is generally defined as the difference between a TPM's demonstrated value and its allocated requirement. The margin requirement may be expressed as a percentage of the allocated requirement. The margin requirement generally declines through Phases C and D, reaching or approaching zero at their completion.

Depending on which method is chosen, the system engineer's role is to propose reasonable planned profiles or margin requirements for approval by the cognizant manager. The value of either of these methods is that they allow management by exception -- that is, only deviations from planned profiles or margins below requirements signal potential future problems requiring replanning. If this occurs, then new cost, schedule, and/or technical changes should be proposed. Technical changes may imply some new planned profiles. This is illustrated for a hypothetical TPM in Figure 22(a). In this example, a significant demonstrated variance (i.e., unanticipated growth) in the TPM during design and development of the system resulted in replanning at time t. The replanning took the form of an increase in the allowed final value of the TPM (the "allocation"). A new planned profile was then established to track the TPM over the remaining time of the TPM tracking program.
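Both assessment methods reduce to comparing a demonstrated value against a time-phased reference. The Python sketch below is a minimal illustration with hypothetical dry mass values and review milestones; it flags reviews where the demonstrated value exceeds the planned profile or the margin falls below the margin requirement.

# TPM tracking sketch -- hypothetical spacecraft dry mass values (kg).
# allocation: the allocated requirement the TPM must not exceed at completion.

allocation = 1000.0

# (review, planned profile value, demonstrated value, margin requirement as a
#  fraction of the allocation) -- all numbers are illustrative only.
reviews = [
    ("PDR",  880.0,  900.0, 0.15),
    ("CDR",  930.0,  960.0, 0.10),
    ("SIR",  970.0,  990.0, 0.05),
    ("FRR", 1000.0,  995.0, 0.00),
]

for name, planned, demonstrated, margin_req_frac in reviews:
    margin = allocation - demonstrated           # margin management method
    margin_req = margin_req_frac * allocation
    profile_dev = demonstrated - planned         # planned profile method
    flags = []
    if profile_dev > 0:
        flags.append("above planned profile")
    if margin < margin_req:
        flags.append("margin below requirement")
    status = "; ".join(flags) if flags else "on track"
    print(f"{name}: demonstrated={demonstrated:.0f} kg, "
          f"profile deviation={profile_dev:+.0f} kg, "
          f"margin={margin:.0f} kg (req {margin_req:.0f} kg) -> {status}")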
The margin management method of assessing is illustrated for the same example in Figure 22(b). The replanning at time t occurred when the TPM fell significantly below the margin requirement. The new higher allocation for the TPM resulted in a higher margin requirement, but it also immediately placed the margin in excess of that requirement.

Both of these methods recognize that the final value of the TPM being tracked is uncertain throughout most of Phases C and D. The margin management method attempts to deal with this implicitly by establishing a margin requirement that reduces the chances of the final value exceeding its allocation to a low number, for example, five percent or less. A third method of reporting and assessing deals with this risk explicitly.

The risk management method is illustrated for the same example in Figure 22(c). The replanning at time t occurred when the probability of the final TPM value being less than the allocation fell precipitously into the red alert zone. The new higher allocation for the TPM resulted in a substantial improvement in that probability. The risk management method requires an estimate of the probability distribution for the final TPM value. (See sidebar on tracking spacecraft mass.) Early in the TPM tracking program, when the demonstrated value is based on indirect means of estimation, this distribution typically has a larger statistical variance than later, when it is based on measured data, such as a test result. When a TPM stays along its planned profile (or equivalently, when its margin remains above the corresponding margin requirement), the narrowing of the statistical distribution should allow the TPM to remain in the green alert zone (in Figure 22(c)) despite its growth. The three methods represent different ways to assess TPMs and communicate that information to management, but whichever is chosen, the pattern of success or failure should be the same for all three.

An Example of the Risk Management Method for Tracking Spacecraft Mass

During Phases C and D, a spacecraft's injected mass can be considered an uncertain quantity. Estimates of each subsystem's and each instrument's mass are, however, made periodically by the design engineers. These estimates change and become more accurate as actual parts and components are built and integrated into subsystems and instruments. Injected mass can also change during Phases C and D as the quantity of propellant is fine-tuned to meet the mission design requirements. Thus at each point during development, the spacecraft's injected mass is better represented as a probability distribution than as a single point.

The mechanics of obtaining a probability distribution for injected mass typically involve making estimates of three points -- the lower and upper bounds and the most likely injected mass value. These three values can be combined into parameters that completely define a probability distribution like the one shown in the figure below. The launch vehicle's "guaranteed" payload capability, designated the "LV Specification," is shown as a bold vertical line. The area under the probability curve to the left of the bold vertical line represents the probability that the spacecraft's injected mass will be less than or equal to the launch vehicle's payload capability. If injected mass is a TPM being tracked using the risk management method, this probability could be plotted in a display similar to Figure 22(c).

If this probability were nearly one, then the project manager might consider adding more objectives to the mission in order to take advantage of the "large margin" that appears to exist. In the above figure, however, the probability is significantly less than one. Here, the project manager might consider descoping the project, for example by removing an instrument or otherwise changing mission objectives. The project manager could also solve the problem by requesting a larger launch vehicle!
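A minimal sketch of the risk management method described in the sidebar, assuming a triangular distribution built from hypothetical lower-bound, most-likely, and upper-bound injected mass estimates and an assumed launch vehicle capability; Monte Carlo sampling estimates the probability that injected mass stays within that capability.

import random

# Risk management method sketch -- all mass values (kg) are hypothetical.
LOW, MODE, HIGH = 2650.0, 2800.0, 3050.0   # three-point estimate of injected mass
LV_SPEC = 2950.0                           # assumed "guaranteed" LV payload capability

random.seed(1)
N = 100_000
hits = sum(1 for _ in range(N)
           if random.triangular(LOW, HIGH, MODE) <= LV_SPEC)
probability = hits / N

print(f"P(injected mass <= LV capability) ~= {probability:.3f}")
# This probability, tracked review by review, is what would be plotted against
# the green/yellow/red alert zones of a display like Figure 22(c).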
Relationship of TPM Tracking Program to the SEMP. The SEMP is the usual document for describing the project's TPM tracking program. This description should include a master list of those TPMs to be tracked, and the measurement and assessment methods to be employed. If analytical methods and models are used to measure certain high-level TPMs, then these need to be identified. The reporting frequency and timing of assessments should be specified as well. In determining these, the system engineer must balance the project's needs for accurate, timely, and effective TPM tracking against the cost of the TPM tracking program. The TPM tracking program plan, which elaborates on the SEMP, should specify each TPM's allocation, time-phased planned profile or margin requirement, and alert zones, as appropriate to the selected assessment method.

4.9.3 Systems Engineering Process Metrics

Status reporting and assessment of systems engineering process metrics provides additional visibility into the performance of the "system that produces the system." As such, these metrics supplement the cost and schedule control measures discussed in Section 4.9.1.

Systems engineering process metrics try to quantify the effectiveness and productivity of the systems engineering process and organization. Within a single project, tracking these metrics allows the system engineer to better understand the health and progress of that project. Across projects (and over time), the tracking of systems engineering process metrics allows for better estimation of the cost and time of performing systems engineering functions. It also allows the systems engineering organization to demonstrate its commitment to the TQM principle of continuous improvement.

Selecting Systems Engineering Process Metrics. Generally, systems engineering process metrics fall into three categories -- those that measure the progress of the systems engineering effort, those that measure the quality of that process, and those that measure its productivity. Different levels of systems
engineering management are generally interested in different metrics. For example, a project manager or lead system engineer may focus on metrics dealing with systems engineering staffing, project risk management progress, and major trade study progress. A subsystem system engineer may focus on subsystem requirements and interface definition progress and verification procedures progress. It is useful for each system engineer to focus on just a few process metrics. Which metrics should be tracked depends on the system engineer's role in the total systems engineering effort. The systems engineering process metrics worth tracking also change as the project moves through its life cycle.

Collecting and maintaining data on the systems engineering process is not without cost. Status reporting and assessment of systems engineering process metrics divert time and effort from the process itself. The system engineer must balance the value of each systems engineering process metric against its collection cost. The value of these metrics arises from the insights they provide into the process that cannot be obtained from cost and schedule control measures alone. Over time, these metrics can also be a source of hard productivity data, which are invaluable in demonstrating the potential returns from investment in systems engineering tools and training.

Examples and Assessment Methods. Table 2 lists some systems engineering process metrics to be considered. This list is not intended to be exhaustive. Because some of these metrics allow for different interpretations, each NASA field center needs to define them in a common-sense way that fits its own processes. For example, each field center needs to determine what is meant by a completed versus an approved requirement, or whether these terms are even relevant. As part of this definition, it is important to recognize that not all requirements, for example, need be lumped together. It may be more useful to track the same metric separately for each of several different types of requirements.

Quality-related metrics should serve to indicate when a part of the systems engineering process is overloaded and/or breaking down. These metrics can be defined and tracked in several different ways. For example, requirements volatility can be quantified as the number of newly identified requirements, or as the number of changes to already-approved requirements. As another example, Engineering Change Request (ECR) processing could be tracked by comparing cumulative ECRs opened versus cumulative ECRs closed, or by plotting the age profile of open ECRs, or by examining the number of ECRs opened last month versus the total number open.
The system engineer should apply his/her own judgment in picking the status reporting and assessment method.

Productivity-related metrics provide an indication of systems engineering output per unit of input. Although more sophisticated measures of input exist, the most common is the number of systems engineering hours dedicated to a particular function or activity. Because not all systems engineering hours cost the same, an appropriate weighting scheme should be developed to ensure comparability of hours across systems engineering personnel.

Displaying schedule-related metrics can be accomplished in a table or graph of planned quantities vs. actuals. With quality- and productivity-related metrics, trends are generally more important than isolated snapshots. The most useful kind of assessment method allows comparisons of the trend on a current project with that for a successfully completed project of the same type. The latter provides a benchmark against which the system engineer can judge his/her own efforts.
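As a small illustration of trend-oriented assessment, the Python sketch below compares cumulative ECRs opened and closed month by month; the counts are hypothetical and stand in for data a project would pull from its change control system.

from itertools import accumulate

# Quality-related process metric sketch -- hypothetical monthly ECR counts.
months      = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
ecrs_opened = [  4,     9,    14,    12,    10,     7 ]
ecrs_closed = [  1,     5,     8,    11,    13,    12 ]

cum_opened = list(accumulate(ecrs_opened))
cum_closed = list(accumulate(ecrs_closed))

for m, o, c in zip(months, cum_opened, cum_closed):
    backlog = o - c
    print(f"{m}: opened={o:3d}  closed={c:3d}  open backlog={backlog:3d}")

# A backlog that grows month after month suggests the change control process
# is overloaded; the trend, not any single month, is what matters.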
  • NASA Systems Engineering Handbook Page 58 systems engineering spiral described in Chapter 2.5 Systems Analysis and Modeling Issues This section discusses the steps of the process inThe role of systems analysis and modeling is to greater detail. Trade studies help to define theproduce rigorous and consistent evaluations so as to emerging system at each level of resolution. One keyfoster better decisions in the systems engineering message of this section is that to be effective, theprocess. By helping to progress the system design process requires the participation of many skills and atoward an optimum, systems analysis and modeling unity of effort to move toward an optimum systemcontribute to the objective of systems engineering. design.This is accomplished primarily by performing trade Figure 23 shows the trade study process instudies of plausible alternatives. The purpose of this simplest terms, beginning with the step of defining thechapter is to describe the trade study process, the systems goals and objectives, and identifying themethods used in trade studies to quantify system constraints it must meet. In the early phases of theeffectiveness and cost, and the pitfalls to avoid. project life cycle, the goals, objectives, and constraints are usually stated in general operational Systems Analysis terms. In later phases of the project life cycle, when the architecture and, perhaps, some aspects of the Gene Fisher defines systems analysis as "inquiry design have already been decided, the goals and to assist decision makers in choosing preferred objectives may be stated as performance future courses of action by (1) systematically requirements that a segment or subsystem must examining and reexamining the relevant meet. objectives, and alternative policies and strategies At each level of system resolution, the for achieving them; and (2) comparing system engineer needs to understand the full quantitatively where possible the economic costs, implications of the goals, objectives, and constraints effectiveness, and risks of the alternaternatives.” in order to formulate an appropriate system solution. This step is accomplished by performing a functional analysis. Functional analysis is the systematic process of identifying, describing, and relating the functions a system must perform in order to fulfil1 its goals and objectives. In the early phases of the5.1 The Trade Study Process project life cycle, the functional analysis deals with the top-level functions that need to be performed by theThe trade study process is a critical part of the
system, where they need to be performed, how often, under what operational concept and environmental conditions, and so on. The functional analysis needs only to proceed to a level of decomposition that enables the trade study to define the system architecture. In later phases of the project life cycle, the functional analysis proceeds to whatever level of decomposition is needed to fully define the system design and interfaces. (See sidebar on functional analysis techniques.)

Closely related to defining the goals and objectives, and performing a functional analysis, is the step of defining the measures and measurement methods for system effectiveness (when this is practical), system performance or technical attributes, and system cost. (These variables are collectively called outcome variables, in keeping with the discussion in Section 2.3. Some systems engineering books refer to these variables as decision criteria, but this term should not be confused with the selection rule, described below. Sections 5.2 and 5.3 discuss the concepts of system cost and system effectiveness, respectively, in greater detail.) This step begins the analytical portion of the trade study process, since it suggests the involvement of those familiar with quantitative methods.

For each measure, it is important to address the question of how that quantitative measure will be computed—that is, which measurement method is to be used. One reason for doing this is that this step then explicitly identifies those variables that are important in meeting the system's goals and objectives.

Evaluating the likely outcomes of various alternatives in terms of system effectiveness, the underlying performance or technical attributes, and cost before actual fabrication and/or programming usually requires the use of a mathematical model or series of models of the system. So a second reason for specifying the measurement methods is that the necessary models can be identified.

Sometimes these models are already available from previous projects of a similar nature; other times, they need to be developed. In the latter case, defining the measurement methods should trigger the necessary system modeling activities. Since the development of new models can take a considerable amount of time and effort, early identification is needed to ensure they will be ready for formal use in trade studies.
Defining the selection rule is the step of explicitly determining how the outcome variables will be used to make a (tentative) selection of the preferred alternative. As an example, a selection rule may be to choose the alternative with the highest estimated system effectiveness that costs less than x dollars (with some given probability), meets safety requirements, and possibly meets other political or schedule constraints. Defining the selection rule is essentially deciding how the selection is to be made. This step is independent from the actual measurement of system effectiveness, system performance or technical attributes, and system cost.

Many different selection rules are possible. The selection rule in a particular trade study may depend on the context in which the trade study is being conducted—in particular, what level of system design resolution is being addressed. At each level of the system design, the selection rule generally should be chosen only after some guidance from the next higher level. The selection rule for trade studies at lower levels of the system design should be in consonance with the higher level selection rule.

Functional Analysis Techniques

Functional analysis is the process of identifying, describing, and relating the functions a system must perform in order to fulfill its goals and objectives. Functional analysis is logically structured as a top-down hierarchical decomposition of those functions, and serves several important roles in the systems engineering process:

• To draw out all the requirements the system must meet
• To help identify measures for system effectiveness and its underlying performance or technical attributes at all levels
• To weed out from further consideration in trade studies those alternatives that cannot meet the system's goals and objectives
• To provide insights to the system-level (and below) model builders, whose mathematical models will be used in trade studies to evaluate the alternatives.

Several techniques are available to do functional analysis. The primary functional analysis technique is the Functional Flow Block Diagram (FFBD). These diagrams show the network of actions that lead to the fulfillment of a function. Although the FFBD network shows the logical sequence of "what" must happen, it does not ascribe a time duration to functions or between functions. To understand time-critical requirements, a Time Line Analysis (TLA) is used. A TLA can be applied to such diverse operational functions as spacecraft command sequencing and launch vehicle processing. A third technique is the N² diagram, which is a matrix display of functional interactions, or data flows, at a particular hierarchical level. Appendix B.7 provides further discussion and examples of each of these techniques.
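As a small illustration of the sidebar's last technique, an N² diagram is at bottom a square matrix with the functions on the diagonal and their data flows in the off-diagonal cells. The Python sketch below uses hypothetical functions and interfaces.

# N-squared (N2) diagram sketch -- hypothetical functions and data flows.
functions = ["Acquire Data", "Process Data", "Store Data", "Downlink Data"]
n = len(functions)

# interfaces[i][j] holds the output flowing from function i to function j.
interfaces = [["" for _ in range(n)] for _ in range(n)]
interfaces[0][1] = "raw samples"
interfaces[1][2] = "processed frames"
interfaces[2][3] = "playback stream"
interfaces[3][1] = "retransmit requests"   # feedback appears below the diagonal

for i, f in enumerate(functions):
    row = [f if i == j else (interfaces[i][j] or "-") for j in range(n)]
    print(" | ".join(f"{cell:<20}" for cell in row))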
Defining plausible alternatives is the step of creating some alternatives that can potentially achieve the goals and objectives of the system. This step depends on understanding (to an appropriately detailed level) the system's functional requirements and operational concept. Running an alternative through an operational time line or reference mission is a useful way of determining whether it can plausibly fulfill these requirements. (Sometimes it is necessary to create separate behavioral models to determine how the system reacts when a certain stimulus or control is applied, or a certain environment is encountered. This provides insights into whether it can plausibly fulfill time-critical and safety requirements.) Defining plausible alternatives also requires an understanding of the technologies available, or potentially available, at the time the system is needed. Each plausible alternative should be documented qualitatively in a description sheet. The format of the description sheet should, at a minimum, clarify the allocation of required system functions to that alternative's lower-level architectural or design components (e.g., subsystems).

One way to represent the trade study alternatives under consideration is by a trade tree. During Phase A trade studies, the trade tree should contain a number of alternative high-level system architectures to avoid a premature focus on a single one. As the systems engineering process proceeds, branches of the trade tree containing unattractive alternatives will be "pruned," and greater detail in terms of system design will be added to those branches that merit further attention. The process of pruning unattractive early alternatives is sometimes known as doing "killer trades." (See sidebar on trade trees.)

Given a set of plausible alternatives, the next step is to collect data on each to support the evaluation of the measures by the selected measurement methods. If models are to be used to calculate some of these measures, then obtaining the model inputs provides some impetus and direction to the data collection activity. By providing data, engineers in such disciplines as reliability, maintainability, producibility, integrated logistics, software, testing, operations, and costing have an important supporting role in trade studies. The data collection activity, however, should be orchestrated by the system engineer. The results of this step should be a quantitative description of each alternative to accompany the qualitative one.

Test results on each alternative can be especially useful. Early in the systems engineering process, performance and technical attributes are generally uncertain and must be estimated. Data from breadboard and brassboard testbeds can provide additional confidence that the range of values used as model inputs is correct. Such confidence is also enhanced by drawing on data collected on related, previously developed systems.

The next step in the trade study process is to quantify the outcome variables by computing estimates of system effectiveness, its underlying system performance or technical attributes, and system cost. If the needed data have been collected, and the measurement methods (for example, models) are in place, then this step is, in theory, mechanical. In practice, considerable skill is often needed to get meaningful results.

In an ideal world, all input values would be precisely known, and models would perfectly predict outcome variables. This not being the case, the system engineer should supplement point estimates of the outcome variables for each alternative with computed or estimated uncertainty ranges. For each uncertain key input, a range of values should be estimated. Using this range of input values, the sensitivity of the outcome variables can be gauged, and their uncertainty ranges calculated. The system engineer may be able to obtain meaningful probability distributions for the outcome variables using Monte Carlo simulation (see Section 5.4.2), but when this is not feasible, the system engineer must be content with only ranges and sensitivities.
This essentially completes the analytical portion of the trade study process. The next steps can be described as the judgmental portion. Combining the selection rule with the results of the analytical activity should enable the system engineer to array the alternatives from most preferred to least, in essence making a tentative selection.

This tentative selection should not be accepted blindly. In most trade studies, there is a need to subject the results to a "reality check" by considering a number of questions. Have the goals, objectives, and constraints truly been met? Is the tentative selection heavily dependent on a particular set of input values to the measurement methods, or does it hold up under a range of reasonable input values? (In the latter case, the tentative selection is said to be robust.) Are there sufficient data to back up the tentative selection? Are the measurement methods sufficiently discriminating to be sure that the tentative selection is really better than other alternatives? Have the subjective aspects of the problem been fully addressed?

If the answers support the tentative selection, then the system engineer can have greater confidence in a recommendation to proceed to a further resolution of the system design, or to the implementation of that design. The estimates of system effectiveness, its underlying performance or technical attributes, and system cost generated during the trade study process serve as inputs to that further resolution. The analytical portion of the trade study process often provides the means to quantify the performance or technical (and cost) attributes that the system's lower levels must meet. These can be formalized as performance requirements.

If the reality check is not met, the trade study process returns to one or more earlier steps. This iteration may result in a change in the goals, objectives, and constraints, a new alternative, or a change in the selection rule, based on the new information generated during the trade study. The reality check may, at times, lead instead to a decision to first improve the measures and measurement methods (e.g., models) used in evaluating the alternatives, and then to repeat the analytical portion of the trade study process.
5.1.1 Controlling the Trade Study Process

There are a number of mechanisms for controlling the trade study process. The most important one is the Systems Engineering Management Plan (SEMP). The SEMP specifies the major trade studies that are to be performed during each phase of the project life cycle. It should also spell out the general contents of trade study reports, which form part of the decision support packages (i.e., documentation submitted in conjunction with formal reviews and change requests).

A second mechanism for controlling the trade study process is the selection of the study team leaders and members. Because doing trade studies is part art and part science, the composition and experience of the teams is an important determinant of the study's ultimate usefulness. A useful technique to avoid premature focus on a specific technical design is to include in the study team individuals with differing technology backgrounds.

Another mechanism is limiting the number of alternatives that are to be carried through the study. This number is usually determined by the time and resources available to do the study because the work required in defining additional alternatives and obtaining the necessary data on them can be considerable. However, focusing on too few or too similar alternatives defeats the purpose of the trade study process.

A fourth mechanism for controlling the trade study process can be exercised through the use (and misuse) of models. Lastly, the choice of the selection rule exerts a considerable influence on the results of the trade study process. These last two issues are discussed in Sections 5.1.2 and 5.1.3, respectively.

An Example of a Trade Tree for a Mars Rover

The figure below shows part of a trade tree for a robotic Mars rover system, whose goal is to find a suitable manned landing site. Each layer represents some aspect of the system that needs to be treated in a trade study to determine the best alternative. Some alternatives have been eliminated a priori because of technical feasibility, launch vehicle constraints, etc. The total number of alternatives is given by the number of end points of the tree. Even with just a few layers, the number of alternatives can increase quickly. (This tree has already been pruned to eliminate low-autonomy, large rovers.) As the systems engineering process proceeds, branches of the tree with unfavorable trade study outcomes are discarded. The remaining branches are further developed by identifying more detailed trade studies that need to be made. A whole family of (implicit) alternatives can be represented in a trade tree by a continuous variable. In this example, rover speed or range might be so represented. By treating a variable this way, mathematical optimization techniques can be applied. Note that a trade tree is, in essence, a decision tree without chance nodes. (See the sidebar on decision trees.)
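A trade tree like the one in the sidebar can be represented as a set of layered choices whose combinations enumerate the alternatives, with infeasible branches pruned before detailed study. The Python sketch below is illustrative only; the layers, options, and pruning rule are hypothetical stand-ins for the rover example.

from itertools import product

# Trade tree sketch -- hypothetical layers and options for a rover-like system.
layers = {
    "launch vehicle": ["medium", "heavy"],
    "rover size":     ["small", "large"],
    "autonomy":       ["teleoperated", "high"],
    "power source":   ["solar", "RTG"],
}

# Prune branches judged infeasible a priori ("killer trades") -- illustrative rule.
def feasible(alt):
    return not (alt["rover size"] == "large" and alt["autonomy"] == "teleoperated")

names = list(layers)
alternatives = [dict(zip(names, combo)) for combo in product(*layers.values())]
survivors = [alt for alt in alternatives if feasible(alt)]

print(f"{len(alternatives)} end points before pruning, {len(survivors)} after")
for alt in survivors[:3]:
    print(alt)   # each surviving branch goes forward to more detailed trade studies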
Trade Study Reports

Trade study reports should be prepared for each trade study. At a minimum, each trade study report should identify:

• The system issue under analysis
• System goals and objectives (or requirements, as appropriate to the level of resolution), and constraints
• The measures and measurement methods (models) used
• All data sources used
• The alternatives chosen for analysis
• The computational results, including uncertainty ranges and sensitivity analyses performed
• The selection rule used
• The recommended alternative.

Trade study reports should be maintained as part of the system archives so as to ensure traceability of decisions made through the systems engineering process. Using a generally consistent format for these reports also makes it easier to review and assimilate them into the formal change control process.

5.1.2 Using Models

Models play important and diverse roles in systems engineering. A model can be defined in several ways, including:

• An abstraction of reality designed to answer specific questions about the real world
• An imitation, analogue, or representation of a real world process or structure; or
• A conceptual, mathematical, or physical tool to assist a decision maker.

Together, these definitions are broad enough to encompass physical engineering models used in the verification of a system design, as well as schematic models like a functional flow block diagram and mathematical (i.e., quantitative) models used in the trade study process. This section focuses on the last.

The main reason for using mathematical models in trade studies is to provide estimates of system effectiveness, performance or technical attributes, and cost from a set of known or estimable quantities. Typically, a collection of separate models is needed to provide all of these outcome variables. The heart of any mathematical model is a set of meaningful quantitative relationships among its inputs and outputs. These relationships can be as simple as adding up constituent quantities to obtain a total, or as complex as a set of differential equations describing the trajectory of a spacecraft in a gravitational field. Ideally, the relationships express causality, not just correlation.

Types of Models. There are a number of ways mathematical models can be usefully categorized. One way is according to a model's purpose in the trade study process—that is, what system issue and what level of detail the model addresses, and with which outcome variable or variables the model primarily deals. Other commonly used ways of categorizing mathematical models focus on specific model attributes, such as whether a model is:

• Static or dynamic
• Deterministic or probabilistic (also called stochastic)
• Descriptive or optimizing.
highly accurate results only when they are addressing Another taxonomy can be based on the degree of rigorously quantifiable questions in which theanalytic tractability. At one extreme on this scale, an "physics" is well understood as, for example, a load"analytic" model allows a closed-form solution for a dynamics analysis or a circuit analysis.out-come variable of interest as a function of the In dealing with system issues at the topmodel inputs. At the other extreme, quantification of a level, however, this is seldom the case. There is oftenoutcome variable of interest is at best ordinal, while in a significant difference between the substantivethe middle are many forms of mathematical simulation system cost-effectiveness issues and questions, andmodels. the questions that are mathematically tractable from a Mathematical simulations are a particularly useful modeling perspective. For example, thetype of model in trade studies. These kinds of models program/project manager may ask: "Whats the besthave been successfully used in dealing quantitatively space station we can build in the current budgetarywith large complex systems problems in environment?" The system engineer may try to dealmanufacturing, transportation, and logistics. with that question by translating it into: "For a fewSimulation models are used for these problems plausible station designs, what does each provide itsbecause it is not possible to "solve" the systems users, and how much does each cost?" When theequations analytically to obtain a closed-form solution, system engineer then turns to a model (or models) foryet it is relatively easy to obtain the desired results answers, the results may only be some approximate
Pitfalls in Using Models. Models always embody assumptions about the real world they purport to represent, and they always leave something out. Moreover, they are usually capable of producing highly accurate results only when they are addressing rigorously quantifiable questions in which the "physics" is well understood, as, for example, in a load dynamics analysis or a circuit analysis.

In dealing with system issues at the top level, however, this is seldom the case. There is often a significant difference between the substantive system cost-effectiveness issues and questions, and the questions that are mathematically tractable from a modeling perspective. For example, the program/project manager may ask: "What's the best space station we can build in the current budgetary environment?" The system engineer may try to deal with that question by translating it into: "For a few plausible station designs, what does each provide its users, and how much does each cost?" When the system engineer then turns to a model (or models) for answers, the results may only be some approximate costs and some user resource measures based on a few engineering relationships. The model has failed to adequately address even the system engineer's more limited question, much less the program/project manager's. Compounding this sense of model incompleteness is the recognition that the model's relationships are often chosen for their mathematical convenience, rather than a demonstrated empirical validity. Under this situation, the model may produce insights, but it cannot provide definitive answers to the substantive questions on its own. Often, too, the system engineer must make an engineering interpretation of model results and convey them to the project manager or other decision maker in a way that captures the essence of the original question.

As mentioned earlier, large complex problems often require multiple models to deal with different aspects of evaluating alternative system architectures (and designs). It is not unusual to have separate models to deal with costs and effectiveness, or to have a hierarchy of models—i.e., models to deal with lower level engineering issues that provide useful results to system-level mathematical models. This situation itself can have built-in pitfalls.

One such pitfall is that there is no guarantee that all of the models work together the way the system engineer intends or needs. One submodel's specialized assumptions may not be consistent with the larger model it feeds. Optimization at the subsystem level may not be consistent with system-level optimization. Another such pitfall occurs when a key effectiveness variable is not represented in the cost models.
For example, if spacecraft reliability is a key variable in the system effectiveness equation, and if that reliability does not appear as a variable in the spacecraft cost model, then there is an important disconnect. This is because the models allow the spacecraft designer to believe it is possible to boost the effectiveness with increased reliability without paying any apparent cost penalty. When the models fail to treat such important interactions, the system engineer must ensure that others do not reach false conclusions regarding costs and effectiveness.

Characteristics of a Good Model. In choosing a model (or models) for a trade study, it is important to recognize those characteristics that a good model has. This list includes:

• Relevance to the trade study being performed
• Credibility in the eye of the decision maker
• Responsiveness
• Transparency
• User friendliness.

Both relevance and credibility are crucial to the acceptance of a model for use in trade studies. Relevance is determined by how well a model addresses the substantive cost-effectiveness issues in the trade study. A model's credibility results from the logical consistency of its mathematical relationships, and a history of successful (i.e., correct) predictions. A history of successful predictions lends credibility to a model, but full validation—proof that the model's prediction is in accord with reality—is very difficult to attain since observational evidence on those predictions is generally very scarce. While it is certainly advantageous to use tried-and-true models that are often left as the legacy of previous projects, this is not always possible. Systems that address new problems often require that new models be developed for their trade studies. In that case, full validation is out of the question, and the system engineer must be content with models that have logical consistency and some limited form of outside, independent corroboration.

Responsiveness of a model is a measure of its power to distinguish among the different alternatives being considered in a trade study. A responsive lunar base cost model, for example, should be able to distinguish the costs associated with different system architectures or designs, operations concepts, or logistics strategies.

Another desirable model characteristic is transparency, which occurs when the model's mathematical relationships, algorithms, parameters, supporting data, and inner workings are open to the user. The benefit of this visibility is in the traceability of the model's results. Not everyone may agree with the results, but at least they know how they were derived. Transparency also aids in the acceptance process. It is easier for a model to be accepted when its documentation is complete and open for comment. Proprietary models often suffer from a lack of acceptance because of a lack of transparency.

Upfront user friendliness is related to the ease with which the system engineer can learn to use the model and prepare the inputs to it. Backend user friendliness is related to the effort needed to interpret the model's results and to prepare trade study reports for the tentative selection using the selection rule.

5.1.3 Selecting the Selection Rule

The analytical portion of the trade study process serves to produce specific information on system effectiveness, its underlying performance or technical attributes, and cost (along with uncertainty ranges) for a few alternative system architectures (and later, system designs). These data need to be brought together so that one alternative may be selected. This step is accomplished by applying the selection rule to the data so that the alternatives may be ranked in order of preference.

The structure and complexity of real world decisions in systems engineering often make this ranking a difficult task. For one, securing higher effectiveness almost always means incurring higher costs and/or facing greater uncertainties. In order to choose among alternatives with different levels of effectiveness and costs, the system engineer must understand how much of one is worth in terms of the other. An explicit cost-effectiveness objective function is seldom available to help guide the selection
decision, as any system engineer who has had to make a budget-induced system descope decision will attest.

A second, and major, problem is that an expression or measurement method for system effectiveness may not be possible to construct, even though its underlying performance and technical attributes are easily quantified. These underlying attributes are often the same as the technical performance measures (TPMs) that are tracked during the product development process to gauge whether the system design will meet its performance requirements. In this case, system effectiveness may, at best, have several irreducible dimensions.

What selection rule should be used has been the subject of many books and articles in the decision sciences—management science, operations research, and economics. A number of selection rules are applicable to NASA trade studies. Which one should be used in a particular trade study depends on a number of factors:

• The level of resolution in the system design
• The phase of the project life cycle
• Whether the project maintains an overall system effectiveness model
• How much less-quantifiable, subjective factors contribute to the selection
• Whether uncertainty is paramount, or can effectively be treated as a subordinate issue
• Whether the alternatives consist of a few qualitatively different architectures or designs, or many similar ones that differ only in some quantitative dimensions.

This handbook can only suggest some selection rules for NASA trade studies, and some general conditions under which each is applicable; definitive guidance on which to use in each and every case has not been attempted.

Table 3 first divides selection rules according to the importance of uncertainty in the trade study. This division is reflective of two different classes of decision problems—decisions to be made under conditions of certainty, and decisions to be made under conditions of uncertainty. Uncertainty is an inherent part of systems engineering, but the distinction may be best explained by reference to Figure 2, which is repeated here as Figure 24. In the former class, the measures of system effectiveness, performance or technical attributes, and system cost for the alternatives in the trade study look like those for alternative B. In the latter class, they look like those for alternative C. When they look like those for alternative A, conditions of uncertainty should apply, but often are not treated that way.

The table further divides each of the above classes of decision problems into two further categories: those that apply when cost and effectiveness measures are scalar quantities, and thus suffice to guide the system engineer to the best alternative, and those that apply when cost and effectiveness cannot be represented as scalar quantities.

Selection Rules When Uncertainty Is Subordinate, or Not Considered. Selecting the alternative that maximizes net benefits (benefits minus costs) is the rule used in most cost-benefit analyses. Cost-benefit analysis applies, however, only when the return on a project can be measured in the same units as the costs, as, for example, in its classical application of evaluating water resource projects.
Another selection rule is to choose the alternative that maximizes effectiveness for a given level of cost. This rule is applicable when system effectiveness and system cost can be unambiguously measured, and the appropriate level of cost is known. Since the purpose of the selection rule is to compare and rank the alternatives, practical application requires that each of the alternatives be placed on an
equal cost basis. For certain types of trade studies, this does not present a problem. For example, changing system size or output, or the number of platforms or instruments, may suffice. In other types of trade studies, this may not be possible.

A related selection rule is to choose the alternative that minimizes cost for a given level of effectiveness. This rule presupposes that system effectiveness and system cost can be unambiguously measured, and the appropriate level of effectiveness is known. Again, practical application requires that each of the alternatives be put on an equal effectiveness basis. This rule is dual to the one above in the following sense: for a given level of cost, the same alternative would be chosen by both rules; similarly, for a given level of effectiveness, the same alternative would be chosen by both rules.

When it is not practical to equalize the cost or the effectiveness of competing alternatives, and cost caps or effectiveness floors do not rule out all alternatives save one, then it is necessary to form, either explicitly or implicitly, a cost-effectiveness objective function like the one shown in Figure 4 (Section 2.5). The cost-effectiveness objective function provides a single measure of worth for all combinations of cost and effectiveness. When this selection rule is applied, the alternative with the highest value of the cost-effectiveness objective function is chosen.

Another group of selection rules is needed when cost and/or effectiveness cannot be represented as scalar quantities. To choose the best alternative, a multi-objective selection rule is needed. A multi-objective rule seeks to select the alternative that, in some sense, represents the best balance among competing objectives. To accomplish this, each alternative is measured (by some quantitative method) in terms of how well it achieves each objective. For example, the objectives might be national prestige, upgrade or expansion potential, science data return, low cost, and potential for international partnerships. Each alternative's "scores" against the objectives are then combined in a value function to yield an overall figure of merit for the alternative. The way the scores are combined should reflect the decision maker's preference structure. The alternative that maximizes the value function (i.e., with the highest figure of merit) is then selected. In essence, this selection rule recasts a multi-objective decision problem into one involving a single, measurable objective.

One way, but not the only way, of forming the figure of merit for each alternative is to linearly combine its scores computed for each of the objectives—that is, to compute a weighted sum of the scores. MSFC-HDBK-1912, Systems Engineering (Volume 2), recommends this selection rule. The weights used in computing the figure of merit can be assigned a priori or determined using Multi-Attribute Utility Theory (MAUT). Another technique for forming a figure of merit is the Analytic Hierarchy Process (AHP). Several microcomputer-based commercial software packages are available to automate either MAUT or AHP. If the wrong weights, objectives, or attributes are chosen in either technique, the entire process may obscure the best alternative. Also, with either technique, the individual evaluators may tend to reflect the institutional biases and preferences of their respective organizations. The results, therefore, may depend on the mix of evaluators. (See sidebars on AHP and MAUT.)
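A minimal sketch of the weighted-sum figure of merit described above; the objectives, weights, and scores are hypothetical, and in practice the weights would come from the decision maker, either a priori or via MAUT or AHP.

# Weighted-sum figure of merit -- hypothetical objectives, weights, and scores.
weights = {            # should sum to 1.0; reflects the decision maker's preferences
    "science data return": 0.40,
    "low cost":            0.30,
    "upgrade potential":   0.20,
    "partnership value":   0.10,
}

scores = {             # each alternative scored 0-10 against each objective
    "Alternative A": {"science data return": 8, "low cost": 4, "upgrade potential": 7, "partnership value": 5},
    "Alternative B": {"science data return": 6, "low cost": 8, "upgrade potential": 5, "partnership value": 6},
    "Alternative C": {"science data return": 7, "low cost": 6, "upgrade potential": 6, "partnership value": 9},
}

merit = {alt: sum(weights[obj] * s[obj] for obj in weights) for alt, s in scores.items()}
for alt, fom in sorted(merit.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{alt}: figure of merit = {fom:.2f}")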
The Analytic Hierarchy Process

AHP is a decision technique in which a figure of merit is determined for each of several alternatives through a series of pair-wise comparisons. AHP is normally done in six steps:

(1) Describe in summary form the alternatives under consideration.
(2) Develop a set of high-level evaluation objectives; for example, science data return, national prestige, technology advancement, etc.
(3) Decompose each high-level evaluation objective into a hierarchy of evaluation attributes that clarify the meaning of the objective.
(4) Determine, generally by conducting structured interviews with selected individuals ("experts") or by having them fill out structured questionnaires, the relative importance of the evaluation objectives and attributes through pair-wise comparisons.
(5) Have each evaluator make separate pair-wise comparisons of the alternatives with respect to each evaluation attribute. These subjective evaluations are the raw data inputs to a separately developed AHP program, which produces a single figure of merit for each alternative. This figure of merit is based on relative weights determined by the evaluators themselves.
(6) Iterate the questionnaire and AHP evaluation process until a consensus ranking of the alternatives is achieved.

With AHP, sometimes consensus is achieved quickly; other times, several feedback rounds are required. The feedback consists of reporting the computed values (for each evaluator and for the group) for each option, reasons for differences in evaluation, and identified areas of contention and/or inconsistency. Individual evaluators may choose to change their subjective judgments on both attribute weights and preferences. At this point, inconsistent and divergent preferences can be targeted for more detailed study.

AHP assumes the existence of an underlying preference "vector" (with magnitudes and directions) that is revealed through the pair-wise comparisons. This is a powerful assumption, which may at best hold only for the participating evaluators. The figure of merit produced for each alternative is the result of the group's subjective judgments and is not necessarily a reproducible result. For more information on AHP, see Thomas L. Saaty, The Analytic Hierarchy Process, 1980.
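The pairwise comparisons at the heart of AHP are usually reduced to a priority vector. The sketch below uses the common geometric-mean approximation rather than the principal-eigenvector calculation of Saaty's formulation, applied to a hypothetical comparison matrix for three evaluation objectives.

import math

# AHP-style priority weights from a pairwise comparison matrix (hypothetical).
# Entry [i][j] answers "how much more important is objective i than objective j"
# on a 1-9 scale; the matrix is reciprocal by construction.
objectives = ["science data return", "national prestige", "technology advancement"]
comparisons = [
    [1.0,   3.0,   5.0],
    [1/3.0, 1.0,   2.0],
    [1/5.0, 1/2.0, 1.0],
]

# Geometric mean of each row, normalized, approximates the priority vector.
row_gm = [math.prod(row) ** (1.0 / len(row)) for row in comparisons]
total = sum(row_gm)
priorities = [g / total for g in row_gm]

for obj, w in zip(objectives, priorities):
    print(f"{obj}: weight {w:.3f}")
# The same procedure, applied to comparisons of the alternatives under each
# attribute, yields the scores that are rolled up into a figure of merit.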
Multi-Attribute Utility Theory

MAUT is a decision technique in which a figure of merit (or utility) is determined for each of several alternatives through a series of preference-revealing comparisons of simple lotteries. An abbreviated MAUT decision mechanism can be described in six steps:

(1) Choose a set of descriptive, but quantifiable, attributes designed to characterize each alternative.
(2) For each alternative under consideration, generate values for each attribute in the set; these may be point estimates, or probability distributions if the uncertainty in attribute values warrants explicit treatment.
(3) Develop an attribute utility function for each attribute in the set. Attribute utility functions range from 0 to 1; the least desirable value of an attribute (over its range of plausible values), x_i0, is assigned a utility value of 0, and the most desirable, x_i*, is assigned a utility value of 1. That is, u_i(x_i0) = 0 and u_i(x_i*) = 1. The utility of an attribute value x_i intermediate between the least desirable and most desirable is assessed by finding the value x_i such that the decision maker is indifferent between receiving x_i for sure, or a lottery that yields x_i0 with probability p_i and x_i* with probability 1 - p_i. From the mathematics of MAUT, u_i(x_i) = p_i u_i(x_i0) + (1 - p_i) u_i(x_i*) = 1 - p_i.
(4) Repeat the process of indifference revealing until there are enough discrete points to approximate a continuous attribute utility function.
(5) Combine the individual attribute utility functions to form a multiattribute utility function. This is also done using simple lotteries to reveal indifference between receiving a particular set of attribute values with certainty, or a lottery of attribute values. In its simplest form, the resultant multiattribute utility function is a weighted sum of the individual attribute utility functions.
(6) Evaluate each alternative using the multiattribute utility function.

The most difficult problem with MAUT is getting the decision makers or evaluators to think in terms of lotteries. This can often be overcome by an experienced interviewer. MAUT is based on a set of mathematical axioms about the way individuals should behave when confronted by uncertainty. Logical consistency in ranking alternatives is assured so long as evaluators adhere to the axioms; no guarantee can be made that this will always be the case. An extended discussion of MAUT is given in Keeney and Raiffa, Decisions with Multiple Objectives: Preferences and Value Tradeoffs, 1976. A textbook application of MAUT to a NASA problem can be found in Jeffrey H. Smith, et al., An Application of Multiattribute Decision Analysis to the Space Station Freedom Program, Case Study: Automation and Robotics Technology Evaluation, 1990.
This selection rule is used extensively by Source Evaluation Boards (SEBs) in the NASA procurement process. Each proposal, from among those meeting specific technical objectives (requirements), is scored on such attributes as technical design, price, systems engineering process quality, etc. In applying this rule, the attributes being scored by the SEB are known to the bidders, but their weighting may not be. (See NHB 5103.6B.)
In trade studies where no measure of system effectiveness can be constructed, but performance or technical attributes can be quantified, a possible selection rule is to choose the alternative that minimizes cost for given levels of performance or technical attributes. This rule presupposes that system cost can be unambiguously measured, and is related to all of the quantified performance or technical attributes that are considered constraints. Practical application again requires that all of the alternatives be put on an equal basis with respect to the performance or technical attributes. This may not be practical for trade studies in which the alternatives cannot be described by a set of continuous mathematical relationships.

Selection Rules When Uncertainty Predominates. When the measures of system effectiveness, performance or technical attributes, and system cost for the alternatives in the trade study look like those for alternative C in Figure 22, the selection of the best alternative may need to be handled differently. This is because of the general propensity of decision makers to show risk-averse behavior when dealing with large variations in cost and/or effectiveness outcomes. In such cases, the expected value (i.e., the mean) of some stochastic outcome variable is not a satisfactory point measure of that variable.
To handle this class of decision problem, the system engineer may wish to invoke a von Neumann-Morgenstern selection rule. In this case, alternatives are treated as "gambles" (or lotteries). The probability of each outcome is either known or can be subjectively estimated, usually by creating a decision tree. The von Neumann-Morgenstern selection rule applies a separately developed utility function to each outcome, and chooses the alternative that maximizes the expected utility. This selection rule is easy to apply when the lottery outcomes can be measured in dollars. Although multi-attribute cases are more complex, the principle remains the same.
The basis for the von Neumann-Morgenstern selection rule is a set of mathematical axioms about
how individuals should behave when confronted by uncertainty. Practical application of this rule requires an ability to enumerate each "state of nature" (hereafter, simply called "state"), knowledge of the outcome associated with each enumerated state for each alternative, the probabilities for the various states, and a mathematical expression for the decision maker's utility function. This selection rule has also found use in the evaluation of system procurement alternatives. See Section 4.6.3 for a discussion of some related topics, including decision analysis, decision trees, and probabilistic risk assessment.
Another selection rule for this class of decision problem is called the minimax rule. To apply it, the system engineer computes a loss function for each enumerated state for each alternative. This rule chooses the alternative that minimizes the maximum loss. Practical application requires an ability to enumerate each state and define the loss function. Because of its "worst case" feature, this rule has found some application in military systems.

5.1.4 Trade Study Process: Summary

System architecture and design decisions will be made. The purpose of the trade study process is to ensure that they move the design toward an optimum. The basic steps in that process are:

• Understand what the system's goals, objectives, and constraints are, and what the system must do to meet them—that is, understand the functional requirements in the operating environment.
• Devise some alternative means to meet the functional requirements. In the early phases of the project life cycle, this means focusing on system architectures; in later phases, emphasis is given to system designs.
• Evaluate these alternatives in terms of the outcome variables (system effectiveness, its underlying performance or technical attributes, and system cost). Mathematical models are useful in this step not only for forcing recognition of the relationships among the outcome variables, but also for helping to determine what the performance requirements must be quantitatively.
• Rank the alternatives according to an appropriate selection rule.
• Drop less-promising alternatives and proceed to the next level of resolution, if needed.

This process cannot be done as an isolated activity. To make it work effectively, individuals with different skills—system engineers, design engineers, specialty engineers, program analysts, decision scientists, and project managers—must cooperate. The right quantitative methods and selection rule must be used. Trade study assumptions, models, and results must be documented as part of the project archives.
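The following Python fragment is a minimal, hypothetical illustration of two of the selection rules discussed above: the von Neumann-Morgenstern (expected utility) rule and a minimax rule. The states, probabilities, outcomes, and utility function are all invented for the sketch, and regret (shortfall from the best outcome in each state) is used as one possible loss function.

    # Minimal sketch (hypothetical numbers) of two selection rules under uncertainty.
    states = ["benign", "nominal", "severe"]      # enumerated "states of nature"
    probabilities = [0.3, 0.5, 0.2]               # subjective state probabilities

    # Outcome (say, net benefit in $M) of each alternative under each state.
    outcomes = {
        "Design A": [120, 90, 10],
        "Design B": [80, 75, 60],
    }

    def utility(x):
        """A concave (risk-averse) utility; in practice this is elicited from the decision maker."""
        return x ** 0.5

    def expected_utility(alt):
        return sum(p * utility(x) for p, x in zip(probabilities, outcomes[alt]))

    def max_loss(alt):
        """Loss taken here as regret: shortfall from the best achievable outcome in each state."""
        best = [max(outcomes[a][i] for a in outcomes) for i in range(len(states))]
        return max(best[i] - outcomes[alt][i] for i in range(len(states)))

    vnm_choice = max(outcomes, key=expected_utility)   # von Neumann-Morgenstern rule
    minimax_choice = min(outcomes, key=max_loss)       # minimax rule
    print(vnm_choice, minimax_choice)

With these particular numbers the two rules disagree (Design A maximizes expected utility, Design B minimizes the maximum loss), which is exactly why the choice of selection rule matters.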
5.2 Cost Definition and Modeling

This section deals with the role of costs in the systems analysis and engineering process, how to measure them, how to control them, and how to obtain estimates of them. The reason costs and their estimates are of great importance in systems engineering goes back to the principal objective of systems engineering: fulfilling the system's goals in the most cost-effective manner. The cost of each alternative should be one of the most important outcome variables in trade studies performed during the systems engineering process.
One role, then, for cost estimates is in helping to choose rationally among alternatives. Another is as a control mechanism during the project life cycle. Cost measures produced for project life cycle reviews are important in determining whether the system goals and objectives are still deemed valid and achievable, and whether constraints and boundaries are worth maintaining. These measures are also useful in determining whether system goals and objectives have properly flowed down through to the various subsystems.
As system designs and operational concepts mature, cost estimates should mature as well. At each review, cost estimates need to be presented and compared to the funds likely to be available to complete the project. The cost estimates presented at early reviews must be given special attention since they usually form the basis on which authority to proceed with the project is given. The system engineer must be able to provide realistic cost estimates to the project manager. In the absence of such estimates, overruns are likely to occur, and the credibility of the entire system development process, both internal and external, is threatened.

5.2.1 Life-Cycle Cost and Other Cost Measures

A number of questions need to be addressed so that costs are properly treated in systems analysis and engineering. These questions include:

• What costs should be counted?
• How should costs occurring at different times be treated?
• What about costs that cannot easily be measured in dollars?

What Costs Should Be Counted. The most comprehensive measure of the cost of an alternative is its life-cycle cost. According to NHB 7120.5, a system's life-cycle cost is "the total of the direct, indirect, recurring, nonrecurring, and other related costs incurred, or estimated to be incurred, in the design, development, production, operation, maintenance, support, and retirement [of it] over its planned life span." A less formal definition of a system's life-cycle cost is the total cost of acquiring, owning, and disposing of it over its entire lifetime. System life-cycle cost should be estimated and used in the evaluation of alternatives during trade studies.
The system engineer should include in the life-cycle cost those resources, like civil service work-years, that may not require explicit project expenditures. A system's life-cycle cost, when properly computed, is the best measure of its cost to NASA.
Life-cycle cost has several components, as shown in Figure 25. Applying the informal definition above, life-cycle cost consists of (a) the costs of acquiring a usable system, (b) the costs of operating and supporting it over its useful life, and (c) the cost of disposing of it at the end of its useful life.
The system acquisition cost includes more than the DDT&E and procurement of the hardware and software; it also includes the other start-up costs resulting from the need for initial training of personnel, initial spares, the system's technical documentation, support equipment, facilities, and any launch services needed to place the system at its intended operational site.
The costs of operating and supporting the system include, but are not limited to, operations personnel and supporting activities, ongoing integrated logistics support, and pre-planned product improvement. For a major system, these costs are often substantial on an annual basis, and when accumulated over years of operations can constitute the majority of life-cycle cost.
At the start of the project life cycle, all of these costs lie in the future. At any point in the project life cycle, some costs will have been expended. These expended resources are known as sunk costs. For the purpose of doing trade studies, the sunk costs of any alternative under consideration are irrelevant, no matter how large. The only costs relevant to current design trades are those that lie in the future. The logic is straightforward: the way resources were spent in the past cannot be changed. Only decisions regarding the way future resources are spent can be made. Sunk costs may alter the cost of continuing with a particular alternative relative to others, but when choosing among alternatives, only those costs that remain should be counted.
At the end of its useful life, a system may have a positive residual or salvage value. This value exists if the system can be sold, bartered, or used by another system. This value needs to be counted in life-cycle cost, and is generally treated as a negative cost.

Costs Occurring Over Time. The life-cycle cost combines costs that typically occur over a period of several years. Costs incurred in different years cannot be treated the same because they, in fact, represent different resources to society. A dollar wisely invested today will return somewhat more than a dollar next year. Treating a dollar today the same as a dollar next year ignores this potential trade.
Discounting future costs is a way of making costs occurring in different years commensurable. When applied to a stream of future costs, the discounting procedure yields the present discounted value (PDV) of that stream. The effect of discounting is to reduce the contribution of costs incurred in the future relative to costs incurred in the near term. Discounting should be performed whether or not there is inflation, though care must be taken to ensure the right discount rate is used. (See sidebar on PDV.)
In trade studies, different alternatives often have cost streams that differ with respect to time. One alternative with higher acquisition costs than another may offer lower operations and support costs. Without discounting, it would be difficult to know which stream truly represents the lower life-cycle cost. Trade studies should report the PDV of life-cycle cost for each alternative as an outcome variable.
Calculating Present Discounted Value

Calculating the PDV is a way of reducing a stream of costs to a single number so that alternative streams can be compared unambiguously. Several formulas for PDV are used, depending on whether time is to be treated as a discrete or a continuous variable, and whether the project's time horizon is finite or not. The following equation is useful for evaluating system alternatives when costs have been estimated as yearly amounts, and the project's anticipated useful life is T years. For alternative i,

PDVi = Σ (from t = 0 to T) of Cit (1 + r)^-t

where r is the annual discount rate and Cit is the estimated cost of alternative i in year t.
Once the yearly costs have been estimated, the choice of the discount rate is crucial to the evaluation since it ultimately affects how much or how little runout costs contribute to the PDV. While calculating the PDV is generally accepted as the way to deal with costs occurring over a period of years, there is much disagreement and confusion over the appropriate discount rate to apply in systems engineering trade studies. The Office of Management and Budget (OMB) has mandated the use of a rate of seven percent for NASA systems when constant dollars (dollars adjusted to the price level as of some fixed point in time) are used in the equation. When nominal dollars (sometimes called then-year, runout, or real-year dollars) are used, the OMB-mandated annual rate should be increased by the inflation rate assumed for that year. Either approach yields essentially the same PDV. For details, see OMB Circular A-94, Guidelines and Discount Rates for Benefit-Cost Analysis of Federal Programs, October 1992.
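A minimal Python sketch of the PDV equation in the sidebar is shown below. The two cost streams and the project length are hypothetical; only the seven percent discount rate comes from the OMB guidance cited above.

    # Minimal sketch (hypothetical numbers): present discounted value of yearly cost streams.
    def pdv(costs_by_year, r):
        """costs_by_year[t] is the cost in year t (t = 0, 1, ..., T); r is the annual discount rate."""
        return sum(c / (1 + r) ** t for t, c in enumerate(costs_by_year))

    # Two hypothetical alternatives ($M per year): one front-loaded, one with higher O&S costs.
    alt_1 = [100, 80, 40, 10, 10, 10, 10, 10]   # higher acquisition, lower operations
    alt_2 = [60, 50, 30, 30, 30, 30, 30, 30]    # lower acquisition, higher operations

    for name, stream in [("Alternative 1", alt_1), ("Alternative 2", alt_2)]:
        print(name, round(pdv(stream, r=0.07), 1))   # 7 percent real (constant-dollar) rate

Comparing the two PDVs, rather than the undiscounted totals, is what makes streams with different time profiles commensurable.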
Difficult-To-Measure Costs. In practice, some costs pose special problems. These special problems, which are not unique to NASA systems, usually occur in two areas: (a) when alternatives have differences in the irreducible chances of loss of life, and (b) when externalities are present. Two examples of externalities that impose costs are pollution caused by some launch systems and the creation of orbital debris. Because it is difficult to place a dollar figure on these resource uses, they are generally called incommensurable costs. The general treatment of these types of costs in trade studies is not to ignore them, but instead to keep track of them along with dollar costs.

5.2.2 Controlling Life-Cycle Costs

The project manager/system engineer must ensure that the system life-cycle cost (established at the end of Phase A) is initially compatible with NASA's budget and strategic priorities and that it demonstrably remains so over the project life cycle. According to NHB 7120.5, every NASA program/project must:

• Develop and maintain an effective capability to estimate, assess, monitor, and control its life-cycle cost throughout the project life cycle
• Relate life-cycle cost estimates to a well-defined technical baseline, detailed project schedule, and set of cost-estimating assumptions
• Identify the life-cycle costs of alternative levels of system requirements and capability
• Report time-phased acquisition cost and technical parameters to NASA Headquarters.

There are a number of actions the system engineer can take to effect these objectives. Early decisions in the systems engineering process tend to have the greatest effect on the resultant system life-cycle cost. Typically, by the time the preferred system architecture is selected, between 50 and 70 percent of the system's life-cycle cost has been "locked in." By the time a preliminary system design is selected, this figure may be as high as 90 percent. This presents a major dilemma to the system engineer, who must lead this selection process. Just at the time when decisions are most critical, the state of information about the alternatives is least certain. Uncertainty about costs is a fact of systems engineering.
This suggests that efforts to acquire better information about the life-cycle cost of each alternative early in the project life cycle (Phases A and B) potentially have very high payoffs. The system engineer needs to understand what the principal life-cycle cost drivers are. Some major questions to consider are: How much does each alternative already rely on well-understood technology? Can the system be manufactured using routine processes, or are higher precision processes required? What tests are needed to verify and validate each alternative system design, and how costly are they? What reliability levels are needed by each alternative? What environmental and safety requirements must be satisfied?
For a system whose operational life is expected to be long and to involve complex activities, the life-cycle cost is likely to be far greater than the acquisition costs alone. Consequently, it is particularly important with such a system to bring in the specialty engineering disciplines such as reliability, maintainability, supportability, and operations engineering early in the systems engineering process, as they are essential to proper life-cycle cost estimation.
Another way of acquiring better information on the cost of alternatives is for the project to have independent cost estimates prepared for comparison purposes.
Another mechanism for controlling life-cycle cost is to establish a life-cycle cost management program as part of the project's management approach. (Life-cycle cost management has traditionally been called design-to-life-cycle cost.) Such a program establishes life-cycle cost as a
design goal, perhaps with sub-goals for acquisition costs or operations and support costs. More specifically, the objectives of a life-cycle cost management program are to:

• Identify a common set of ground rules and assumptions for life-cycle cost estimation
• Ensure that best-practice methods, tools, and models are used for life-cycle cost analysis
• Track the estimated life-cycle cost throughout the project life cycle, and, most important
• Integrate life-cycle cost considerations into the design and development process via trade studies and formal change request assessments.

Trade studies and formal change request assessments provide the means to balance the effectiveness and life-cycle cost of the system. The complexity of integrating life-cycle cost considerations into the design and development process should not be underestimated, but neither should the benefits, which can be measured in terms of greater cost-effectiveness. The existence of a rich set of potential life-cycle cost trades makes this complexity even greater.
The Space Station Alpha Program provides many examples of such potential trades. As one example, consider the life-cycle cost effect of increasing the mean time between failures (MTBF) of Alpha's Orbital Replacement Units (ORUs). This is likely to increase the acquisition cost, and may increase the mass of the station. However, annual maintenance hours and the weight of annual replacement spares will decline. The same station availability may be achieved with fewer on-orbit spares, thus saving precious internal volume used for spares storage. For ORUs external to the station, the amount of extravehicular activity, with its associated logistics support, will also decline. With such complex interactions, it is difficult to know what the optimum point is. At a minimum, the system engineer must have the capability to assess the life-cycle cost of each alternative. (See Appendix B.8 on the operations cost effects of ORU MTBF, and Section 6.5 on Integrated Logistics Support.)
5.2.3 Cost Estimating

The techniques used to estimate each life-cycle cost component usually change as the project life cycle proceeds. Methods and tools used to support budget estimates and life-cycle cost trades in Phase A may not be sufficiently detailed to support those activities during Phase C/D. Further, as the project life cycle proceeds, the requirements and the system design mature as well, revealing greater detail in the Work Breakdown Structure (WBS). This should enable the application of cost estimating techniques at a greater resolution.
Three techniques are described below—parametric cost models, analogy, and grass-roots. Typically, the choice of technique depends on the state of information available to the cost analyst at each point in the project life cycle. Table 4 shows this dependence.

Statistical Cost Estimating Relationships: Example and Pitfalls

One model familiar to most cost analysts is the historically based CER. In its usual form, this model is a linear expression with cost (the dependent variable) as a function of one or more descriptive characteristics. The coefficients of the linear expression are estimated by fitting historical data from previously completed projects of a similar nature using statistical regression techniques. This type of model is analytic and deterministic. An example of this type of model for estimating the first unit cost, C, of a space-qualified Earth-orbiting receiver/exciter is:

ln C = 3.822 + 1.110 ln W + 0.436 ln z

where W is the receiver/exciter's weight, and z is the number of receiver boxes; ln is the natural logarithm function. (Source: U.S. Air Force Systems Command-Space Division, Unmanned Space Vehicle Cost Model, Seventh Edition, August 1994.)
CERs are used extensively in advanced technology systems, and have been challenged on both theoretical and practical grounds. One challenge can be mounted on the basis of the assumption of an unchanging relationship between cost and the independent variables. Others have questioned the validity of CERs based on weight, a common independent variable in many models, in light of advances in electronic packaging and composite materials. Objections to using statistical CERs also include problems of input accuracy, low statistical significance due to limited data points, ignoring the statistical confidence bands, and, lastly, biases in the underlying data.
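The CER quoted in the sidebar is equivalent to C = e^3.822 W^1.110 z^0.436, and can be evaluated directly, as in the minimal Python sketch below. The weight and box count are hypothetical, and the source model's units for W and for the resulting cost C are not given in this excerpt.

    # Minimal sketch: evaluating the example receiver/exciter CER quoted above.
    import math

    def first_unit_cost(weight, num_boxes):
        """ln C = 3.822 + 1.110 ln W + 0.436 ln z  (statistical CER form)."""
        ln_c = 3.822 + 1.110 * math.log(weight) + 0.436 * math.log(num_boxes)
        return math.exp(ln_c)

    print(round(first_unit_cost(weight=25.0, num_boxes=2), 1))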
Parametric (or "top-down") cost models are most useful when only a few key variables are known or can be estimated. The most common example of a parametric model is the statistical Cost Estimating Relationship (CER). A single equation (or set of equations) is derived from a set of historical data relating one or more of a system's characteristics to its cost using well-established statistical methods. A number of statistical CERs have been developed to estimate a spacecraft's hardware acquisition cost. These typically use an estimate of its weight and other characteristics, such as design complexity and inheritance, to obtain an estimate of cost. Similarly, software CERs have been developed as well, relying on judgments about source lines of code and other factors to obtain development costs. (See sidebar on statistical CERs.)
Another type of parametric model relies on accepted relationships. One common example can be found in the application of logistics relationships to the estimation of repair costs and initial and recurring spares costs. The validity of these cost estimates also depends on the quality of the input parameters.
The principal advantages of parametric cost models are that the results are reproducible, are more easily documented than other methods, and often can be produced with the least amount of time and effort. This makes a properly constructed performance-based parametric cost model useful in early trade studies.
Analogy is another way of estimating costs. When a new system or component has functional and performance characteristics similar to an existing one whose cost is known, the known cost can be adjusted to reflect engineering judgments about differences.
Grass-roots (or "bottom-up") estimates are the result of rolling up the costs estimated by each organization performing work described in the WBS. Properly done, grass-roots estimates can be quite accurate, but each time a "what if" question is raised, a new estimate needs to be made. Each change of assumptions voids at least part of the old estimate. Because the process of obtaining grass-roots estimates is typically time-consuming and labor-intensive, the number of such estimates that can be prepared during trade studies is in reality severely limited.
Whatever technique is used, the direct cost of each hardware and software element often needs to be "wrapped" (multiplied by a factor greater than one) to cover the costs of integration and test, program management, systems engineering, etc. These additional costs are called system-level costs, and are often calculated as percentages of the direct costs.

Using Parametric Cost Models. A number of parametric cost models are available for costing NASA systems. Some of these are shown in Table 5. Unfortunately, none alone is sufficient to estimate life-cycle cost. Assembling an estimate of life-cycle cost often requires that several different models (along with the other two techniques) be used together. To integrate the costs being estimated by these different models, the system engineer should ensure that the inputs to and assumptions of the models are consistent, that all relevant life-cycle cost components are covered, and that the timing of costs is correct.
The system engineer may sometimes find it necessary to make some adjustments to model results to achieve a life-cycle cost estimate. One such situation occurs when the results of different models, whose estimates are expressed in different-year constant dollars, must be combined. In that case, an appropriate inflation factor must be applied. Another such situation arises when a model produces a cost estimate for the first unit of a hardware item, but the project requires multiple units. In that case, a learning curve can be applied to the first unit cost to obtain the required multiple-unit estimate. (See sidebar on learning curve theory.)
A third situation requiring additional calculation occurs when a model provides a cost estimate of the total acquisition effort, but doesn't take into account the multi-year nature of that effort. The system engineer can use a set of "annual cost spreaders" based on the typical ramping-up and subsequent ramping-down of acquisition costs for that type of project. (See sidebar on beta curves.)
Learning Curve Theory

The learning curve (also known as the progress or experience curve) is the time-honored way of dealing with the empirical observation that the unit cost of fabricating multiple units of complex systems like aircraft and spacecraft tends to decline as the number of units produced increases. In its usual form, the theory states that as the total quantity produced doubles, the cost per unit decreases by a constant percentage. The cost per unit may be either the average cost over the number produced, or the cost of the last unit produced. In the first case, the curve is generally known as the cumulative average learning curve; in the second case, it is known as the unit learning curve. Both formulations have essentially the same rate of learning. Let C(1) be the unit cost of the first production unit, and C(Q) be the unit cost of the Qth production unit; then learning curve theory states there is a number, b, such that

C(Q) = C(1) Q^b

The number b is specified by the rate of learning. A 90 percent learning rate means that the unit cost of the second production unit is 90 percent of the first production unit cost; the unit cost of the fourth is 90 percent of the unit cost of the second, and so on. In general, the ratio of C(2Q) to C(Q) is the learning rate, LR, expressed as a decimal; using the above equation, b = ln(LR)/ln 2, where ln is the natural logarithm.
Learning curve theory may not always be applicable because, for example, the time rate of production has no effect on the basic equation. For more detail on learning curves, including empirical studies and tables for various learning rates, see Harold Asher, Cost-Quantity Relationships in the Airframe Industry, R-291, The Rand Corporation, 1956.

An Example of a Cost Spreader Function: The Beta Curve

One technique for spreading estimated acquisition costs over time is to apply the beta curve. This fifth-degree polynomial, which was developed at JSC in the late 1960s, expresses the cumulative cost fraction as a function of the cumulative time fraction, T:

Cum Cost Fraction = 10T^2 (1 - T)^2 (A + BT) + T^4 (5 - 4T),  for 0 ≤ T ≤ 1.

A and B are parameters (with 0 ≤ A + B ≤ 1) that determine the shape of the beta curve. In particular, these parameters control what fraction of the cumulative cost has been expended when 50 percent of the cumulative time has been reached. The figure below shows three examples: with A = 1 and B = 0 as in curve (1), 81 percent of the costs have been expended at 50 percent of the cumulative time; with A = 0 and B = 1 as in curve (2), 50 percent of the costs have been expended at 50 percent of the cumulative time; in curve (3) with A = B = 0, it's 19 percent. Typically, JSC uses a 50 percent profile with A = 0 and B = 1, or a 60 percent profile with A = 0.32 and B = 0.68, based on data from previous projects.
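The following Python fragment is a minimal sketch of the two sidebar formulas: a unit learning curve and the JSC beta curve cost spreader. The first unit cost, total acquisition cost, buy quantity, and project length are hypothetical.

    # Minimal sketch (hypothetical numbers) of the learning curve and beta curve formulas above.
    import math

    def unit_cost(q, first_unit_cost, learning_rate=0.90):
        """C(Q) = C(1) * Q**b, with b = ln(LR)/ln(2)."""
        b = math.log(learning_rate) / math.log(2)
        return first_unit_cost * q ** b

    def beta_cum_fraction(t, a, b):
        """Cumulative cost fraction at cumulative time fraction t (0 <= t <= 1), 0 <= a + b <= 1."""
        return 10 * t**2 * (1 - t)**2 * (a + b * t) + t**4 * (5 - 4 * t)

    # Total cost of a 4-unit buy at a 90 percent learning rate, first unit = $10M (hypothetical).
    total = sum(unit_cost(q, 10.0) for q in range(1, 5))

    # Spread a $100M (hypothetical) acquisition over 5 years with the 60 percent profile.
    yearly = [100.0 * (beta_cum_fraction((y + 1) / 5, 0.32, 0.68) - beta_cum_fraction(y / 5, 0.32, 0.68))
              for y in range(5)]
    print(round(total, 1), [round(c, 1) for c in yearly])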
acquisition effort, but doesn't take into account the multi-year nature of that effort. Although some general parametric cost models for space systems are already available, their proper use usually requires a considerable investment in learning time. For projects outside of the domains of these existing cost models, new cost models may be needed to support trade studies. Efforts to develop these need to begin early in the project life cycle to ensure their timely application during the systems engineering process. Whether existing models or newly created ones are used, the SEMP and its associated life-cycle cost management plan should identify which (and how) models are to be used during each phase of the project life cycle.

5.3 Effectiveness Definition and Modeling

The concept of system effectiveness is more elusive than that of cost. Yet, it is also one of the most important factors to consider in trade studies. In selecting among alternatives, the system engineer must take into account system effectiveness, even when it is difficult to define and measure reliably.
A measure of system effectiveness describes the accomplishment of the system's goals and objectives quantitatively. Each system (or family of systems with identical goals and objectives) has its own measure of system effectiveness. There is no universal measure of effectiveness for NASA systems, and no natural units with which to express effectiveness. Further, effectiveness is dependent on the context (i.e., project or supersystem) in which the system is being operated, and any measure of it must take this into account. The system engineer can, however, exploit a few basic, common features of system effectiveness in developing strategies for measuring it.

5.3.1 Strategies for Measuring System Effectiveness

System effectiveness is almost always multifaceted, and is typically the result of the combined effects of:

• System output quality
• Size or quantity of system output
• System coverage or comprehensiveness
• System output timeliness
• System availability.
A measure of effectiveness and its measurement method (i.e., model) should focus on the critical facet (or facets) of effectiveness for the trade study issue under consideration. Which facets are critical can often be deduced from the accompanying functional analysis. The functional analysis is also very useful in helping to identify the underlying system performance or technical attributes that mathematically determine system effectiveness. (Note that each of the above facets may have several dimensions. If this is the case, then each dimension can be considered a function of the underlying system performance or technical attributes.) Ideally, there is a strong connection between the system functional analysis, the system effectiveness measure, and the functional and performance requirements. The same functional analysis that results in the functional requirements flowdown also yields the system effectiveness and performance measures that are optimized (through trade studies) to produce the system performance requirements.
An effectiveness measurement method or model should provide trustworthy relationships between these underlying performance or technical attributes and the measure of system effectiveness. Early in the project life cycle, the effectiveness model may embody simple parametric relationships among the high-level performance and technical attributes and the measure of system effectiveness. In the later phases of the project life cycle, the effectiveness model may use more complex relationships requiring more detailed, specific data on operational scenarios and on each of the alternatives. In other words, early effectiveness modeling during architecture trade studies may take a functional view, while later modeling during design trade studies may shift to a product view. This is not unlike the progression of cost modeling from simple parametrics to more detailed grass-roots estimates.

Practical Pitfalls in Using Effectiveness Measures in Trade Studies

Obtaining trustworthy relationships among the system performance or technical attributes and system effectiveness is often difficult. Purported effectiveness models often treat only one or two of the facets described in the text. Supporting models may not have been properly integrated. Data are often incomplete or unreliable. Under these conditions, reported system effectiveness results for different alternatives in a trade study may show only the relative effectiveness of the alternatives within the context of the trade study. The system engineer must recognize the practical pitfalls of using such results.

The system engineer must tailor the effectiveness measure and its measurement method to the resolution of the system design and operational concept. As these mature, effectiveness estimates should mature as well. The system engineer must be able to provide realistic estimates of system effectiveness and its underlying performance and technical attributes not only for trade studies, but for project management through the tracking of TPMs.
This discussion so far has been predicated on one accepted measure of system effectiveness.
The job of computing system effectiveness is considerably easier when the system engineer has a single measure and measurement method (model). But, as with costs, a single measure may not be possible. When it does not exist, the system engineer must fall back to computing the critical high-level, but nevertheless still underlying, system performance or technical attributes. In effect, these high-level performance or technical attributes are elevated to the status of measures of (system) effectiveness (MoEs) for trade study purposes, even though they do not represent a truly comprehensive measure of system effectiveness.
These high-level performance or technical attributes might represent one of the facets described above, or they may be only components of one. They are likely to require knowledge or estimates of lower-order performance or technical attributes. Figure 26 shows how system effectiveness might look in a hierarchical tree structure. This figure corresponds, in some sense, to Figure 25 on life-cycle cost, though rolling up by simple addition obviously does not apply to system effectiveness.
Lastly, it must be recognized that system effectiveness, like system cost, is uncertain. This fact is given a fuller treatment in Section 5.4.

5.3.2 NASA System Effectiveness Measures

The facets of system effectiveness in Figure 26 are generic. Not all must apply to a particular system. The system engineer must determine which performance or technical attributes make up system effectiveness, and how they should be combined, on a system-by-system basis. Table 6 provides examples of how each facet of system effectiveness could be interpreted for specific classes of NASA flight systems. No attempt has been made to enumerate all possible performance or technical attributes, or to fill in each possible entry in the table; its purpose is illustrative only.
For many of the systems shown in the table, system effectiveness is largely driven by continual (or continuous) operations at some level of output over a period of years. This is in contradistinction to an Apollo-type project, in which the effectiveness is largely determined by the successful completion of a single flight within a clearly specified time horizon. The measures of effectiveness in these two cases are correspondingly different. In the former case (with its lengthy operational phase and continual output), system effectiveness measures need to incorporate quantitative measures of availability. The system engineer accomplishes that through the involvement of the specialty engineers and the application of specialized models described in the next section.

5.3.3 Availability and Logistics Supportability Modeling

One reason for emphasizing availability and logistics supportability in this chapter is that future NASA systems are less likely to be of the "launch-and-forget" type. To the extent that logistics support considerations are major determinants of system effectiveness during operations, it is essential that logistics support be thoroughly analyzed in trade studies during the earlier phases of the project life cycle. A second reason is that availability and logistics supportability have been rich domains for methodology and model development. The increasing sophistication of the methods and models has allowed the system-wide effects of different support
alternatives to be more easily predicted. In turn, this means more opportunities to improve system effectiveness (or to lower life-cycle cost) through the integration of logistics considerations in the system design.
Availability models relate system design and integrated logistics support technical attributes to the availability component of the system effectiveness measure. This type of model predicts the resulting system availability as a function of the system component failure and repair rates and the logistics support resources and policies. (See sidebar on measures of availability.)
Logistics supportability models relate system design and integrated logistics support technical attributes to one or more "resource requirements" needed to operate the system in the accomplishment of its goals and objectives. This type of model focuses, for example, on the system maintenance requirements, number and location of spares, processing facility requirements, and even optimal inspection policies. In the past, logistics supportability models have typically been based on measures pertaining to that particular resource or function alone. For example, a system's desired inventory of spares was determined on the basis of meeting measures of supply efficiency, such as percent of demands met. This tended to lead to suboptimal resource requirements from the system's point of view. More modern models of logistics supportability base resource requirements on the system availability effects. (See sidebar on logistics supportability models.)
Some availability models can be used to determine a logistics resource requirement by computing the quantity of that resource needed to achieve a particular level of availability, holding other logistics resources fixed. The line between availability models and logistics supportability models can be inexact. Some logistics supportability models may deal with a single resource; others may deal with several resources simultaneously. They may take the form of a simple spreadsheet or a large computer simulation. Greater capability from these types of models is generally achieved only at greater expense in time and effort. The system engineer must determine what availability and logistics supportability models are needed for each new system, taking into account the unique operations and logistics concepts and environment of that system. Generally both types of models are needed in the trade study process to transform specialty engineering data into forms more useful to the system engineer. Which availability and logistics supportability models are used during each phase of the project life cycle should be identified in the SEMP.
Another role for these models is to provide quantitative requirements for incorporation into the system's formal Integrated Logistics Support (ILS) Plan. Figure 27 shows the role of availability and logistics supportability models in the trade study process.
Essential to obtaining useful products from any availability and/or logistics supportability model is the collection of high quality specialty engineering data for each alternative system design. (Some of these data are also used in probabilistic risk assessments performed in risk management activities.) The system engineer must coordinate efforts to collect and maintain these data in a format suitable to the trade studies being performed. This task is made considerably easier by using digital databases in relational table formats such as the one currently under development for MIL-STD-1388-2B.
Measures of Availability

Availability can be calculated as the ratio of operating time to total time, where the denominator, total time, can be divided into operating time ("uptime") and "downtime." System availability depends on any factor that contributes to downtime. Underpinning system availability, then, are the reliability and maintainability attributes of the system design, but other logistics support factors can also play significant roles. If these attributes and support factors, and the operating environment of the system, are unchanging, then several measures of steady-state availability can be readily calculated. (When steady-state conditions do not apply, availability can be calculated, but is made considerably more complex by the dynamic nature of the underlying conditions.) The system engineer should be familiar with the equations below describing three concepts of steady-state availability for systems that can be repaired.

• Inherent = MTTF / (MTTF + MTTR)
• Achieved = MTTMA / (MTTMA + MMT)
• Operational = MTTMA / (MTTMA + MMT + MLDT) = MTTMA / (MTTMA + MDT)

where:
MTTF = Mean time to failure
MTTR = Mean time to repair (corrective)
MTTMA = Mean time to a maintenance action (corrective and preventive)
MMT = Mean (active) maintenance time (corrective and preventive)
MLDT = Mean logistics delay time (includes downtime due to administrative delays, and waiting for spares, maintenance personnel, or supplies)
MDT = Mean downtime (includes downtime due to (active) maintenance and logistics delays)

Availability measures can also be calculated at a point in time, or as an average over a period of time. A further, but manageable, complication in calculating availability takes into account degraded modes of operation for redundant systems. For systems that cannot be repaired, availability and reliability are equal. (See sidebar on page 92.)

Continuing availability and logistics supportability modeling and data collection through the operations phase permits operations trend analysis and assessment on the system (e.g., is system availability declining or improving?). In general, this kind of analysis and assessment is extremely useful in identifying potential areas for product improvement, such as greater system reliability, lower cost logistics support, and better maintenance and spares policies. (See Section 6.5 for more on Integrated Logistics Support.)
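A minimal Python sketch of the three steady-state availability measures defined in the sidebar follows. All of the input values (in hours) are hypothetical.

    # Minimal sketch (hypothetical numbers, hours) of the steady-state availability measures.
    MTTF = 2000.0    # mean time to failure
    MTTR = 8.0       # mean time to repair (corrective)
    MTTMA = 1500.0   # mean time to a maintenance action (corrective and preventive)
    MMT = 6.0        # mean active maintenance time
    MLDT = 30.0      # mean logistics delay time

    inherent = MTTF / (MTTF + MTTR)
    achieved = MTTMA / (MTTMA + MMT)
    operational = MTTMA / (MTTMA + MMT + MLDT)

    print(f"Inherent {inherent:.4f}  Achieved {achieved:.4f}  Operational {operational:.4f}")

Note how logistics delay time, not just the design's reliability and maintainability, drives the gap between inherent and operational availability.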
Logistics Supportability Models: Two Examples

Logistics supportability models utilize the reliability and maintainability attributes of a particular system design, and other logistics system variables, to quantify the demands (i.e., requirements) for scarce logistics resources during operations. The models described here were both developed for Space Station Freedom. One is a stochastic simulation in which each run is a "trial" drawn from a population of outcomes. Multiple runs must be made to develop accurate estimates of means and variances for the variables of interest. The other is a deterministic analytic model. Logistics supportability models may be of either type. These two models deal with the unique logistics environment of Freedom.
SIMSYLS is a comprehensive stochastic simulation of on-orbit maintenance and logistics resupply of Freedom. It provides estimates of the demand (means and variances) for maintenance resources such as EVA and IVA, as well as for logistics upmass and downmass resources. In addition to the effects of actual and false ORU failures, the effects of various other stochastic events such as launch vehicle and ground repair delays can be quantified. SIMSYLS also produces several measures of operational availability. The model can be used in its availability mode or in its resource requirements mode.
M-SPARE is an availability-based optimal spares model. It determines the mix of ORU spares at any spares budget level that maximizes station availability, defined as the probability that no ORU had more demands during a resupply cycle than it had spares to satisfy those demands. Unlike SIMSYLS, M-SPARE's availability measure deals only with the effect of spares. M-SPARE starts with a target availability (or budget) and determines the optimal inventory, a capability not possessed by SIMSYLS.
For more detail, see DeJulio, E., SIMSYLS User's Guide, Boeing Aerospace Operations, February 1990, and Kline, Robert, et al., The M-SPARE Model, LMI, NS901R1, March 1990.
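The following toy Python sketch illustrates the general idea behind availability-based sparing described for M-SPARE above: for a single ORU type, the probability that demands during a resupply cycle do not exceed the spares on hand, assuming Poisson-distributed failures. It is not the M-SPARE algorithm, and all of the numbers are hypothetical.

    # Toy sketch of availability-based sparing for one ORU type (hypothetical numbers).
    import math

    def prob_sufficiency(failure_rate_per_day, cycle_days, spares):
        """P(demands <= spares) for Poisson demand with mean = rate * cycle length."""
        mean_demand = failure_rate_per_day * cycle_days
        return sum(math.exp(-mean_demand) * mean_demand**k / math.factorial(k)
                   for k in range(spares + 1))

    for s in range(0, 5):
        print(s, round(prob_sufficiency(failure_rate_per_day=0.01, cycle_days=90, spares=s), 3))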
5.4 Probabilistic Treatment of Cost and Effectiveness

A probabilistic treatment of cost and effectiveness is needed when point estimates for these outcome variables do not "tell the whole story"—that is, when information about the variability in a system's projected cost and effectiveness is relevant to making the right choices about that system. When these uncertainties have the potential to drive a decision, the systems or program analyst must do more than just acknowledge that they exist. Some useful techniques for modeling the effects of uncertainty are described below in Section 5.4.2. These techniques can be applied to both cost models and effectiveness models, though the majority of examples given are for cost models.

5.4.1 Sources of Uncertainty in Models

There are a number of sources of uncertainty in the kinds of models used in systems analysis. Briefly, these are:

• Uncertainty about the correctness of the model's structural equations, in particular whether the functional form chosen by the modeler is the best representation of the relationship between an equation's inputs and output
• Uncertainty in model parameters, which are, in a very real sense, also chosen by the modeler; this uncertainty is evident for model coefficients derived from statistical regression, but even known physical constants are subject to some uncertainty due to experimental or measurement error
• Uncertainty in the true value of model inputs (e.g., estimated weight or thermal properties) that describe a new system.

As an example, consider a cost model consisting of one or more statistical CERs. In the early phases of the project life cycle (Phases A and B), this kind of model is commonly used to provide a cost estimate for a new NASA system. The project manager needs to understand what confidence he/she can have in that estimate.
One set of uncertainties concerns whether the input variables (for example, weight) are the proper explanatory variables for cost, and whether a linear or log-linear form is more appropriate. Model misspecification is by no means rare, even for strictly engineering relationships.
Another set of model uncertainties that contribute to the uncertainty in the cost estimate concerns the model coefficients that have been estimated from historical data. Even in a well-behaved statistical regression equation, the estimated coefficients could have resulted from chance alone, and therefore cost predictions made with the model have to be stated in probabilistic terms. (Fortunately, the upper and lower bounds on cost for any desired level of confidence can be easily calculated. Presenting this information along with the cost estimate is strongly recommended.)
The above uncertainties are present even if the cost model inputs that describe a new system are precisely known in Phase A. This is rarely true; more often, model inputs are subject to considerable guesswork early in the project life cycle. The uncertainty in a model input can be expressed by attributing a probability distribution to it. This applies whether the input is a physical measure such as weight, or a subjective measure such as a "complexity factor." Model input uncertainty can extend even to a grass-roots cost model that might be used in Phases C and D. In that case, the source of uncertainty is the failure to identify and capture the "unknown-unknowns." The model inputs—the costs estimated by each performing organization—can then be thought of as variables having various probability distributions.

5.4.2 Modeling Techniques for Handling Uncertainty

The effect of model uncertainties is to induce uncertainty in the model's output. Quantifying these uncertainties involves producing an overall probability distribution for the output variable, either in terms of its probability density function (or mass function for discrete output variables) or its cumulative distribution
function. (See sidebar on cost S-curves.) Some techniques for this are:

• Analytic solution
• Decision analysis
• Monte Carlo simulation.

The Cost S-Curve

The cost S-curve gives the probability of a project's cost not exceeding a given cost estimate. This probability is sometimes called the budget confidence level. This curve aids in establishing the amount of contingency and Allowance for Program Adjustment (APA) funds to set aside as a reserve against risk.
In the S-curve shown above, the project's cost commitment provides only a 40 percent level of confidence; with reserves, the level is increased to 50 percent. The steepness of the S-curve tells the project manager how much the level of confidence improves when a small amount of reserves is added. Note that an Estimate at Completion (EAC) S-curve could be used in conjunction with the risk management approach described for TPMs (see Section 4.9.2), as another method of cost status reporting and assessment.

Analytic Solution. When the structure of a model and its uncertainties permit, a closed-form analytic solution for the required probability density (or cumulative distribution) function is sometimes feasible. Examples can be found in simple reliability models (see Figure 29).

Decision Analysis. This technique, which was discussed in Section 4.6, also can produce a cumulative distribution function, though it is necessary to discretize any continuous input probability distributions. The more probability intervals that are used, the greater the accuracy of the results, but the larger the decision tree. Furthermore, each uncertain model input adds more than linear computational complexity to that tree, making this technique less efficient in many situations than Monte Carlo simulation, described next.

Monte Carlo Simulation. This technique is often used to calculate an approximate solution to a stochastic model that is too complicated to be solved by analytic methods alone. A Monte Carlo simulation is a way of sampling input points from their respective domains in order to estimate the probability distribution of the output variable. In a simple Monte Carlo analysis, a value for each uncertain input is drawn at random from its probability distribution, which can be either discrete or continuous. This set of random values, one for each input, is used to compute the corresponding output value, as shown in Figure 28. The entire process is then repeated k times. These k output values constitute a random sample from the probability distribution over the output variable induced by the input probability distributions.
For an example of the usefulness of this technique, recall Figures 2 (in Chapter 2) and 24 (this chapter), which show the projected cost and effectiveness of three alternative design concepts as probability "clouds." These clouds may be reasonably interpreted as the result of three system-level Monte Carlo simulations. The information displayed by the clouds is far greater than that embodied in point estimates for each of the alternatives.
An advantage of the Monte Carlo technique is that standard statistical tests can be applied to estimate the precision of the resulting probability distribution. This permits a calculation of the number of runs (samples) needed to obtain a given level of precision. If computing time or costs are a significant constraint, there are several ways of reducing them through more deliberate sampling strategies. See
  • NASA Systems Engineering Handbook Page 79MSFC-HDBK-1912, Systems Engineering (Volume 2),for a discussion of these strategies. Commercial software to perform Monte Carlosimulation is available. These include add-inpackages for some of the popular spreadsheets, aswell as packages that allow the systems or programanalyst to build an entire Monte Carlo model fromscratch on a personal computer. These packagesgenerally perform the needed computations in anefficient manner and provide graphical displays of theresults, which is very helpful in communicatingprobabilistic information. For large applications ofMonte Carlo simulation, such as those used inaddressing logistics supportability, custom softwaremay be needed. (See the sidebar on logisticssupportability models.) Monte Carlo simulation is a fairly easytechnique to apply. Also, what a particularcombination of uncertainties mean can often becommunicated more clearly to managers. A powerfulexample of this technique applied to NASA flightreadiness certification is found in Moore, Ebbeler, andCreager, who combine Monte Carlo simulation withtraditional reliability and risk analysis techniques.
6 Integrating Engineering Specialties Into the Systems Engineering Process

This chapter discusses the basic concepts, techniques, and products of some of the specialty engineering disciplines, and how they fit into the systems engineering process.

6.1 Role of the Engineering Specialties

Specialty engineers support the systems engineering process by applying specific knowledge and analytic methods from a variety of engineering specialty disciplines to ensure that the resulting system is actually able to perform its mission in its operational environment. These specialty engineering disciplines typically include reliability, maintainability, integrated logistics, test, fabrication/production, human factors, quality assurance, and safety engineering. One view of the role of the engineering specialties, then, is mission assurance. Part of the system engineer's job is to see that these mission assurance functions are coherently integrated into the project at the right times and that they address the relevant issues.
Another idea used to explain the role of the engineering specialties is the "Design-for-X" concept. The X stands for any of the engineering "ilities" (e.g., reliability, testability, producibility, supportability) that the project-level system engineer needs to consider to meet the project's goals/objectives. While the relevant engineering specialties may vary on NASA projects by virtue of their diverse nature, some are always needed. It is the system engineer's job to identify the particular engineering specialties needed for his/her tailored Product Development Team (PDT). The selected organizational approach to integrating the engineering specialties into the systems engineering process and the technical effort to be made should be summarized in the SEMP (Part III). Depending on the nature and scope of the project, the technical effort may also need more detailed documentation in the form of individual specialty engineering program plans.
As part of the technical effort, specialty engineers often perform tasks that are common across disciplines. Foremost, they apply specialized analytical techniques to create information needed by the project manager and system engineer. They also help define and write system requirements in their areas of expertise, and they review data packages, engineering change requests (ECRs), test results, and documentation for major project reviews. The project manager and/or system engineer need to ensure that the information and products so generated add value to the project commensurate with their cost.
The specialty engineering technical effort should also be well integrated both in time and content, not separate organizations and disciplines operating in near isolation (i.e., more like a basketball team than a golf foursome). This means, as an example, that the reliability engineer's FMECA (or equivalent analysis) results are passed at the right time to the maintainability engineer, whose maintenance analysis is subsequently incorporated into the logistics support analysis (LSA). LSA results, in turn, are passed to the project-level system engineer in time to be combined with other cost and effectiveness data for a major trade study. Concurrently, the reliability engineer's FMECA results are also passed to the risk manager to incorporate critical items into the Critical Items List (CIL) when deemed necessary, and to alert the PDT to develop appropriate design or operations mitigation strategies. The quality assurance engineer's effort should be integrated with the reliability engineer's so that, for example, component failure rate assumptions in the latter's reliability model are achieved or bettered by the actual (flight) hardware. This kind of process harmony and timeliness is not easily realized in a project; it nevertheless remains a goal of systems engineering.
6.2 Reliability

Reliability Relationships

The system engineer should be familiar with the following reliability parameters and mathematical relationships for continuously operated systems: the reliability function R(t) = 1 - F(t), where F(t) is the probability of failure by time t; the failure probability density f(t) = dF(t)/dt; the hazard (instantaneous failure) rate λ(t) = f(t)/R(t); and the resulting identities R(t) = exp[-∫(0 to t) λ(s) ds] and MTTF = ∫(0 to ∞) R(t) dt.
Many reliability analyses assume that failures are random, so that λ(t) = λ and the failure probability density follows an exponential distribution. In that case, R(t) = exp(-λt), and the Mean Time To Failure (MTTF) = 1/λ. Another popular assumption that has been shown to apply to many systems is a failure probability density that follows a Weibull distribution; in that case, the hazard rate λ(t) satisfies a simple power law as a function of t. With the proper choice of Weibull parameters, the constant hazard rate can be recovered as a special case. While these (or similar) assumptions may be analytically convenient, a system's actual hazard rate may be less predictable. (Also see the bathtub curve sidebar.)
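A minimal Python sketch of the exponential and Weibull reliability relationships quoted in the sidebar follows; the failure rate and shape parameters are hypothetical.

    # Minimal sketch (hypothetical parameters) of the reliability relationships above.
    import math

    def reliability_exponential(t, lam):
        """R(t) = exp(-lambda * t); MTTF = 1/lambda when the hazard rate is constant."""
        return math.exp(-lam * t)

    def reliability_weibull(t, scale, shape):
        """Weibull reliability; the hazard rate follows a power law in t.
        shape = 1 recovers the constant-hazard (exponential) special case."""
        return math.exp(-(t / scale) ** shape)

    lam = 1.0e-4                                               # failures per hour (hypothetical)
    print(reliability_exponential(1000.0, lam))                # ~0.905 after 1000 hours
    print(1 / lam)                                             # MTTF = 10,000 hours
    print(reliability_weibull(1000.0, scale=1 / lam, shape=2.0))  # wear-out-like behavior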
  • NASA Systems Engineering Handbook Page 81 Reliability can be defined as the probability The reliability engineer works with otherthat a device, product, or system will not fail for a specialty engineers (e.g., the quality assurance,given period of time under specified operating maintainability, verification, and producibilityconditions. Reliability is an inherent system design engineers) on system reliability issues. On smallcharacteristic. As a principal contributing factor in projects, the reliability engineer may perform some oroperations and support costs and in system all of these other jobs as well.effectiveness (see Figure 26), reliability plays a keyrole in determining the systems cost-effectiveness. 6.2.2 Reliability Program Planning6.2.1 Role of the Reliability Engineer The reliability program for a project describes what activities will be undertaken in support Reliability engineering is a major specialty of reliability engineering. The reliability engineerdisci-pline that contributes to the goal of a cost- develops a reliability program considering its cost,effective system. This is primarily accomplished in the schedule, and risk implications. This planning shouldsystems engineering process through an active role in begin during Phase A. The project manager/systemimplementing specific design features to ensure that engineer must work with the reliability engineer tothe system can perform in the predicted physical develop an appropriate reliability program as manyenvironments throughout the mission, and by making factors need to be considered in developing thisindependent predictions of system reliability for program. These factors include:design trades and for (test program, operations, and Lunar Excursion Module (LEM) Reliabilityintegrated logistics support) planning. The reliability engineer performs several Part of the reliability engineers job is to developtasks, which are explained in more detail in NHB an understanding of the underlying physical and5300.4(1A -1), Reliability Program Requirements for human-induced causes of failures, rather thanAeronautical and Space System Contractors. In brief, assuming that all failures are random. Accordingthese tasks include: to Joseph Gavin, Director of the LEM Program at Grumman, "after about 10 years of testing of• Developing and executing a reliability program individual [LEM] components and subsystems, plan [NASA] found something like 14,000 anomalies,• Developing and refining reliability prediction only 22 of which escaped definite understanding.” models, including associated environmental (e.g., vibration, acoustic, thermal, and EMI/EMC) models, and predictions of system reliability. These models and predictions should reflect • NASA payload classification. The reliability applicable experience from previous projects. programs analytic content and its documentation• Establishing and allocating reliability goals and of problems and failures are generally more environmental design requirements extensive for a Class A payload than for a Class• Supporting design trade studies covering such D one. (See Appendix B.3 for classification issues as the degree of redundancy and reliability guidelines.) vs. maintainability • Mission environmental risks. Several mission• Supporting risk management by identifying environmental models may need to be design attributes likely to result in reliability developed. 
For flight projects, these include problems and recommending appropriate risk ground (transportation and handling), launch, on- mitigations orbit (Earth or other), and planetary• Developing reliability data for timely use in the environments. In addition, the reliability engineer projects maintainability and ILS programs must address design and verification• Developing environmental test requirements and requirements for each such environment. specifications for hardware qualification. The • Degree of design inheritance and reliability engineer may provide technical analysis hardware/software reuse. and justification for eliminating or relaxing qualification test requirements. These activities The reliability engineer should document the are usually closely coordinated with the projects reliability program in a reliability program plan, which verification program. should be summarized in the SEMP (Part III) and• Performing analyses on qualification test data to updated as needed through the project life cycle; the verify reliability predictions and validate the summary may be sufficient for small projects. system reliability prediction models, and to understand and resolve anomalies• Collecting reliability data under actual operations conditions as a part of overall system validation.
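A first-cut version of the reliability prediction models called for above is often just a parts-count calculation under the constant-failure-rate assumption, with rates summed over a series string. The sketch below illustrates the arithmetic; the component names, failure rates, and mission duration are hypothetical and are not drawn from any NASA or military data source.

```python
import math

# Illustrative parts-count prediction for a series string of components,
# each assumed to fail randomly (constant hazard rate, exponential model).
# Failure rates are in failures per million hours; all values are hypothetical.
failure_rates_per_1e6_hr = {
    "power_supply": 2.0,
    "transmitter": 5.5,
    "receiver": 4.0,
    "processor": 1.2,
}

mission_hours = 5 * 365.25 * 24      # e.g., a five-year mission

# For a series system with exponential failures, failure rates simply add.
lambda_system = sum(failure_rates_per_1e6_hr.values()) / 1e6   # failures per hour
mttf_hours = 1.0 / lambda_system
reliability = math.exp(-lambda_system * mission_hours)

print(f"System failure rate: {lambda_system:.2e} per hour")
print(f"System MTTF:         {mttf_hours:,.0f} hours")
print(f"R(mission):          {reliability:.3f}")
```

More refined predictions then adjust the component rates for the environments mentioned above (vibration, acoustic, thermal, EMI/EMC) and for duty cycles, rather than relying on a single generic rate per part.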
6.2.3 Designing Reliable Space-Based Systems

Designing reliable space-based systems has always been a goal for NASA, and many painful lessons have been learned along the way. The system engineer should be aware of some basic design approaches for achieving reliability. These basic approaches include fault avoidance, fault tolerance, and functional redundancy.

Fault Avoidance. Fault avoidance, a joint objective of the reliability engineer and quality assurance engineer (see Section 6.3), includes efforts to:

• Provide design margins, or use appropriate derating guidelines, if available
• Use high-quality parts where needed. (Failure rates for Class S parts are typically one-fourth of those procured to general military specifications.)
• Consider materials and electronics packaging carefully
• Conduct formal inspections of manufacturing facilities, processes, and documentation
• Perform acceptance testing or inspections on all parts when possible.

Fault Tolerance. Fault tolerance is a system design characteristic associated with the ability of a system to continue operating after a component failure has occurred. It is implemented by having design redundancy and a fault detection and response capability. Design redundancy can take several forms, some of which are represented in Figure 29 along with their reliability relationships.

Functional Redundancy. Functional redundancy is a system design and operations characteristic that allows the system to respond to component failures in a way sufficient to meet mission requirements. This usually involves operational work-arounds and the use of components in ways that were not originally intended. As an example, a repair of the damaged Galileo high-gain antenna was impossible, but a work-around was accomplished by software fixes that further compressed the science data and images;
these were then returned through the low-gain antenna, although at a severely reduced data rate.

These three approaches have different costs associated with their implementation: Class S parts are typically more expensive, while redundancy adds mass, volume, costs, and complexity to the system. Different approaches to reliability may therefore be appropriate for different projects. In order to choose the best balance among approaches, the system engineer must understand the system-level effects and life-cycle cost of each approach. To achieve this, trade study methods of Section 5.1 should be used in combination with reliability analysis tools and techniques.

6.2.4 Reliability Analysis Tools and Techniques

Reliability Block Diagrams. Reliability block diagrams are used to portray the manner in which the components of a complex system function together. These diagrams compactly describe how components are connected. Basic reliability block diagrams are shown in Figure 29.

Fault Trees and Fault Tree Analysis. A fault tree is a graphical representation of the combination of faults that will result in the occurrence of some (undesired) top event. It is usually constructed during a fault tree analysis, which is a qualitative technique to uncover credible ways the top event can occur. In the construction of a fault tree, successive subordinate failure events are identified and logically linked to the top event. The linked events form a tree structure connected by symbols called gates, some basic examples of which appear in the fault tree shown in Figure 30. Fault trees and fault tree analysis are often precursors to a full probabilistic risk assessment (PRA). For more on this technique, see the U.S. Nuclear Regulatory Commission Fault Tree Handbook.

Reliability Models. Reliability models are used to predict the reliability of alternative architectures/designs from the estimated reliability of each component. For simple systems, reliability can often be calculated by applying the rules of probability to the various components and "strings" identified in the reliability block diagram. (See Figure 29.) For more complex systems, the method of minimal cut sets, which relies on the rules of Boolean algebra, is often used to evaluate a system's fault tree. When individual component reliability functions are themselves uncertain, Monte Carlo simulation methods may be appropriate. These methods are described in reliability engineering textbooks, and software for calculating reliability is widely available. For a compilation of models/software, see D. Kececioglu, Reliability, Availability, and Maintainability Software Handbook.
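As a concrete illustration of applying the rules of probability to the "strings" in a reliability block diagram, and of the Monte Carlo alternative, the short sketch below evaluates a hypothetical two-block arrangement (one series element feeding a redundant pair) both ways. The component reliabilities and sample size are illustrative only.

```python
import random

# Hypothetical component reliabilities for one mission duration.
R_A = 0.95          # component A (in series)
R_B = 0.90          # component B, two identical units in parallel (redundant)

# Analytic rules for a reliability block diagram:
#   series:   R = R1 * R2 * ...           (all blocks must work)
#   parallel: R = 1 - (1-R1)*(1-R2)*...   (at least one block must work)
R_parallel_pair = 1.0 - (1.0 - R_B) ** 2
R_system = R_A * R_parallel_pair
print(f"Analytic system reliability: {R_system:.4f}")

# Crude Monte Carlo check: draw component successes and apply the same logic.
random.seed(1)
trials = 200_000
successes = 0
for _ in range(trials):
    a_ok = random.random() < R_A
    b_ok = (random.random() < R_B) or (random.random() < R_B)
    successes += a_ok and b_ok
print(f"Monte Carlo estimate:        {successes / trials:.4f}")
```

In practice, Monte Carlo methods earn their keep when the component reliabilities are themselves sampled from uncertainty distributions, or when the block logic is too complex for simple series/parallel rules.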
FMECAs and FMEAs. Failure Modes, Effects, and Criticality Analysis (FMECA) and Failure Modes and Effects Analysis (FMEA) are specialized techniques for hardware failure and safety risk identification and characterization. (Also see Section 4.6.2.)

Problem/Failure Reports (P/FRs). The reliability engineer uses the Problem/Failure Reporting System (or an approved equivalent) to report reliability problems and nonconformances encountered during qualification and acceptance testing (Phase D) and operations (Phase E).

6.3 Quality Assurance

Even with the best of available designs, hardware fabrication (and software coding) and testing are subject to the vagaries of Nature and human beings. The system engineer needs to have some confidence that the system actually produced and delivered is in accordance with its functional, performance, and design requirements. Quality Assurance (QA) provides an independent assessment to the project manager/system engineer of the items produced and processes used during the project life cycle. The quality assurance engineer typically acts as the system engineer's eyes and ears in this context. The project manager/system engineer must work with the quality assurance engineer to develop a quality assurance program (the extent, responsibility, and timing of QA activities) tailored to the project it supports. As with the reliability program, this largely depends on the NASA payload classification (see Appendix B.3).

6.3.1 Role of the Quality Assurance Engineer
  • NASA Systems Engineering Handbook Page 84 FRR) on issues of design, materials, workmanship, fabrication and verification processes, and other characteristics that could degrade product system quality. 6.3.2 Quality Assurance Tools and Techniques PCA/FCA. The Physical Configuration Audit (PCA) veri-fies that the physical configuration of the system corre-sponds to the approved "build-to" (or "code-to") docu-mentation. The Functional Configuration Audit (FCA) verifies that the acceptance verification (usually, test) results are consistent with the approved verification requirements. (See Section 4.8.4.) In-Process Inspections. The extent, timing, and location of in-process inspections are documented in the quality assurance program plan. These should be conducted in consonance with the manufacturing/fabrication and verification program plans. (See Sections 6.6 and 6.7.) QA Survey. A QA survey examines the operations, The quality assurance engineer performs pro-cedures, and documentation used in the project,several tasks, which are explained in more detail in and evaluates them against established standardsNHB 5300.4(1B), Quality Program Provisions for and benchmarks. Recommendations for correctiveAeronautical and Space System Contractors. In brief, actions are reported to the project manager.these tasks include: Material Review Board. The Material Review Board• Developing and executing a quality assurance (MRB), normally established by the project manager program plan and chaired by the project-level quality assurance• Ensuring the completeness of configuration engineer, performs formal dispositions on management procedures and documentation, nonconformances. and monitoring the fate of ECRs/ECPs (see Section 4.7) 6.4 Maintainability• Participating in the evaluation and selection of procurement sources Maintainability is a system design characteristic• Inspecting items and facilities during associated with the ease and rapidity with which the manufacturing/fabrication, and items delivered to system can be retained in operational status, or safely NASA field centers and economically restored to operational status• Ensuring the adequacy of personnel training and following a failure. Often used (though imperfect) technical documentation to be used during measures of maintainability include mean manufacturing/fabrication maintenance downtime, maintenance effort (work• Ensuring verification requirements are properly hours) per operating hour, and annual maintenance specified, especially with respect to test cost. However measured, maintainability arises from environments, test configurations, and pass/fail many factors: the system hardware and software criteria design, the required skill levels of maintenance• Monitoring qualification and acceptance tests to personnel, adequacy of diagnostic and maintenance ensure compliance with verification requirements procedures, test equipment effectiveness, and the and test procedures, and to ensure that test data physical environment under which maintenance is are correct and complete performed.• Monitoring the resolution and close-out of nonconformances and Problem/Failure Reports 6.4.1 Role of the Maintainability Engineer (P/FRs) • Verifying that the physical configuration of the system conforms to the "build-to" (or Maintainability engineering is another major specialty "code-to") documentation approved at CDR discipline that contributes to the goal of a cost-• Collecting and maintaining QA data for effective system. 
This is primarily accomplished in the subsequent failure analyses. systems engineering process through an active role in implementing specific design features to facilitate safe The quality assurance engineer also participates maintenance actions in the predicted physicalin major reviews (primarily SRR, PDR, CDR, and environments, and through a central role in
  • NASA Systems Engineering Handbook Page 85developing the integrated logistics support (ILS) studies early in the project life cycle. These tradesystem. (See Section 6.5 on ILS.) studies should focus on the cost effectiveness of The maintainability engineer performs alternative maintenance concepts in the context ofseveral tasks, which are explained in more detail in overall system optimization.NHB 5300.4(1E), Maintainability ProgramRequirements for Space Systems. In brief, these Maintenance Levels for Space Station Alphatasks include: • Developing and executing amaintainability pro-gram plan. This is usually done in As with many complex systems, the maintenanceconjunction with the ILS program plan. concept for Alpha calls for three maintenance levels: organizational, intermediate, and depot (or• Developing and refining the system maintenance vendor). The system engineer should be familiar concept as a part of the ILS concept with these terms and the basic characteristics• Establishing and allocating maintainability associated with each level. As an example, requirements. These requirements should be consider Alpha consistent with the maintenance concept and traceable to system-level availability objectives. Level Work Performed Spares• Performing an engineering design analysis to Organizational On-orbit crew performs ORU identify maintainability design deficiencies remove-and-replace, visual inspections, minor• Performing analyses to quantify the systems servicing and calibration. Few. maintenance resource requirements, and documenting them in the Maintenance Plan Inter-mediate Depot/ Vendor KSC maintenance• Verifying that the system’s maintainability facility repairs ORUs, performs de-tailed requirements and maintenance-related aspects inspections, servicing, calibrations, and some of the ILS requirements are met modifications.• Collecting maintenance data under actual operations conditions as part of ILS system Factory performs major overhauls, modifications, validation. and complex calibrations. needed. Extensive More extensive, or fabricated as needed. Many of the analysis tasks above areaccomplished as part of the Logistics SupportAnalysis (LSA), described in Section 6.5.3. Themaintainability engineer also participates in and The Maintenance Plan, which appears as acontributes to major project reviews on the above major technical section in the Integrated Logisticsitems as appropriate to the phase of the project. Support Plan (ILSP), documents the system maintenance concept, its maintenance resource requirements, and supporting maintainability6.4.2 The System Maintenance Concept and analyses. The Maintenance Plan provides otherMaintenance Plan inputs to the ILSP in the areas of spares, maintenance facilities, test and support equipment, As the system operations concept and user and, for each level of maintenance, it providesrequirements evolve, so does the ILS concept. maintenance training programs, facilities, technicalCentral to the latter is the system maintenance data, and aids. The supporting analyses shouldconcept. It serves as the basis for establishing the establish the feasibility and credibility of thesystems maintainability design requirements and its Maintenance Plan with aggregate estimates oflogistics support resource requirements (through the corrective and preventive maintenance workloads,LSA process). 
In developing the system maintenance initial and recurring spares provisioning requirements,concept, it is useful to consider the mission profile, and system availability. Aggregate estimates shouldhow the system will be used, its operational be the result of using best practice maintainabilityavailability goals, anticipated useful life, and physical analysis tools and detailed maintainability dataenviron ments. suitable for the LSA. (See Section 6.5.3.) Traditionally, a description of the systemmaintenance concept is hardware-oriented, though 6.4.3 Designing Maintainable Space-Basedthis need not always be so. The system maintenanceconcept is typically described in terms of the Systemsanticipated levels of maintenance (see sidebar onmaintenance levels), general repair policies regarding Designing NASA space-based systems forcorrective and preventive maintenance, assumptions maintainability will be even more important in theabout supply system responsiveness, the availability future. For that reason, the system engineer shouldof new or existing facilities, and the maintenance be aware of basic design features that facilitate IVAenvironment. Initially, the system maintenance and EVA maintenance. Some examples of goodconcept may be based on experience with similar practice include:systems, but it should not be exempt from trade
• Use coarse and fine installation alignment guides as necessary to assure ease of Orbital Replacement Unit (ORU) installation and removal
• Have minimum sweep clearances between interface tools and hardware structures; include adequate clearance envelopes for those maintenance activities where access to an opening is required
• Define reach envelopes, crew load/forces, and general work constraints for IVA and EVA maintenance tasks
• Consider corrective and preventive maintenance task frequencies in the location of ORUs
• Allow replacement of an ORU without removal of other ORUs
• Choose a system thermal design that precludes degradation or damage during ORU replacement or maintenance to any other ORU
• Simplify ORU handling to reduce the likelihood of mishandling equipment or parts
• Encourage commonality, standardization, and interchangeability of tooling and hardware items to ensure a minimum number of items
• Select ORU fasteners to minimize accessibility time consistent with good design practice
• Design the ORU surface structure so that no safety hazard is created during the removal, replacement, test, or checkout of any ORU during IVA or EVA maintenance; include cautions/warnings for mission or safety critical ORUs
• Design software to facilitate modifications, verifications, and expansions
• Allow replacement of software segments on-line without disrupting mission or safety critical functions
• Allow on- or off-line software modification, replacement, or verification without introducing hazardous conditions.

Maintainability Lessons Learned from HST Repair (STS-61)

When asked (for this handbook!) what maintainability lessons were learned from their mission, the STS-61 crew responded with the following:

• The maintainability considerations designed into the Hubble Space Telescope (HST) worked.
• For spacecraft in LEO, don't preclude a servicing option; this means, for example, including a grapple fixture even though it has a cost and mass impact.
• When servicing is part of the maintenance concept, make sure that it's applied throughout the spacecraft. (The HST Solar Array Electronics Box, for example, was not designed to be replaced, but had to be nevertheless!)
• Pay attention to details like correctly sizing the hand holds, and using connectors and fasteners designed for easy removal and reattachment.

Other related advice:

• Make sure ground-based mock-ups and drawings exactly represent the "as-deployed" configuration.
• Verify tool-to-system interfaces, especially when new tools are involved.
• Make provision in the maintainability program for high-fidelity maintenance training.

6.4.4 Maintainability Analysis Tools and Techniques

Maintenance Functional Flow Block Diagrams (FFBDs). Maintenance FFBDs are used in the same way as system FFBDs, described in Appendix B.7.1. At the top level, maintenance FFBDs supplement and clarify the system maintenance concept; at lower levels, they provide a basis for the LSA's maintenance task inventory.

Maintenance Time Lines. Maintenance time line analysis (see Appendix B.7.3) is performed when time-to-restore is considered a critical factor for mission effectiveness and/or safety. (Such cases might include EVA and emergency repair procedures.) A maintenance time line analysis may be a simple spreadsheet or, at the other end, involve extensive computer simulation and testing.

FMECAs and FMEAs. Failure Modes, Effects, and Criticality Analysis (FMECA) and Failure Modes and Effects Analysis (FMEA) are specialized techniques for hardware failure and safety risk identification and characterization. They are discussed in this handbook under risk management (see Section 4.6.2) and reliability engineering (see Section 6.2.4). For the maintainability engineer, the FMECA/FMEA needs to be augmented at the LRU/ORU level with failure prediction data (i.e., MTTF or MTBF), failure detection means, and identification of corrective maintenance actions (for the LSA task inventory).

Maintainability Models. Maintainability models are used in assessing how well alternative designs meet maintainability requirements, and in quantifying the maintenance resource requirements. Modeling approaches may range from spreadsheets that aggregate component data, to complex Markov models and stochastic simulations. They often use reliability and time-to-restore data at the LRU/ORU level obtained from experience with similar
  • NASA Systems Engineering Handbook Page 87components in existing systems. Some typical uses to requirements necessary to ensure sustainedwhich these models are put include: operation of the system • Design Interface: the interaction and relationship• Annual maintenance hours and/or maintenance of logistics with the systems engineering process downtime estimates to ensure that supportability influences the• System MTTR and availability estimates (see definition and design of the system so as to sidebar on availability measures on page 86) reduce life-cycle cost• Trades between reliability and maintainability • Technical Data: the recorded scientific,• Optimum LRU/ORU repair level analysis (ORLA) engineering, technical, and cost information used• Optimum (reliability-centered) preventive to define, produce, test, evaluate, modify, deliver, maintenance analysis support, and operate the system• Spares requirements analysis • Training: the processes, procedures, devices,• Mass/volume estimates for (space-based) spares and equipment required to train personnel to• Repair vs. discard analysis. operate and support the system • Supply Support: actions required to provide allLSA and LSAR. The Logistics Support Analysis the necessary material to ensure the systems(LSA) is the formal technical mechanism for supportability and usability objectives are metintegrating supportability considerations into the • Test and Support Equipment: the equipment re-systems engineering process. Many of the above quired to facilitate development, production, andtools and techniques provide maintainability inputs to operation of the systemthe LSA, or are used to develop LSA outputs. Results • Transportation and Handling: the actions, re-of the LSA are captured in Logistics Support Analysis sources, and methods necessary to ensure theRecord (LSAR) data tables, which formally document proper and safe movement, handling, packaging,the baselined ILS system. (See Section 6.5.3.) and storage of system items and materials • Human Resources and Personnel Planning:Problem/Failure Reports (P/ FRs). The actions required to determine the best skills-mix,maintainability engineer uses the Problem/Failure considering current and future operator,Reporting System (or an approved equivalent) to maintenance, engineering, and administrativereport maintainability problems and nonconformances personnel costsencountered during qualification and acceptance • System Facilities: real property assets required totesting (Phase D) and operations (Phase E). develop and operate a system.6.5 Integrated Logistics Support 6.5.2 Planning for ILS The objective of Integrated Logistics Support ILS planning should begin early in the project(ILS) activities within the systems engineering life cycle, and should be documented in an ILSprocess is to ensure that the product system is program plan. This plan describes what ILS activitiessupported during development (Phase D) and are planned, and how they will be conducted andoperations (Phase E) in a cost-effective manner. This integrated into the systems engineering process. Foris primarily accomplished by early, concurrent major projects, the ILS program plan may be aconsideration of supportability characteristics, separate document because the ILS system (ILSS)performing trade studies on alternative system and may itself be a major system. 
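Several of the maintainability model outputs listed above (system MTTR, availability, and annual maintenance hours) can be roughed out with simple aggregate arithmetic long before a detailed model or the LSA exists. The sketch below shows the idea; the ORU names, failure rates, and repair times are hypothetical, and the steady-state availability formula assumes the usual constant failure and repair rate simplification.

```python
# Hypothetical line/orbital replaceable unit (LRU/ORU) data:
# (failure rate in failures per million hours, mean time to repair in hours)
orus = {
    "pump_package":   (12.0, 2.5),
    "control_box":    ( 4.0, 1.5),
    "battery_module": ( 8.0, 3.0),
}

HOURS_PER_YEAR = 8766.0   # average calendar year

total_lambda = sum(lam / 1e6 for lam, _ in orus.values())          # failures/hr
mtbf = 1.0 / total_lambda                                          # system MTBF
# System-level MTTR: failure-rate-weighted average of the ORU repair times.
mttr = sum((lam / 1e6) * rep for lam, rep in orus.values()) / total_lambda

# Steady-state (inherent) availability and annual corrective maintenance hours.
availability = mtbf / (mtbf + mttr)
annual_corrective_hours = total_lambda * HOURS_PER_YEAR * mttr

print(f"System MTBF:            {mtbf:,.0f} hours")
print(f"System MTTR:            {mttr:.2f} hours")
print(f"Inherent availability:  {availability:.5f}")
print(f"Corrective maintenance: {annual_corrective_hours:.2f} hours/year")
```

Estimates of this kind feed the availability measures discussed in the sidebar referenced above and, later, the aggregate workloads documented in the Maintenance Plan.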
For smaller projects, theILS concepts, quantifying resource requirements for SEMP (Part III) is the logical place to document sucheach ILS element using best-practice techniques, and information. An important part of planning the ILSacquiring the support items associated with each ILS program concerns the strategy to be used inelement. During operations, ILS activities support the performing the Logistics Support Analysis (LSA) sincesystem while seeking improvements in its cost- it can involve a major commitment of logisticseffectiveness by conducting analyses in response to engineering specialists. (See Section 6.5.3.)actual operational conditions. These analyses Documenting results of ILS activities throughcontinually reshape the ILS system and its resources the project life cycle is generally done in therequirements. Neglecting ILS or poor ILS decisions Integrated Logistics Support Plan (ILSP). The ILSP isinvariably have adverse effects on the life-cycle cost the senior ILS document used by the project. Aof the resultant system. preliminary ILSP should be prepared by the completion of Phase B and subsequently maintained.6.5.1 ILS Elements This plan documents the projects logistics support concept, responsibility for each ILS element by project According to NHB 7120.5, the scope of ILS phase, and LSA results, especially trade studyin-cludes the following nine elements: results. For major systems, the ILSP should be a distinct and separate part of the system• Maintenance: the process of planning and documentation. For smaller systems, the ILSP may executing life-cycle repair/services concepts and be integrated with other system documentation. The
  • NASA Systems Engineering Handbook Page 88ILSP generally contains the following technical through a well-conceived ILS program. In brief, asections: good-practice approach to achieving cost-effective• Maintenance Plan—Developed from the system ILS includes efforts to: maintenance concept and refined during the system design and LSA processes. (NMI • Develop an ILS program plan, and coordinate it 5350.1A, Maintainability and Maintenance with the SEMP (Part III) Planning Policy, and NHB 5300.4(1E), • Perform the technical portion of the plan, i.e., the Maintainability Program Re-quirements for Space Logistics Support Analysis, to select the best Systems, do not use the term ILS, but they combined system and LS alternative, and to nevertheless mandate almost all of the steps quantify the resulting logistics resource found in an LSA. See Section 6.4.2 for more requirements details on the maintenance plan.) • Document the selected ILS system and• Personnel and Training Plan—Identifies both summarize the logistics resource requirements in operator and maintenance training, including the ILSP descriptions of training programs, facilities, • Provide supportability inputs to the system equipment, technical data, and special training requirements and/or specifications aids. According to NMI 5350.1A/NHB 5300.4(1E), • Verify and validate the selected ILS system. the maintenance training element is part of the maintenance plan. 6.5.3 ILS Tools and Techniques: The• Supply Support Plan—Covers required quantities Logistics Support Analysis of spares (reparable and expendable) and consumables (identified through the LSA), and The Logistics Support Analysis (LSA) is the procedures for their procurement, packaging, formal technical mechanism for integrating handling, storage, and transportation. This plan supportability considerations into the systems should also cover such issues as inventory engineering process. The LSA is performed iteratively management, breakout screening, and demand over the project life cycle so that successive data collection and analysis. According to NMI refinements of the system design move toward the 5350.1A/NHB 5300.4(1E), the spares supportability objectives. To make this happen, the provisioning element is part of the maintenance ILS engineer identifies supportability and plan. supportability-related design factors that need to be• Test and Support Equipment Plan —Covers re- considered in trade studies during the systems quired types, geographical location, and engineering process. The project-level system quantities of test and support equipment engineer imports these considerations largely through (identified through the LSA). According to NMI their impact on projected system effectiveness and 5350.1A/NHB 5300.4(1E), it is part of the life-cycle cost. The ILS engineer also acts as a maintenance plan. system engineer (for the ILSS) by identifying ILSS• Technical Data Plan—Identifies procedures to functional requirements, performing trade studies on acquire and maintain all required technical data. the ILSS, documenting the logistics support resources According to NMI 5350.1A/NHB 5300.4(1E), that will be required, and overseeing the verification technical data for training is part of the and validation of the ILSS. maintenance plan. Transportation and Handling The LSA process found in MIL-STD-1388-1A Plan —Covers all equipment, containers, and can serve as a guideline, but its application in NASA supplies (identified through the LSA), and should be tailored to the project. 
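The spares quantities that the Supply Support Plan described above must cover are often sized initially with a simple Poisson sufficiency calculation: stock enough spares that the probability of running out before the next resupply meets a chosen confidence level. The sketch below is illustrative only; the failure rate, resupply interval, and confidence target are hypothetical, not LSA outputs.

```python
import math

def prob_of_sufficiency(spares: int, expected_failures: float) -> float:
    """P(demand <= spares) when demand is Poisson with the given mean."""
    return sum(
        math.exp(-expected_failures) * expected_failures**k / math.factorial(k)
        for k in range(spares + 1)
    )

# Hypothetical inputs: one ORU type, constant failure rate, fixed resupply cycle.
failure_rate_per_hour = 30.0 / 1e6      # 30 failures per million hours
resupply_interval_hr = 90 * 24          # resupply every 90 days
confidence_target = 0.95

expected_failures = failure_rate_per_hour * resupply_interval_hr

spares = 0
while prob_of_sufficiency(spares, expected_failures) < confidence_target:
    spares += 1

print(f"Expected demand per resupply cycle: {expected_failures:.3f}")
print(f"Spares needed for {confidence_target:.0%} confidence: {spares}")
```

Repairable items, multiple stocking locations, and condemnation rates complicate the real calculation, which is why the LSA models discussed in this section are used for anything beyond a first estimate.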
Figures 31a and 31b procedures to support packaging, handling, show the LSA process in more detail as it proceeds storage, and transportation of system through the NASA project life cycle. Each iteration components uses more detailed inputs and provides more• Facilities Plan—Identifies all real property assets refinement in the output so that by the time operations required to develop, test, maintain, and operate begin (Phase E), the full complement of logistics the system, and identifies those requirements support resources has been identified and the ILSS that can be met by modifying existing facilities. It verified. The first step at each iteration is to should also provide cost and schedule understand the mission, the system projections for each new facility or modification. architecture/design, and the ILSS parameters.• Disposal Plan—Covers equipment, supplies, and Specifically, the first step encompasses the following procedures for the safe and economic disposal of activities: all items (e.g., condemned spares), including ultimately the system itself. • Receiving (from the project-level system engineer) factors related to the intended use of The cost of ILS (and hence the life-cycle cost of the system such as the operations concept,the system) is driven by the inherent reliability and mission duration, number of units, orbitmaintainability characteristics of the system design. parameters, space transportation options,The project level system engineer must ensure that allocated supportability characteristics, etc.these considerations influence the design process
  • NASA Systems Engineering Handbook Page 89• Documenting existing logistics resource supportability factors associated with the various capabilities and/or assets that may be cost- system and operations concepts being effective to apply to or combine with the ILSS for considered. Such factors might include the system being developed operations team size, system RAM (reliability,• Identifying technological opportunities that can be availability and maintain ability) parameters, exploited. (This includes both new technologies estimated annual IVA/EVA maintenance hours in the system architecture/design that reduce and upmass requirements, etc. logistics support resource requirements as well • Using the above to assist the project-level system as new technologies within the ILSS that make it engineer in projecting system effectiveness and less expensive to meet any level of logistics life cycle cost, and establishing system support resource requirements.) availability and/or system supportability goals.• Documenting the ILS concept and initial (See NHB 7120.5, and this handbook, Section "strawman" ILSS, or updating (in later phases) 5.3.3.) the baseline ILSS. • Identifying and characterizing the system supportability risks. (See NHB 7120.5, and this The ILS engineer uses the results of these handbook, Section 4.6.)activities to establish supportability and supportability- • Documenting supportability-related design con-related design factors, which are passed back to the straints.project-level system engineer. This means: The heart of the LSA lies in the next group of• Identifying and estimating the magnitude of activities, during which systems engineering and analysis are applied to the ILSS itself. The ILS
  • NASA Systems Engineering Handbook Page 90engineer must first identify the functional MIL-STD-1388-1A/2Brequirements for the ILSS. The functional analysisprocess establishes the basis for a task inventory NHB 7120.5 suggests MIL-STD 1388 as aassociated with the product system and, with the task guideline for doing an LSA. MIL-STD 1388-1A isinventory, aids in the identification of system design divided into five sections:deficiencies requiring redesign. The task inventorygenerally includes corrective and preventive LSA Planning and Control (not shown in Figuresmaintenance tasks, and other operations and support 31a and 31b)tasks arising from the ILSS functional requirements. A Mission and Support System Definition (shown asprincipal input to the in-ventory of corrective and boxes A and B)preventive maintenance tasks, which is typically Preparation and Evaluation of Alternatives (shownconstructed by the maintainability engi-neer, is the as boxes C, D, and E)FMECA/FMEA (or equivalent analysis). The Determination of Logistics Support ResourceFMECA/FMEA itself is typically performed by the reli- Requirements (shown as box F)ability engineer. The entire task inventory is Supportability Assessment (shown as boxes Gdocumented in Logistics Support Analysis Record and H.(LSAR) data tables. MIL-STD 1388-1A also provides useful The ILS engineer then creates plausible ILSS tips and encourages principles alreadyalternatives, and conducts trade studies in the established in this handbook: functional analysis,manner described earlier in Section 5.1. The trade successive refinement of designs through tradestudies focus on different issues depending on the studies, focus on system effectiveness and life-phase of the project. In Phases A and B, trade studies cycle cost, and appropriate models and selectionfocus on high-level issues such as whether a rules.spacecraft in LEO should be serviceable or not, what MIL-STD 1388-2B contains the LSAR relationalmix of logistics modules seems best to support an data table formats and data dictionaries forinhabited space station, or whats the optimum documenting ILS information and LSA results innumber of maintenance levels and locations. In machine-readable form.Phases C and D, the focus changes, for example toan individual end-items op-timum repair level. InPhase E, when the system design and its logistics storage, and testing. For new access-to-spacesupport requirements are essentially understood, systems, support may be needed during an extendedtrade studies often revisit issues in the light of period of developmental launches, and for inhabitedoperational data. These trade studies almost always space stations, during an extended period of on-orbitrely on techniques and models especially created for assembly operations. The ILS engineer alsothe purpose of doing a LSA. For a catalog of LSA contributes to risk management activities bytechniques and models, the system engineer can considering the adequacy of spares provisioning, andconsult the Logistics Support Analysis Techniques of logistics plans and processes. For example, sparesGuide (1985), Army Materiel Command Pamphlet No. provisioning must take into account the possibility that700-4. 
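One of the Phase C/D trade studies mentioned above, an individual end item's optimum repair level (repair versus discard), can be framed as a simple expected life-cycle cost comparison once failure counts have been estimated. The figures below are hypothetical placeholders rather than LSA data.

```python
# Hypothetical repair-vs-discard trade for one end item over its service life.
expected_failures = 14              # failures expected over the service life
unit_cost = 60_000.0                # cost of a replacement unit ($)
repair_cost_per_failure = 18_000.0  # labor, parts, and handling per repair ($)
repair_infrastructure = 250_000.0   # one-time test equipment, training, data ($)

discard_cost = expected_failures * unit_cost
repair_cost = repair_infrastructure + expected_failures * repair_cost_per_failure

print(f"Discard-at-failure cost: ${discard_cost:,.0f}")
print(f"Repair cost:             ${repair_cost:,.0f}")
print("Preferred policy:", "repair" if repair_cost < discard_cost else "discard")
```

A full ORLA also weighs transportation, turnaround time, and the effect of each candidate repair location on spares levels.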
production lines will close during the anticipated By the end of Phase B, the results of the ILSS useful life of the system.functional analyses and trade studies should be As part of verification and validation activity,sufficiently refined and detailed to provide quantitative the ILS engineer performs supportability verificationdata on the logistics support resource requirements. planning and gathers supportability verification/testThis is accomplished by doing a task analysis for data during Phase D. These data are used to identifyeach task in the task inventory. These requirements and correct deficiencies in the system design andare formally documented by amending the LSAR data ILSS, and to update the LSAR data tables. Duringtables. Together, ILSS trade studies, LSA models, Phase E, supportability testing and analyses areand LSAR data tables provide the project-level conducted under actual operational conditions. Thesesystem engineer with important life-cycle cost data data provide a useful legacy to product improvementand measures of (system) effectiveness (MoEs), efforts and future projects.which are successively refined through Phases C andD as the product system becomes better defined and 6.5.4 Continuous Acquisition and Life-Cyclebetter data become available. The relationship Supportbetween inputs (from the specialty engineeringdisciplines) to the LSA process and its outputs can be LSA documentation and supporting LSARseen in Figure 27 (see Section 5.3). data tables contain large quantities of data. Making In performing the LSA, the ILS engineer also use of these data in a timely manner is currentlydetermines and documents (in the LSAR data tables) difficult because changes occur often and rapidlythe logistics resource requirements for Phase D during definition, design, and development (Phases Bsystem integration and verification, and deployment through D). Continuous Acquisition and Life-Cycle(e.g., launch). For most spacecraft, this support Support (CALS)—changed in 1993 from Computer-includes pre-launch transportation and handling, Aided Acquisition and Logistics Support—technology
  • NASA Systems Engineering Handbook Page 91can reduce this dilemma by improving the digital design stability is achieved), thus reducing the risk ofexchange of data across NASA field centers and costly mistakes. Faster vendor response time alsobetween NASA and its contractors. Initial CALS means reduced spares inventories during operations.efforts within the logistics engineering communityfocused on developing CALS digital data exchange 6.6 Verificationstandards; current emphasis has shifted to databaseintegration and product definition standards, such as Verification is the process of confirming thatSTEP (Standard for the Exchange of Product) Model deliverable ground and flight hardware and softwareData. are in compliance with functional, performance, and CALS represents a shift from a paper- (and design requirements. The verification process, whichlabor-) intensive environment to a highly automated includes planning, requirements definition, andand integrated one. Concommitant with that are compliance activities, begins early and continuesexpected benefits in reduced design and development throughout the project life cycle. These activities aretime and costs, and in the improved quality improved an integral part of the systems engineering process.quality of ILS products and decisions. CALS cost At each stage of the process, the system engineerssavings accrue primarily in three areas: concurrent job is to understand and assess verification results,engineering, configuration control, and ILS functions. and to lead in the resolution of any anomolies. ThisIn a concurrent engineering environment, NASAs section describes a generic NASA verification processmulti-disciplinary PDTs (which may mirror and work that begins with a verification program concept andwith those of a system contractor) can use CALS continues through operational and disposaltechnology to speed the exchange of and access to verification. Whatever process is chosen by thedata among PDTs. Availability of data through CALS program/project should be documented in the SEMP.on parts and suppliers also permits improved partsselection and acquisition. (See Section 3.7.2 for more The objective of the verification program is toon concurrent engineering.) ensure that all functional, performance, and design Configuration control also benefits from requirements (from Level I program/projectCALS technology. Using CALS to submit, process, requirements) have been met. Each project developsand track ECRs/ECPs can reduce delays in approving a verification program considering its cost, schedule,or rejecting them, along with the indirect costs that and risk implications. No one program can be applieddelays cause. Al-though concurrent engineering is to every project, and each verification activity andexpected to reduce the number of ECRs/ECPs during product must be assessed as to its applicability to adesign and development (Phases C and D), their specific project. The verification program requirestimely disposition can produce significant cost considerable coordination by the verification engineer,savings. (See Section 4.7.2 for more on configuration as both system design and test organizations arecontrol.) typically involved to some degree throughout. Lastly, CALS technology potentially enablesILS functions such as supply support to be performed 6.6.1 Verification Process Overviewsimultaneously and with less manual effort than atpresent. 
For example, procurement of design-stable Verification activities begin in Phase A of acomponents and spares can begin earlier (to allow project. During this phase, inputs to the projectsearlier testing); at the same time, provisioning for integrated master schedule and cost estimates areother components can be further deferred (until made as the verification program concept takes Can NASA Benefit from CALS? shape. These planning activities increase in Phase B with the refinement of requirements, costs, and The DoD CALS program was initiated in 1985, schedules. In addition, the systems requirements are since 1988, it has been required on new DoD assessed to determine preliminary methods of systems. Ac-cording to Clark, potential DOD-wide verification and to ensure that the requirements can savings from CALS exceeds $160M (FY92$). be verified. The outputs of Phase B are expanded in However, GAO studies have been critical of DoDs Phase C as more detailed plans and procedures are CALS’ implementation. These criticisms focused prepared. In Phase D, verification activities increase on CALS limited ability to share information substantially; these activities normally include among users. qualification and acceptance verification, followed by For NASA field centers to realize savings from verification in preparation for deployment and CALS, new enabling investments in hardware, operational verification. Figures 32a and 32b show software, and training may be required. While this process through the NASA project life cycle. many of NASAs larger contractors have already (Safety reviews as applied to verification activities are installed CALS technology, the system engineer not shown as separate activities in the figures.) wishing to employ CALS must recognize that both GALS and non-CALS approaches may be needed The Verification Program Concept. A verification to interact with small business suppliers, and that pro-gram should be tailored to the project it supports. proprietary contractor data, even when digitized, The project manager/system engineer must work with needs to be protected. the verification engineer to develop a verification
  • NASA Systems Engineering Handbook Page 92program concept. Many factors need to be considered the verification program. Trade studies should bein developing this concept and the subsequent performed to support decisions about verificationverification program. These factors include: methods and re-quirements, and the selection of facility types and locations. As an example, a• Project type, especially for flight projects. trade study might be made to decide between Verification methods and timing depend on the performing a test at a centralized facility or at type of flight article involved (e.g., an experiment, several decentralized locations. payload, or launch vehicle). • Risk implications. Risk management must be• NASA payload classification. The verification considered in the development of the verification activities and documentation required for a program. Qualitative risk assessments and specific flight article generally depend upon its quantitative risk analyses (e.g., a FMECA) often NASA payload classification. As expected, the identify new concerns that can be mitigated by verification program for a Class A payload is additional testing, thus increasing the extent of considerably more comprehensive than that for a verification activities. Other risk assessments Class D payload. (See Appendix B.3 for contribute to trade studies that determine the classification guidelines.) preferred methods of verification to be used and• Project cost and schedule implications. when those methods should be performed. As an Verification activities can be significant drivers of example, a trade might be made between a projects cost and schedule; these implications performing a modal test versus determining should be considered early in the development of modal characteristics by a less costly, but less revealing, analysis. The project manager/system
  • NASA Systems Engineering Handbook Page 93 engineer must determine what risks are system performance prior to the next test/operation. acceptable in terms of the projects cost and Environmental testing is an individual test or series of schedule. tests conducted on flight or flight-configured hardware• Availability of verification facilities/sites and and/or software to assure it will perform satisfactorily transportation assets to move an article from one in its flight environment. Environmental tests include location to another (when needed). This requires vibration, acoustic, and thermal vacuum. coordination with the ILS engineer. Environmental testing may be combined with• Acquisition strategy (i.e., in-house development functional testing if test objectives warrant. or system contract). Often a NASA field center Verification by analysis is a process used in can shape a contractors verification process lieu of (or in addition to) testing to verify compliance to through the projects Statement of Work (SoW). specifications/requirements. The selected techniques• Degree of design inheritance and may include systems engineering analysis, statistics hardware/software reuse. and qualitative analysis, computer and hardware simulations, and com-puter modeling. Analysis mayVerification Methods and Techniques. The system be used when it can be determined that: (1) rigorousengineer needs to understand what methods and and accurate analysis is possible; (2) testing is nottechniques the verification engineer uses to verify feasible or cost-effective; (3) similarity is notcompliance with requirements. In brief, these methods applicable; and/or (4) verification by inspection is notand techniques are: adequate. Verification by demonstration is the use of• Test actual demonstration techniques in conjunction with• Analysis requirements such as maintainability and human• Demonstration engineering features. Verification by similarity is the• Similarity process of assessing by review of prior acceptance data or hardware configuration and applications that• Inspection the article is similar or identical in design and• Simulation manufacturing process to another article that has• Validation of records. previously been qualified to equivalent or more stringent specifications. Verification by inspection is Analyses and Models the physical evaluation of equipment and/or documentation to verify design features. Inspection is Analyses based on models are used extensively used to verify construction features, workmanship, throughout a program/project to verify and and physical dimensions and condition (such as determine compliance to performance and design cleanliness, surface finish, and locking hardware). requirements. Most verification requirements that Verification by simulation is the process of verifying cannot be verified by a test activity are verified design features and performance using hardware or through analyses and modeling. The analysis and software other than flight items. Verification by modeling process begins early in the project life validation of records is the process of using cycle and continues through most of Phase D; manufacturing records at end-item acceptance to these analyses and models are updated verify construction features and processes for flight periodically as actual data that are used as inputs hardware. become available. Often, analyses and models are validated or corroborated by the results of a Verification Stages. Verification stages are defined test activity. 
Any verification-related results should periods of verification activity when different be documented as part of the projects archives. verification goals are met. In this handbook, the following verification stages are used for flight systems: Verification by test is the actual operation of • Developmentequipment during ambient conditions or when • Qualificationsubjected to specified environments to evaluateperformance. Two subcategories can be defined: • Acceptancefunctional testing and environmental testing. • Preparation for deployment (also known as preFunctional testing is an individual test or series of launch)electrical or mechanical performance tests conducted • Operational (also known as on-orbit or in-flight)on flight or flight-configured hardware and/or software • Disposal (as needed).at conditions equal to or less than designspecifications. Its purpose is to establish that the The development stage is the period duringsystem performs satisfactorily in accordance with which a new project or system is formulated anddesign and performance specifications. Functional implemented up to the manufacturing of qualificationtesting generally is performed at ambient conditions. or flight hardware. Verification activities during thisFunctional testing is performed before and after each stage (e.g., breadboard testing) provide confidenceenvironmental test or major move in order to verify that the system can accomplish mission
  • NASA Systems Engineering Handbook Page 94goals/objectives. When tests are conducted during should be in accordance with project milestones, andthis stage, they are usually performed by the design should be updated as verification activities areorganization, or by the design and test organizations refined.together. Also, some program/project requirementsmay be verified or partially verified through the Verification Reportsactivities of the PDR and CDR, both of which occurduring this stage. Any development activity used to A verification report should be provided for eachformally satisfy program/project requirements should analysis and at a minimum, for each major testhave quality assurance oversight. activity, such as functional testing, environmental The qualification stage is the period during which testing, and end-to-end compatibility testingthe flight (protoflight approach) or flight-type hardware occurs over long periods of time or is separatedis verified to meet functional, performance. and by other activities, verification reports may bedesign requirements. Verifications during this stage needed for each individual test activity, such asare conducted on flight-configured hardware at functional testing, acoustic testing, vibrationconditions more severe than acceptance conditions to testing, and thermal vacuum/thermal balanceestablish that the hardware wild perform satisfactorily testing. Verification reports should be completedin the flight environments with sufficient margin. The within a few weeks following a test, and shouldacceptance stage is the period during which the provide evidence of compliance with thedeliverable flight end-item is shown to meet verification requirements for which it wasfunctional, performance, and design requirements conducted. The verification report should includeunder conditions specified for the mission. The as appropriate:acceptance stage ends with shipment of the flight Verification objectives and degree to which theyhardware to the launch site. were met The preparation for deployment stage begins with Description of verification activitythe arrival of the flight hardware and/or software at the Test configuration and differences from flightlaunch site and terminates at launch. Requirements configurationverified during this stage are those that demand the Specific result of each test and each procedureintegrated vehicle and/or launch site facilities. The including annotated tests • Specific result of eachoperational verification stage begins at liftoff; during analysisthis stage, flight systems are verified to operate in Test performance data tables, graphs,space environment conditions, and requirements illustrations, and picturesdemanding space environments are verified. The Descriptions of deviations from nominal results,disposal stage is the period during which disposal problems/failures, approved anomaly correctiverequirements are verified. actions, and re-test activity Summary of non-conformance/discrepancy6.6.2 Verification Program Planning reports including dispositions Conclusion and recommendations relative to success of verification activity Verification program planning is an Status of support equipment as affected by testinteractive and lengthy process occurring during all Copy of as-run procedurephases of a project. but more heavily during Phase C. 
During planning, the verification engineer also identifies the documentation necessary to support the verification program. This documentation normally includes: (1) a Verification Requirements Matrix (VRM), (2) a Master Verification Plan (MVP), (3) a Verification Requirements and Specifications Document (VRSD), and (4) a Verification Requirements Compliance Document (VRCD). Documentation for test procedures and reports may also be defined. Because the system engineer should be familiar with these basic elements of a verification process, each of these is covered below.

Verification Requirements Matrix. The Verification Requirements Matrix (VRM) is that portion of a requirements document (generally a System Requirements Document or CI specification) that defines how each functional, performance, and design requirement is to be verified, the stage in which verification is to occur, and (sometimes) the applicable verification levels. The verification engineer develops the VRM in coordination with the design, systems engineering, and test organizations. VRM contents are tailored to each project's requirements, and the level of detail in VRMs may vary. The VRM is baselined as a result of the PDR, and essentially establishes the basis for the verification program. A sample VRM for a CI specification is shown in Appendix B.9.
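Because the VRM is essentially a table keyed by requirement, it is straightforward to represent and query in software. The sketch below is a hypothetical illustration, not the handbook's format; the requirement IDs, text, and column choices (method, stage, level) are assumptions based on the description above.

```python
# Illustrative sketch (not the handbook's VRM format): a Verification
# Requirements Matrix held as rows, with a simple query by verification stage.
vrm = [
    # requirement ID, requirement statement,            method,       stage,           level
    ("REQ-101", "Pointing error < 2 arcsec (3-sigma)",  "Test",       "Qualification", "System"),
    ("REQ-102", "Instrument mass <= 250 kg",            "Analysis",   "Acceptance",    "System"),
    ("REQ-103", "Survive 10 g axial load",              "Test",       "Qualification", "Subsystem"),
    ("REQ-104", "Connector keying per ICD",             "Inspection", "Acceptance",    "Part"),
]

def requirements_by_stage(matrix, stage):
    """Return the requirement IDs planned for verification in a given stage."""
    return [row[0] for row in matrix if row[3] == stage]

print(requirements_by_stage(vrm, "Qualification"))   # ['REQ-101', 'REQ-103']
```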
Master Verification Plan. The Master Verification Plan (MVP) is the document that describes the overall verification program. The MVP provides the content and depth of detail necessary to provide full visibility of all verification activities. Each major activity is defined and described in detail. The plan encompasses qualification, acceptance, pre-launch, operational, and disposal verification activities for flight hardware and software. (Development stage verification activities are not normally documented in the plan, but may be documented elsewhere.) The plan provides a general schedule and sequence of events for major verification activities. It also describes test software, Ground Support Equipment (GSE), and facilities necessary to support the verification activities. The verification engineer develops the plan through a thorough understanding of the verification program concept, the requirements in the Program (i.e., Level I) Requirements Document (PRD), System/Segment (i.e., Level II) Requirements Document (SRD), and/or the CI specification, and the methods identified in the VRM of those documents. Again, the development of the plan requires that the verification engineer work closely with the design, systems engineering, and test organizations. A sample outline for this plan is illustrated in Appendix B.10.

Verification Requirements and Specifications Document. The Verification Requirements and Specifications Document (VRSD) defines the detailed requirements and specifications for the verification of a flight article, including the ground system/segment. The VRSD specifies requirements and specifications for activities covering qualification through operational verification. Requirements are also defined for flight software verification after the software has been installed in the flight article. The VRSD should cover verifications by all methods; some programs/projects, however, use a document that defines only requirements to be satisfied by test. The VRSD should include all requirements defined in Level I, II, and III requirements documents plus derived requirements. The VRSD defines the acceptance criteria and any constraints for each requirement. The VRSD typically identifies the locations where requirements will be verified. On large programs/projects, a VRSD is normally developed for each verification activity/location (e.g., thermal-vacuum testing), and is tailored to include requirements for that verification activity only. The verification engineer develops the VRSD from an understanding of the requirements, the verification program concept, and the flight article. The VRSD is baselined prior to the start of the verification activity. The heart of the VRSD is a data table that includes the following fields:
• A numerical designator assigned to each requirement
• A statement of the specific requirement to be verified
• The "pass/fail" criteria and tolerances for each requirement
• Any constraints that must be observed
• Any remarks to aid in the understanding of the requirement
• Location where the requirement will be verified.

The VRSD, along with flight article drawings and schematics, is the basis for the development of verification procedures, and is also used as one of the bases for development of the Verification Requirements Compliance Document (VRCD).
Verification Requirements Compliance Document. The Verification Requirements Compliance Document (VRCD) provides the evidence of compliance to each Level I through Level n design, performance, safety, and interface requirement, and to each VRSD requirement. The flowdown to VRSD requirements completes the full requirements traceability. Compliance with all the requirements ensures that Level I requirements have been met.

The VRCD defines, for each requirement, the method(s) of verification and corresponding compliance information for each method employed. The compliance information provides either the actual data, or a reference to the location of the actual data that shows compliance with the requirement. (The document also shows any non-compliances by referencing the related Non-Compliance Report (NCR) or Problem/Failure Report (P/FR); following resolution of the anomaly, the document specifies appropriate re-verification information.) The compliance information may reference a verification report, an automated test program, a verification procedure, an analysis report, or a test. The inputting of compliance information into the compliance document occurs over a lengthy period of time, and on large systems and payloads the effort may be continuous. The information in the compliance document must be up-to-date for the System Acceptance Review(s) (SAR) and Flight Readiness Review (FRR). The compliance document is not baselined because compliance information is input to the document throughout the entire project life cycle. It is, however, an extremely important part of the project's archives.

The heart of the Verification Requirements Compliance Document is also a data table with links to the corresponding requirements. The VRCD includes the following fields:
• A numerical designator assigned to each requirement
• A numerical designator that defines the document where the requirement is defined
• A statement of the specific requirement for which compliance is to be defined
• Verification method used to verify the requirement
• Location of the data that show compliance with the requirement statement. This information could be a test, report, procedure, analysis report, or other information that fully defines where the compliance data could be found. Retest information is also shown.
• Any non-conformances that occurred during the verification activities
• Any statements of compliance information as to any non-compliance or acceptance by means other than the method identified, such as a waiver.
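Because compliance information accumulates over the whole life cycle, projects typically need a quick way to see which requirements are still open before SAR or FRR. The sketch below is a hypothetical roll-up over VRCD-like records; the field names and example entries are assumptions based on the list above, not the handbook's format.

```python
# Illustrative sketch (not from the handbook): roll up VRCD-like records to
# find requirements that are not yet closed out before SAR/FRR.
vrcd = [
    {"req": "REQ-101", "method": "Test",     "evidence": "TR-0459",  "nonconformance": None},
    {"req": "REQ-102", "method": "Analysis", "evidence": "AR-0112",  "nonconformance": None},
    {"req": "REQ-103", "method": "Test",     "evidence": None,       "nonconformance": "NCR-0027"},
    {"req": "REQ-104", "method": "Waiver",   "evidence": "WVR-0003", "nonconformance": None},
]

def open_items(records):
    """Requirements lacking compliance evidence or carrying an unresolved NCR/P-FR."""
    return [r["req"] for r in records if r["evidence"] is None or r["nonconformance"]]

print(open_items(vrcd))   # ['REQ-103']
```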
Verification Procedures. The verification procedures are documents that provide step-by-step instructions for performing a given verification activity. The procedure is tailored to the verification activity that is to be performed to satisfy a requirement, and could be a test, demonstration, or any other verification-related activity. The procedure is written to satisfy requirements defined by the VRSD, and is submitted prior to the Test Readiness Review (TRR) or the start of the verification activity in which the procedure is used. (See sidebar on TRRs.)

Procedures are also used to verify the acceptance of facilities, electrical and mechanical ground support equipment, and special test equipment. The information generally contained in a procedure is as follows, but it may vary according to the activity and test article:
• Nomenclature and identification of the test article or material
• Identification of test configuration and any differences from flight configuration
• Identification of objectives and criteria established for the test by the applicable verification specification
• Characteristics and design criteria to be inspected or tested, including values, with tolerances, for acceptance or rejection
• Description, in sequence, of steps and operations to be taken
• Identification of computer software required
• Identification of measuring, test, and recording equipment to be used, specifying range, accuracy, and type
• Certification that required computer test programs/support equipment and software have been verified prior to use with flight hardware
• Any special instructions for operating data recording equipment or other automated test equipment as applicable
• Layouts, schematics, or diagrams showing identification, location, and interconnection of test equipment, test articles, and measuring points
• Identification of hazardous situations or operations
• Precautions and safety instructions to ensure safety of personnel and prevent degradation of test articles and measuring equipment
• Environmental and/or other conditions to be maintained with tolerances
• Constraints on inspection or testing
• Special instructions for non-conformances and anomalous occurrences or results
• Specifications for facility, equipment maintenance, housekeeping, certification inspection, and safety and handling requirements before, during, and after the total verification activity.

The procedure may provide blank spaces for recording of results and narrative comments in order that the completed procedure can serve as part of the verification report. The as-run and certified copy of the procedure is maintained as part of the project's archives.

Test Readiness Reviews

A Test Readiness Review (TRR) is held prior to each major test to ensure the readiness of all ground, flight, and operational systems to support the performance of the test. A review of the detailed status of the facilities, Ground Support Equipment (GSE), test design, software, procedures, and verification requirements is made. The test activities and schedule are outlined and personnel responsibilities are identified. Verification emphasis is directed toward ensuring that all verification requirements that have been identified for the test have been included in the test design and procedures.
6.6.3 Qualification Verification

Qualification stage verification activities begin after completion of development of the flight hardware designs, and include analyses and testing to ensure that the flight or flight-type hardware (and software) will meet functional and performance requirements in anticipated environmental conditions. Qualification tests generally are designed to subject the hardware to worst case loads and environmental stresses. Some of the verifications performed to ensure hardware compliance to worst case loads and environments are vibration/acoustic, pressure limits, leak rates, thermal vacuum, thermal cycling, electromagnetic interference and electromagnetic compatibility (EMI/EMC), high and low voltage limits, and life time/cycling. During this stage, many performance requirements are verified, while analyses and models are updated as test data are acquired. Safety requirements, defined by hazard analysis reports, may also be satisfied by qualification testing.
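The handbook does not prescribe specific qualification margins; how much "more severe than acceptance conditions" a qualification test is set is a project decision. The sketch below is purely illustrative, with an assumed 3 dB amplitude margin and doubled duration for one environment (random vibration), to show how such a tabulation might be automated.

```python
# Illustrative sketch only: deriving a qualification test level from an
# acceptance level plus an assumed margin. The 3 dB margin and 2x duration
# are assumptions for illustration, not handbook-mandated values.
def qual_random_vibration(acceptance_grms, margin_db=3.0,
                          accept_duration_min=1.0, duration_factor=2.0):
    """Return (qualification level in Grms, qualification duration in minutes)."""
    qual_grms = acceptance_grms * 10 ** (margin_db / 20.0)   # dB applied to amplitude
    return qual_grms, accept_duration_min * duration_factor

print(qual_random_vibration(6.8))   # approximately (9.6 Grms, 2.0 min)
```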
Qualification usually occurs at the component or subsystem level, but could occur at the system level as well. When a project decides against building dedicated qualification hardware, and uses the flight hardware itself for qualification purposes, the process is termed protoflight. Additional information on protoflight testing is contained in MSFC-HDBK-670, General Environmental Test Guidelines (GETG) for Protoflight Instruments and Experiments.

6.6.4 Acceptance Verification

The acceptance stage verification activities provide the assurance that the flight hardware and software are in compliance with all functional, performance, and design requirements, and are ready for shipment to the launch site. The acceptance stage begins with the acceptance of each individual component or piece part for assembly into the flight article and continues through the SAR.

Some verifications cannot be performed after a flight article, especially a large one, has been assembled and integrated (e.g., due to inaccessibility). When this occurs, these verifications are performed during fabrication and integration, and are known as in-process tests. Acceptance testing, then, begins with in-process testing and continues through functional testing, environmental testing, and end-to-end compatibility testing. Functional testing normally begins at the component level and continues at the systems level, ending with all systems operating simultaneously. All tests are performed in accordance with requirements defined in the VRSD. When flight hardware is unavailable, or its use is inappropriate for a specific test, simulators may be used to verify interfaces.

Anomalies occurring during a test are documented on the appropriate reporting system (NCR or P/FR), and a proposed resolution should be defined before testing continues. Major anomalies, or those that are not easily dispositioned, may require resolution by a collaborative effort of the system engineer and the design, test, and other organizations. Where appropriate, analyses and models are validated and updated as test data are acquired.
6.6.5 Preparation for Deployment Verification

The pre-launch verification stage begins with the arrival of the flight article at the launch site and concludes at liftoff. During this stage, the flight article is processed and integrated with the launch vehicle. The launch vehicle could be the Shuttle, some other launch vehicle, or the flight article could be part of the launch vehicle. Verification requirements for this stage are defined in the VRSD. When the launch site is the Kennedy Space Center, the Operations and Maintenance Requirements and Specifications Document (OMRSD) is used in lieu of the VRSD.

Verifications performed during this stage ensure that no visible damage to the system has occurred during shipment and that the system continues to function properly. If system elements are shipped separately and integrated at the launch site, testing of the system and system interfaces is generally required. If the system is integrated into a carrier, the interface to the carrier must also be verified. Other verifications include those that occur following integration into the launch vehicle and those that occur at the launch pad; these are intended to ensure that the system is functioning and in its proper launch configuration. Contingency verifications and procedures are developed for any contingencies that can be foreseen to occur during pre-launch and countdown. These contingency verifications and procedures are critical in that some contingencies may require a return of the launch vehicle or flight article from the launch pad to a processing facility.

Software IV&V

Some project managers/system engineers may wish to add IV&V (Independent Verification and Validation) to the software verification program. IV&V is a process whereby the products of the software development life cycle are independently reviewed, verified, and validated by an organization that is neither the developer nor the acquirer of the software. The IV&V agent should have no stake in the success or failure of the software; the agent's only interest should be to make sure that the software is thoroughly tested against its requirements. IV&V activities duplicate the project's V&V activities step-by-step during the life cycle, with the exception that the IV&V agent does no informal testing. If IV&V is employed, formal …

6.6.6 Operational and Disposal Verification

Operational verification provides the assurance that the system functions properly in a (near-) zero gravity and vacuum environment. These verifications are performed through system activation and operation, rather than through a verification activity. Systems that are assembled on-orbit must have each interface verified, and must function properly during end-to-end testing. Mechanical interfaces that provide fluid and gas flow must be verified to ensure no leakage occurs, and that pressures and flow rates are within specification. Environmental systems must be verified. The requirements for all operational verification activities are defined in the VRSD.

Disposal verification provides the assurance that the safe deactivation and disposal of all system products and processes has occurred. The disposal stage begins in Phase E at the appropriate time (i.e., either as scheduled, or earlier in the event of premature failure or accident), and concludes when all mission data have been acquired and verifications necessary to establish compliance with disposal requirements are finished. Both operational and disposal verification activities may also include validation assessments, that is, assessments of the degree to which the system accomplished the desired mission goals/objectives.
6.7 Producibility

Producibility is a system characteristic associated with the ease and economy with which a completed design can be transformed (i.e., fabricated, manufactured, or coded) into a hardware and/or software realization. While major NASA systems tend to be produced in small quantities, a particular producibility feature can be critical to a system's cost-effectiveness, as experience with the Shuttle's thermal tiles has shown.

6.7.1 Role of the Production Engineer

The production engineer supports the systems engineering process (as a part of the multi-disciplinary PDT) through an active role in implementing specific design features to enhance producibility, and by performing the production engineering analyses needed by the project. These tasks and analyses include:
• Performing the manufacturing/fabrication portion of the system risk management program (see Section 4.6). This is accomplished by conducting a rigorous production risk assessment and by planning effective risk mitigation actions.
• Identifying system design features that enhance producibility. Efforts usually focus on design simplification, fabrication tolerances, and avoidance of hazardous materials.
• Conducting producibility trade studies to determine the most cost-effective fabrication/manufacturing process
• Assessing production feasibility within project constraints. This may include assessing contractor and principal subcontractor production experience and capability, new fabrication technology, special tooling, and production personnel training requirements.
• Identifying long-lead items and critical materials
• Estimating production costs as a part of life-cycle cost management
• Developing production schedules
• Developing approaches and plans to validate fabrication/manufacturing processes.

The results of these tasks and production engineering analyses are documented in the Manufacturing Plan with a level of detail appropriate to the phase of the project. The production engineer also participates in and contributes to major project reviews (primarily PDR and CDR) on the above items, and to special interim reviews such as the Production Readiness Review (ProRR).
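One of the production engineering tasks listed above is estimating production costs. A common ingredient of such estimates is a learning curve, in which the cost of the n-th unit falls by a fixed percentage each time cumulative output doubles. The sketch below is an illustrative calculation only, not a NASA cost model; the 90 percent slope and first-unit cost are assumptions.

```python
# Illustrative learning-curve sketch (assumed numbers, not a NASA model):
# unit cost T_n = T_1 * n**b, where b = log2(learning-curve slope).
import math

def unit_cost(first_unit_cost, n, slope=0.90):
    """Cost of the n-th unit for a given learning-curve slope (e.g., 0.90 = 90%)."""
    b = math.log(slope, 2)          # negative exponent; cost drops as n grows
    return first_unit_cost * n ** b

def lot_cost(first_unit_cost, quantity, slope=0.90):
    """Total cost of a production lot of `quantity` units."""
    return sum(unit_cost(first_unit_cost, n, slope) for n in range(1, quantity + 1))

print(round(unit_cost(10.0, 8), 1))   # 7.3  -> 8th unit, in the same units as T_1
print(round(lot_cost(10.0, 8), 1))    # 65.7 -> total for a lot of 8
```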
6.7.2 Producibility Tools and Techniques

Manufacturing Functional Flow Block Diagrams (FFBDs). Manufacturing FFBDs are used in the same way system FFBDs, described in Appendix B.7.1, are used. At the top level, manufacturing FFBDs supplement and clarify the system's manufacturing sequence.

Risk Management Templates. The risk management templates of DoD 4245.7M, Transition from Development to Production... Solving the Risk Equation, are a widely recognized series of risks, risk responses, and lessons learned from DoD experience. These templates, which were designed to reduce risks in production, can be tailored to individual NASA projects.

Producibility Assessment Worksheets. These worksheets, which were also developed for DoD, use a judgment-based scoring approach to help choose among alternative production methods. See Producibility Measurement for DoD Contracts.

Producibility Models. Producibility models are used in addressing a variety of issues such as assessing the feasibility of alternative manufacturing plans, and estimating production costs as a part of life-cycle cost management. Specific producibility models may include:
• Scheduling models for estimating production output, and for integrating system enhancements and/or spares production into the manufacturing sequence
• Manufacturing or assembly flow simulations, e.g., discrete event simulations of factory activities
• Production cost models that include learning and production rate sensitivities. (See sidebar page 82.)

Statistical Process Control/Design of Experiments. These techniques, long applied in manufacturing to identify the causes of unwanted variations in product quality and reduce their effects, have had a rebirth under TQM. A collection of currently popular techniques of this new quality engineering is known as Taguchi methods. For first-hand information on Taguchi methods, see Taguchi's book, Quality Engineering in Production Systems, 1989. A handbook approach to some of these techniques can be found in the Navy's Producibility Measurement Guidelines: Methodologies for Product Integrity.
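As a concrete illustration of the statistical process control idea (not an example from the handbook), the sketch below computes Shewhart-style control limits for a stream of individual measurements; points outside the limits flag potential special-cause variation worth investigating. The data values are assumptions.

```python
# Illustrative SPC sketch (hypothetical data): an individuals control chart.
# Process sigma is estimated from the average moving range (MR-bar / 1.128),
# the usual Shewhart approach for individual measurements.
def control_limits(samples, k=3.0):
    center = sum(samples) / len(samples)
    moving_ranges = [abs(b - a) for a, b in zip(samples, samples[1:])]
    sigma_hat = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    return center - k * sigma_hat, center, center + k * sigma_hat

def out_of_control(samples, k=3.0):
    lcl, _, ucl = control_limits(samples, k)
    return [(i, x) for i, x in enumerate(samples) if x < lcl or x > ucl]

# Assumed bond-line thickness readings in mm; only the 2.60 reading is flagged.
thickness_mm = [2.02, 1.98, 2.01, 2.00, 1.99, 2.03, 2.01, 1.97, 2.02, 2.00, 2.60, 2.01]
print(out_of_control(thickness_mm))   # [(10, 2.6)]
```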
6.8 Social Acceptability

NASA systems must be acceptable to the society that funds them. The system engineer takes this into account by integrating mandated social concerns into the systems engineering process. For some systems, these concerns can result in significant design and cost penalties. Even when social concerns can be met, the planning and analysis associated with doing so can be time-consuming (even to the extent of affecting the project's critical path), and use significant specialized engineering resources. The system engineer must include these costs in high-level trade studies of alternative architectures/designs.

6.8.1 Environmental Impact

NASA policy and federal law require that all NASA actions that may impact the quality of the environment be executed in accordance with the policies and procedures of the National Environmental Policy Act (NEPA). For any NASA project or other major NASA effort, this requires that studies and analyses be produced explaining how and why the project is planned, and the nature and scope of its potential environmental impact. These studies must be performed whether the project is conducted at NASA Headquarters, a field center, or a contractor facility, and must properly begin at the earliest period of project planning (i.e., not later than Phase A). Findings, in the form of an Environmental Assessment (EA) and, if warranted, through the more thorough analyses of an Environmental Impact Statement (EIS), must be presented to the public for review and comment. (See sidebar on NEPA.)

What is NEPA?

The National Environmental Policy Act (NEPA) of 1969 declares a national environmental policy and goals, and provides a method for accomplishing those goals. NEPA requires an Environmental Impact Statement (EIS) for "major federal actions significantly affecting the quality of the human environment." Some environmental impact reference documents include:
• National Environmental Policy Act (NEPA) of 1969, as amended (40 CFR 1500-1508)
• Procedures for Implementing the National Environmental Policy Act (14 CFR 1216.3)
• Implementing the Requirements of the National Environmental Policy Act, NHB 8800.11
• Executive Order 11514, Protection and Enhancement of Environmental Quality, March 5, 1970, as amended by Executive Order 11991, May 24, 1977
• Executive Order 12114, Environmental Effects Abroad of Major Federal Actions, January 4, 1979.

At the outset, some NASA projects will be of such a magnitude and nature that an EIS is clearly going to be required by NEPA, and some will clearly not need an EIS. Most major NASA projects, however, fall in between, where the need for an EIS is a priori unclear; in such cases, an EA is prepared to determine whether an EIS is indeed required. NASA's experience since 1970 has been that projects in which there is the release (or potential release) of large or hazardous quantities of pollutants (rocket exhaust gases, exotic materials, or radioactive substances) require an EIS. For projects in this category, an EA is not performed, and the project's analyses should focus on and support the preparation of an EIS.
The NEPA process is meant to ensure that the project is planned and executed in a way that meets the national environmental policy and goals. First, the process helps the system engineer shape the project by putting potential environmental concerns in the forefront during Phase A. Secondly, the process provides the means for reporting to the public the project's rationale and implementation method. Finally, it allows public review of and comment on the planned effort, and requires NASA to consider and respond to those comments. The system engineer should be aware of the following NEPA process elements.

Environmental Assessment (EA). An EA is a concise public document that serves to provide sufficient evidence and analyses for determining whether to prepare either an EIS or a Finding of No Significant Impact (FONSI). The analyses performed should identify the environmental effects of all reasonable alternative methods of achieving the project's goals/objectives so that they may be compared. The alternative of taking no action (i.e., not doing the project) should also be studied. Although there is no requirement that NASA select the alternative having the least environmental impact, there must be sufficient information available to make clear what those impacts would be, and to describe the reasoning behind NASA's preferred selection. The environmental analyses are an integral part of the project's systems engineering process.

The EA is the responsibility of the NASA Headquarters Program Associate Administrator (PAA) responsible for the proposed project or action. The EA can be carried out at Headquarters or at a NASA field center. Approval of the EA is made by the responsible PAA. Most often, approval of the EA takes the form of a memorandum to the Associate Administrator (AA) for Management Systems and Facilities (Code J) stating either that the project requires an EIS, or that it does not. If an EIS is found to be necessary, a Notice of Intent (NOI) to prepare an EIS is written; if an EIS is found to be unnecessary, a Finding of No Significant Impact (FONSI) is written instead.

Finding of No Significant Impact (FONSI). A FONSI should briefly present the reasons why the proposed project or action, as presented in the EA, has been judged to have no significant effect on the human environment, and does not therefore require the preparation of an EIS. The FONSI for projects and actions that are national in scope is published in the Federal Register, and is available for public review for a 30-day period. During that time, any supporting information is made readily available on request.
Notice of Intent (NOI). A Notice of Intent to file an EIS should include a brief description of the proposed project or action, possible alternatives, the primary environmental issues uncovered by the EA, and NASA's proposed scoping procedure, including the time and place of any scoping meetings. The NOI is prepared by the responsible Headquarters PAA and published in the Federal Register. It is also sent to interested parties.

Scoping. The responsible Headquarters PAA must conduct an early and open process for determining the scope of issues to be addressed in the EIS, and for identifying the significant environmental issues. Scoping is also the responsibility of the Headquarters PAA responsible for the proposed project or action; however, the responsible Headquarters PAA often works closely with the Code J AA. Initially, scoping must consider the full range of environmental parameters en route to identifying those that are significant enough to be addressed in the EIS. Examples of the environmental categories and questions that should be asked in the scoping process are contained in NHB 8800.11, Implementing the Provisions of the National Environmental Policy Act, Section 307.d.

Environmental Impact Statement (EIS). The EA and scoping elements of the NEPA process provide the responsible Headquarters PAA with an evaluation of significant environmental effects and issues that must be covered in the EIS. Preparation of the EIS itself may be carried out by NASA alone, or with the assistance or cooperation of other government agencies and/or a contractor. If a contractor is used, the contractor should execute a disclosure statement prepared by NASA Headquarters indicating that the contractor has no interest in the outcome of the project.
The section on environmental consequences is the analytic heart of the EIS, and provides the basis for the comparative evaluation of the alternatives. The analytic results for each alternative should be displayed in a way that highlights the choices offered the decision maker(s). An especially suitable form is a matrix showing the alternatives against the categories of environmental impact (e.g., air pollution, water pollution, endangered species). The matrix is filled in with (an estimate of) the magnitude of the environmental impact for each alternative and category. The subsequent discussion of alternatives is an extremely important part of the EIS, and should be given commensurate attention.

NASA review of the draft EIS is managed by the Code J AA. When submitted for NASA review, the draft EIS should be accompanied by a proposed list of federal, state, and local officials, and other interested parties. External review of the draft EIS is also managed by the Code J AA. A notice announcing the release and availability of the draft EIS is published in the Federal Register, and copies are distributed with a request for comments. Upon receipt of the draft, the Environmental Protection Agency (EPA) also places a notice in the Federal Register, and the date of that publication is the date that all time limits related to the draft's release begin. A minimum of 45 days must be allowed for comments. Comments from external reviewers received by the Code J AA will be sent to the office responsible for preparing the EIS. Each comment should be incorporated in the final EIS.

The draft form of the final EIS, modified as required by the review process just described, should be forwarded to the Code J AA for a final review before printing and distribution. The final version should include satisfactory responses to all responsible comments. While NASA need not yield to each and every opposing comment, NASA's position should be rational, logical, and based on data and arguments stronger than those cited by the commenters opposing the NASA views.

According to NHB 8800.11, Implementing the Provisions of the National Environmental Policy Act (Section 309.b), "an important element in the EIS process is involvement of the public. Early involvement can go a long way toward meeting complaints and objections regarding a proposed action, and experience has taught that a fully informed and involved public is considerably more supportive of a proposed action. When a proposed action is believed likely to generate significant public concern, the public should be brought in for consultation in the early planning stages. If an EIS is warranted, the public should be involved both in scoping and in the EIS review. Early involvement can help lead to selection of the best alternative and to the least public objection."

Record of Decision (ROD). When the EIS process has been completed and public review periods have elapsed, NASA is free to make and implement the decision(s) regarding the proposed project or action. At that time, a Record of Decision (ROD) is prepared by the Headquarters PAA responsible for the project or action. The ROD becomes the official public record of the consideration of environmental factors in reaching the decision. The ROD is not published in the Federal Register, but must be kept in the official files of the program/project in question and made available on request.

6.8.2 Nuclear Safety Launch Approval

Presidential Directive/National Security Council Memorandum-25 (PD/NSC-25) requires that flight projects calling for the use of radioactive sources follow a lengthy analysis and review process in order to seek approval for launch. The nuclear safety launch approval process is separate and distinct from the NEPA compliance process. While there may be overlaps in the data-gathering for both, the documentation required for NEPA and nuclear safety launch approval fulfill separate federal and NASA requirements. While NEPA is to be done at the earliest stages of the project, launch approval officially begins with Phase C/D.
Phase A/B activities are driven by the requirements of the EA/EIS. At the earliest possible time (not later than Phase A), the responsible Headquarters PAA must undertake to develop the project EA/EIS and a Safety Analysis/Launch Approval Plan in coordination with the nuclear power system integration engineer and/or the launch vehicle integration engineer. A primary purpose of the EA/EIS is to ensure a comprehensive assessment of the rationale for choosing a radioactive source. In addition, the EA/EIS illuminates the environmental effects of alternative mission designs, flight systems, and launch vehicles, as well as the relative nuclear safety concerns of each alternative. The launch approval engineer ensures that the following specific requirements are met during Phase A:
• Conduct a radioactive source design trade study that includes the definition, spacecraft design impact evaluation, and cost trades of all reasonable alternatives
• Identify the flight system requirements that are specific to the radioactive source
• For nuclear power alternatives, identify flight system power requirements and alternatives, and define the operating and accident environments to allow DOE (the U.S. Department of Energy) to assess the applicability of existing nuclear power system design(s).

During Phase B, activities depend on the specifics of the project's EA/EIS plan. The responsible Headquarters PAA determines whether the preparation and writing of the EA/EIS will be done at a NASA field center, at NASA Headquarters, or by a contractor, and what assistance will be required from other field centers, the launch facility, DOE, or other agencies and organizations. The launch approval engineer ensures that the following specific requirements are met during Phase B:
• Update and refine the project, flight system, launch vehicle, and radioactive source descriptions
• Update and refine the radioactive source design trade study developed during Phase A
• Assist DOE where appropriate in conducting a preliminary assessment of the mission's nuclear risk and environmental hazards.
The launch approval engineer is also responsible for coordinating the activities, interfaces, and record-keeping related to mission nuclear safety issues. The following tasks are managed by the launch approval engineer:
• Develop the project EA/EIS and Safety Analysis/Launch Approval Plan
• Maintain a database of documents related to EA/EIS and nuclear safety launch approval tasks. This database will help form and maintain the audit trail record of how and why technical decisions and choices are made in the mission development and planning process. Attention to this activity early on saves time and expense later in the launch approval process, when the project may be called upon to explain why a particular method or alternative was given greater weight in the planning process.
• Provide documentation and review support as appropriate in the generation of mission data and trade studies required to support the EA/EIS and safety analyses
• Establish a project point-of-contact to the launch vehicle integration engineer, DOE, and NASA Headquarters regarding support to the EA/EIS and nuclear safety launch approval processes. This includes responding to public and Congressional queries regarding radioactive source safety issues, and supporting proceedings resulting from any litigation that may occur.
• Provide technical analysis support as required for the generation of accident and/or command destruct environments for the radioactive source safety analysis. The usual technique for the technical analysis is a probabilistic risk assessment (PRA). See Section 4.6.3.
6.8.3 Planetary Protection

The U.S. is a signatory to the United Nations Treaty of Principles Governing the Activities of States in the Exploration and Use of Outer Space, Including the Moon and Other Celestial Bodies. Known as the "Outer Space" treaty, it states in part (Article IX) that exploration of the Moon and other celestial bodies shall be conducted "so as to avoid their harmful contamination and also adverse changes in the environment of the Earth resulting from the introduction of extraterrestrial matter." NASA policy (NMI 8020.7D) specifies that the purpose of preserving solar system conditions is for future biological and organic constituent exploration. It also establishes the basic NASA policy for the protection of the Earth and its biosphere from planetary and other extraterrestrial sources of contamination.

The general regulations to which NASA flight projects must adhere are set forth in NHB 8020.12B, Planetary Protection Provisions for Robotic Extraterrestrial Missions. Different requirements apply to different missions, depending on which solar system object is targeted and the spacecraft or mission type (flyby, orbiter, lander, sample-return, etc.). For some bodies (such as the Sun, Moon, and Mercury), there are no outbound contamination requirements. Present requirements for the outbound phase of missions to Mars, however, are particularly rigorous.

Planning for planetary protection begins in Phase A, during which feasibility of the mission is established. Prior to the end of Phase A, the project manager must send a letter to the Planetary Protection Officer (PPO) within the Office of the AA for Space Science stating the mission type and planetary targets, and requesting that the mission be assigned a planetary protection category. Table 7 shows the current planetary protection categories and a summary of their associated requirements. Prior to the Preliminary Design Review (PDR) at the end of Phase B, the project manager must submit to the NASA PPO a Planetary Protection Plan detailing the actions that will be taken to meet the requirements. The project's progress and completion of the requirements are reported in a Planetary Protection Pre-Launch Report submitted to the NASA PPO for approval. The approval of this report at the Flight Readiness Review (FRR) constitutes the final approval for the project and must be obtained for permission to launch. An update to this report, the Planetary Protection Post-Launch Report, is prepared to report any deviations from the planned mission due to actual launch or early mission events. For sample return missions only, additional reports and reviews are required: prior to launch toward the Earth, prior to commitment to Earth reentry, and prior to the release of any extraterrestrial sample to the scientific community for investigation. Finally, at the formally declared end-of-mission, a Planetary Protection End-of-Mission Report is prepared. This document reviews the entire history of the mission in comparison to the original Planetary Protection Plan, and documents the degree of compliance with NASA's planetary protection requirements. This document is typically reported on by the NASA PPO at a meeting of the Committee on Space Research (COSPAR) to inform other spacefaring nations of NASA's degree of compliance with international planetary protection requirements.
Appendix A—Acronyms

Acronyms are useful because they provide a shorthand way to refer to an organization, a kind of document, an activity or idea, etc. within a generally understood context. Their overuse, however, can interfere with communications. The NASA Lexicon contains the results of an attempt to provide a comprehensive list of all acronyms used in NASA systems engineering. This appendix contains two lists: the acronyms used in this handbook and the acronyms of some of the major NASA organizations.

AA  Associate Administrator (NASA)
APA  Allowance for Program Adjustment
ACWP  Actual Cost of Work Performed
AGE  Aerospace Ground Equipment
AHP  Analytic Hierarchy Process
BCWP  Budgeted Cost of Work Performed
BCWS  Budgeted Cost of Work Scheduled
C/SCSC  Cost/Schedule Control System Criteria
CALS  Continuous Acquisition and Life-Cycle Support
CCB  Configuration (or Change) Control Board
CDR  Critical Design Review
CER  Cost Estimating Relationship
CI  Configuration Item
CIL  Critical Items List
CoF  Construction of Facilities
COSPAR  Committee on Space Research
COTR  Contracting Office Technical Representative
CPM  Critical Path Method
CR  Change Request
CSCI  Computer Software Configuration Item
CSM  Center for Systems Management
CWBS  Contract Work Breakdown Structure
DCR  Design Certification Review
DDT&E  Design, Development, Test and Evaluation
DoD  (U.S.) Department of Defense
DOE  (U.S.) Department of Energy
DR  Decommissioning Review
DSMC  Defense Systems Management College
EA  Environmental Assessment
EAC  Estimate at Completion
ECP  Engineering Change Proposal
ECR  Engineering Change Request
EIS  Environmental Impact Statement
EMC  Electromagnetic compatibility
EMI  Electromagnetic interference
EOM  End of Mission
EPA  (U.S.) Environmental Protection Agency
EVA  Extravehicular Activities
EVM  Earned Value Measurement
FCA  Functional Configuration Audit
FFBD  Functional Flow Block Diagram
FH  Flight Hardware
FMEA  Failure Modes and Effects Analysis
FMECA  Failure Modes, Effects, and Criticality Analysis
FONSI  Finding of No Significant Impact
FRR  Flight Readiness Review
GAO  General Accounting Office
GOES  Geosynchronous Orbiting Environmental Satellite
GSE  Ground Support Equipment
HQ  NASA Headquarters
HST  Hubble Space Telescope
I&V  Integration and Verification
ILS  Integrated Logistics Support
ILSP  Integrated Logistics Support Plan
ILSS  Integrated Logistics Support System
IOP  Institutional Operating Plan
IRAS  Infrared Astronomical Satellite
IV&V  Independent Verification and Validation
IVA  Intravehicular Activities
LEM  Lunar Excursion Module (Apollo)
LEO  Low Earth Orbit
LMEPO  Lunar/Mars Exploration Program Office
LMI  Logistics Management Institute
LOOS  Launch and Orbital Operations Support
LRU  Line Replaceable Unit
LSA  Logistics Support Analysis
LSAR  Logistics Support Analysis Record
MDT  Mean Downtime
MCR  Mission Concept Review
MDR  Mission Definition Review
MESSOC  Model for Estimating Space Station Operations
MICM  Multi-variable Instrument Cost Model
MLDT  Mean Logistics Delay Time
MMT  Mean Maintenance Time
MNS  Mission Needs Statement
MoE  Measure of (system) Effectiveness
MRB  Material Review Board
MRR  Mission Requirements Review
MTBF  Mean Time Between Failures
MTTF  Mean Time To Failure
MTTMA  Mean Time To a Maintenance Action
MTTR  Mean Time To Repair/Restore
NAR  Non-Advocate Review
NCR  Non-Compliance (or Non-Conformance) Report
NEPA  National Environmental Policy Act
NHB  NASA Handbook
NMI  NASA Management Instruction
NOAA  (U.S.) National Oceanic and Atmospheric Administration
NOI  Notice of Intent
OMB  Office of Management and Budget
OMRSD  Operations and Maintenance Requirements and Specifications Document (KSC)
ORLA  Optimum Repair Level Analysis
ORR  Operational Readiness Review
ORU  Orbital Replacement Unit
P/FR  Problem/Failure Report
PAA  Program Associate Administrator (NASA)
PAR  Program/Project Approval Review
PBS  Product Breakdown Structure
PCA  Physical Configuration Audit
PDR  Preliminary Design Review
PDT  Product Development Team
PDV  Present Discounted Value
PERT  Program Evaluation and Review Technique
POP  Program Operating Plan
PPAR  Preliminary Program/Project Approval Review
PPO  Planetary Protection Officer
PRA  Probabilistic Risk Assessment
PRD  Program Requirements Document
ProRR  Production Readiness Review
QA  Quality Assurance
QFD  Quality Function Deployment
RAM  Reliability, Availability, and Maintainability
RAS  Requirements Allocation Sheet
RID  Review Item Discrepancy
RMP  Risk Management Plan
ROD  Record of Decision
RTG  Radioisotope Thermoelectric Generator
SAR  System Acceptance Review
SDR  System Definition Review
SEB  Source Evaluation Board
SEMP  Systems Engineering Management Plan
SEPIT  Systems Engineering Process Improvement Task
SEWG  Systems Engineering Working Group (NASA)
SI  Le Système International d'Unités (the international [metric] system of units)
SIRTF  Space Infrared Telescope Facility
SOFIA  Stratospheric Observatory for Infrared Astronomy
SoSR  Software Specification Review
SoW  Statement of Work
SSR  System Safety Review
SRD  System/Segment Requirements Document
SRM&QA  Safety, Reliability, Maintainability, and Quality Assurance
SRR  System Requirements Review
STEP  Standard for the Exchange of Product (model) data
STS  Space Transportation System
SSA  Space Station Alpha
SSF  Space Station Freedom
TBD  To Be Determined; To Be Done
TDRS  Tracking and Data Relay Satellite
TLA  Time Line Analysis
TLS  Time Line Sheet
TPM  Technical Performance Measure(ment)
TQM  Total Quality Management
TRR  Test Readiness Review
V&V  Verification and Validation
VMP  Verification Master Plan
VRCD  Verification Requirements Compliance Document
VRM  Verification Requirements Matrix
VRSD  Verification Requirements and Specifications Document
WBS  Work Breakdown Structure
WFD  Work Flow Diagram

NASA Organizations

ARC  Ames Research Center, Moffett Field CA 94035
COSMIC  Computer Software Management & Information Center, University of Georgia, 382 E. Broad St., Athens GA 30602
GISS  Goddard Institute for Space Studies (GSFC), 2880 Broadway, New York NY 10025
GSFC  Goddard Space Flight Center, Greenbelt Rd., Greenbelt MD 20771
HQ  National Aeronautics and Space Administration Headquarters, Washington DC 20546
JPL  Jet Propulsion Laboratory, 4800 Oak Grove Dr., Pasadena CA 91109
JSC  Lyndon B. Johnson Space Center, Houston TX 77058
KSC  John F. Kennedy Space Center, Kennedy Space Center FL 32899
SCC  Slidell Computer Complex, 1010 Gauss Blvd, Slidell LA 70458
SSC  John C. Stennis Space Center, Stennis Space Center MS 39529
STIF  Scientific & Technical Information Facility, P.O. Box 8757, BWI Airport MD 21240
WFF  Wallops Flight Facility (GSFC), Wallops Island VA 23337
WSTF  White Sands Test Facility (JSC), P.O. Drawer MM, Las Cruces NM 88004
Appendix B—Systems Engineering Templates and Examples

Appendix B.1—A Sample SEMP Outline

An outline recommended by the Defense Systems Management College for the Systems Engineering Management Plan is shown below. This outline is a sample only, and should be tailored for the nature of the project and its inherent risks.

Systems Engineering Management Plan
Title Page
Introduction
Part 1 - Technical Program Planning and Control
1.0 Responsibilities and Authority
1.1 Standards, Procedures, and Training
1.2 Program Risk Analysis
1.3 Work Breakdown Structures
1.4 Program Review
1.5 Technical Reviews
1.6 Technical Performance Measurements
1.7 Change Control Procedures
1.8 Engineering Program Integration
1.9 Interface Control
1.10 Milestones/Schedule
1.11 Other Plans and Controls
Part 2 - Systems Engineering Process
2.0 Mission and Requirements Analysis
2.1 Functional Analysis
2.2 Requirements Allocation
2.3 Trade Studies
2.4 Design Optimization/Effectiveness Compatibility
2.5 Synthesis
2.6 Technical Interface Compatibility
2.7 Logistic Support Analysis
2.8 Producibility Analysis
2.9 Specification Tree/Specifications
2.10 Documentation
2.11 Systems Engineering Tools
Part 3 - Engineering Specialty/Integration Requirements
3.1 Integration Design/Plans
3.1.1 Reliability
3.1.2 Maintainability
3.1.3 Human Engineering
3.1.4 Safety
3.1.5 Standardization
3.1.6 Survivability/Vulnerability
3.1.7 Electromagnetic Compatibility/Interference
3.1.8 Electromagnetic Pulse Hardening
3.1.9 Integrated Logistics Support
3.1.10 Computer Resources Lifecycle Management Plan
3.1.11 Producibility
3.1.12 Other Engineering Specialty Requirements/Plans
3.2 Integration System Test Plans
3.3 Compatibility with Supporting Activities
3.3.1 System Cost-Effectiveness
3.3.2 Value Engineering
3.3.3 TQM/Quality Assurance
3.3.4 Materials and Processes
Appendix B.2—A "Tailored" WBS for an Airborne Telescope

Figure B-1 shows a partial Product Breakdown Structure (PBS) for the proposed Stratospheric Observatory for Infrared Astronomy (SOFIA), a 747SP aircraft outfitted with a 2.5 to 3.0 m telescope. The PBS has been elaborated for the airborne facility's telescope element. The PBS level names have been made consistent with the sidebar on page 3 of this handbook.

Figures B-2 through B-5 show the corresponding Work Breakdown Structures (WBSs) based on the principles in Section 4.3 of this handbook. At each level, the prime product deliverables from the PBS are WBS elements. The WBS is completed at each level by adding needed service (i.e., functional) elements such as management, systems engineering, integration and test, etc. The integration and test WBS element at each level refers to the activities of unifying prime product deliverables at that level.

Although the SOFIA project is used as an illustration in this appendix, the SOFIA WBS should be tailored to fit actual conditions at the start of Phase C/D as determined by the project manager. One example of a condition that could substantially change the WBS is international participation in the project.
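The pattern described above (take the prime product deliverables from the PBS at each level, then add the service elements such as management, systems engineering, and integration and test) can be expressed as a simple tree transformation. The sketch below is illustrative only; the element names are assumptions and do not reproduce the actual SOFIA WBS.

```python
# Illustrative sketch (not the actual SOFIA WBS): form a WBS level by adding
# service elements to the prime product deliverables taken from the PBS.
SERVICE_ELEMENTS = ["Management", "Systems Engineering", "Integration and Test"]

def wbs_level(pbs_children):
    """Return the WBS elements for one level: PBS deliverables plus services."""
    return list(pbs_children) + SERVICE_ELEMENTS

# Hypothetical PBS fragment for an airborne facility's telescope element
pbs = {"Telescope Element": ["Optical Assembly", "Telescope Structure", "Pointing Control"]}

for parent, children in pbs.items():
    print(parent, "->", wbs_level(children))
# Telescope Element -> ['Optical Assembly', 'Telescope Structure', 'Pointing Control',
#                       'Management', 'Systems Engineering', 'Integration and Test']
```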
[Figures B-1 through B-5 (the SOFIA PBS and WBS charts referenced above) are not reproduced here.]
Appendix B.4—A Sample Risk Management Plan Outline

1.0 Introduction
1.1 Purpose and Scope of the RMP
1.2 Applicable Documents and Definitions
1.3 Program/Project (or System) Description
2.0 Risk Management Approach
2.1 Risk Management Philosophy/Overview
2.2 Management Organization and Responsibilities
2.3 Schedule, Milestones, and Reviews
2.4 Related Program Plans
2.5 Subcontractor Risk Management
2.6 Program/Project Risk Metrics
3.0 Risk Management Methodologies, Processes, and Tools
3.1 Risk Identification and Characterization
3.2 Risk Analysis
3.3 Risk Mitigation and Tracking
4.0 Significant Identified Risks*
4.1 Technical Risks
4.2 Programmatic Risks
4.3 Supportability Risks
4.4 Cost Risks
4.5 Schedule Risks

* Each subsection contains risk descriptions, characterizations, analysis results, mitigation actions, and reporting metrics.
Appendix B.6—A Sample Configuration Management Plan Outline

1.0 Introduction
1.1 Description of the CIs
1.2 Program Phasing and Milestones
1.3 Special Features
2.0 Organization
2.1 Structure and Tools
2.2 Authority and Responsibility
2.3 Directives and Reference Documents
3.0 Configuration Identification
3.1 Baselines
3.2 Specifications
4.0 Configuration Control
4.1 Baseline Release
4.2 Procedures
4.3 CI Audits
5.0 Interface Management
5.1 Documentation
5.2 Interface Control
6.0 Configuration Traceability
6.1 Nomenclature and Numbering
6.2 Hardware Identification
6.3 Software and Firmware Identification
7.0 Configuration Status Accounting and Communications
7.1 Data Bank Description
7.2 Data Bank Content
7.3 Reporting
8.0 Configuration Management Audits
9.0 Subcontractor/Vendor Control
Appendix B.7—Techniques of Functional Analysis

Appendix B.7 is reproduced from the Defense Systems Management Guide, published January 1990 by the Defense Systems Management College, Ft. Belvoir, VA.

•••

System requirements are analyzed to identify those functions which must be performed to satisfy the objectives of each functional area. Each function is identified and described in terms of inputs, outputs, and interface requirements from top down so that subfunctions are recognized as part of larger functional areas. Functions are arranged in a logical sequence so that any specified operational usage of the system can be traced in an end-to-end path. Although there are many tools available, functional identification is accomplished primarily through the use of 1) functional flow block diagrams (FFBDs) to depict task sequences and relationships, 2) N² diagrams to develop data interfaces, and 3) time line analyses to depict the time sequence of time-critical functions.

B.7.1 Functional Flow Block Diagrams

The purpose of the FFBD is to indicate the sequential relationship of all functions that must be accomplished by a system. FFBDs depict the time sequence of functional events. That is, each function (represented by a block) occurs following the preceding function. Some functions may be performed in parallel, or alternate paths may be taken. The duration of the function and the time between functions is not shown, and may vary from a fraction of a second to many weeks. The FFBDs are function oriented, not equipment oriented. In other words, they identify "what" must happen and do not assume a particular answer to "how" a function will be performed.

FFBDs are developed in a series of levels. FFBDs show the same tasks identified through functional decomposition and display them in their logical, sequential relationship. For example, the entire flight mission of a spacecraft can be defined in a top level FFBD, as shown in Figure B-6. Each block in the first level diagram can then be expanded to a series of functions, as shown in the second level diagram for "perform mission operations." Note that the diagram shows both input (transfer to operational orbit) and output (transfer to space transportation system orbit), thus initiating the interface identification and control process. Each block in the second level diagram can be progressively developed into a series of functions, as shown in the third level diagram on Figure B-6. These diagrams are used both to develop requirements and to identify profitable trade studies. For example, does the spacecraft antenna acquire the tracking and data relay satellite (TDRS) only when the payload data are to be transmitted, or does it track TDRS continually to allow for the reception of emergency commands or transmission of emergency data? The FFBD also incorporates alternate and contingency operations, which improve the probability of mission success. The flow diagram provides an understanding of total operation of the system, serves as a basis for development of operational and contingency procedures, and pinpoints areas where changes in operational procedures could simplify the overall system operation. In certain cases, alternate FFBDs may be used to represent various means of satisfying a particular function until data are acquired, which permits selection among the alternatives.
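An FFBD is essentially a directed graph of functions, so a simple adjacency representation is enough to capture "what" must happen and in what order. The sketch below is illustrative only; the function names are assumptions loosely modeled on the spacecraft example above, not the contents of Figure B-6.

```python
# Illustrative FFBD sketch (hypothetical functions): a directed graph of
# functions, with a check that every function can be reached from the start,
# i.e., the mission can be traced end to end.
ffbd = {
    "1.0 Ascend to orbit":               ["2.0 Transfer to operational orbit"],
    "2.0 Transfer to operational orbit": ["3.0 Perform mission operations"],
    "3.0 Perform mission operations":    ["4.0 Transfer to STS orbit"],
    "4.0 Transfer to STS orbit":         [],
}

def reachable(graph, start):
    """Return the set of functions reachable from the starting function."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return seen

print(sorted(reachable(ffbd, "1.0 Ascend to orbit")))   # all four functions
```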
The N2 chart can be taken downassume a particular answer to "how" a function will be into successively lower levels to the hardware andperformed. software component functional levels. In addition to FFBDs are developed in a series of levels. defining the data that must be supplied across theFFBDs show the same tasks identified through interface, the N2 chart can pinpoint areas wherefunctional decomposition and display them in their conflicts could arise.logical, sequential relationship. For example, theentire flight mission of a spacecraft can be defined ina top level FFBD, as shown in Figure B-6. Each blockin the first level diagram can then be expanded to aseries of functions, as shown in the second leveldiagram for "perform mission operations." Note thatthe diagram shows both input (transfer to operationalorbit) and output (transfer to space transportationsystem orbit), thus initiating the interface identificationand control process. Each block in the second leveldiagram can be progressively developed into a seriesof functions, as shown in the third level diagram onFigure B-6. These diagrams are used both to developrequirements and to identify profitable trade studies.For example, does the spacecraft antenna acquire thetracking and data relay satellite (TDRS) only when thepayload data are to be transmitted, or does it track
B.7.3 Time Line Analysis

Time line analysis adds consideration of functional durations and is used to support the development of design requirements for operation, test, and maintenance functions. The time line sheet (TLS) is used to perform and record the analysis of time-critical functions and functional sequences. Additional tools, such as mathematical models and computer simulations, may be necessary. Time line analysis is performed on those areas where time is critical to mission success, safety, utilization of resources, minimization of down time, and/or increasing availability. Not all functional sequences require time line analysis, only those in which time is a critical factor. The following areas are often categorized as time critical: 1) functions affecting system reaction time, 2) mission turnaround time, 3) time countdown activities, and 4) functions requiring time line analysis to determine optimum equipment and/or personnel utilization. An example of a high-level TLS for a space program is shown in Figure B-9. For time-critical function sequences, the time requirements are specified with associated tolerances. Time line analyses play an important role in the trade-off process between man and machine. The decisions between automatic and manual methods will be made and will determine what times are allocated to what subfunctions. In addition to defining subsystem/component time requirements, time line analysis can be used to develop trade studies in areas other than time considerations (e.g., should the spacecraft location be determined by the ground network or by onboard computation using navigation satellite inputs?). Figure B-10 is an example of a maintenance TLS, which illustrates that the availability of an item (a distiller) is dependent upon the completion of numerous maintenance tasks accomplished concurrently. Furthermore, it illustrates the traceability to higher level requirements by referencing the appropriate FFBD and requirement allocation sheet (RAS).
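The scheduling logic behind a maintenance TLS can be illustrated with a minimal sketch. The task names and durations below are hypothetical (they are not the values in Figure B-10); the point is only that the item's down time is driven by the longest of the concurrent tasks plus any serial follow-on work:

```python
# Minimal time line sketch for concurrent maintenance of a single item.
# Task names and durations (in hours) are hypothetical.
tasks = {
    "remove access panel": 0.5,
    "replace filter ORU":  1.5,
    "flush and refill":    2.0,
    "functional checkout": 1.0,   # must follow the three tasks above
}

concurrent = ["remove access panel", "replace filter ORU", "flush and refill"]

# The item becomes available when the longest concurrent task and the
# serial checkout are both complete.
turnaround = max(tasks[t] for t in concurrent) + tasks["functional checkout"]
print(f"Item unavailable for {turnaround:.1f} hours")   # 3.0 hours
```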
Appendix B.8—The Effect of Changes in ORU MTBF on Space Station Freedom Operations

The reliability of Space Station Freedom's (SSF) Orbital Replacement Units (ORUs) has a profound effect on its operations costs. This reliability is measured by the Mean Time Between Failures (MTBF). One study of the effects, by Dr. William F. Fisher and Charles Price, was the SSF External Maintenance Task Team Final Report (JSC, July 1990). Another, by Anne Accola, et al., shows these effects parametrically. Appendix B.8 excerpts this paper, Sensitivity Study of SSF Operations Costs and Selected User Resources (presented at the International Academy of Astronautics Symposium on Space Systems Costs Methodologies and Applications, May 1990).

•••

There are many potential tradeoffs that can be performed during the design stage of SSF. Many of them have major implications for crew safety, operations cost, and achievement of mission goals. Operations costs and important non-cost operations parameters are examined. One example of a specific area of concern in design is the reliability of the ORUs that comprise SSF. The implications of ORU reliability on logistics upmass and downmass to and from SSF are great, thus affecting the resources available for utilization and for other operations activities. In addition, the implications of reliability on crew time available for mission accomplishment (i.e., experiments) vs. station maintenance are important.

The MTBF effect on operations cost is shown in Figure B-11. Repair and spares costs are influenced greatly by varying MTBF. Repair costs are inversely proportional to MTBF, as are replacement spares. The initial spares costs are also influenced by variables other than MTBF. The combined spares cost, consisting of initial and replacement spares, is not as greatly affected as are repair costs. The five-year operations cost is increased by only ten percent if all ORU MTBFs are halved. The total operations cost is reduced by three percent if all ORU MTBFs are doubled. It would almost appear that MTBF is not as important as one would think. However, MTBF also affects available crew time and available upmass much more than operations cost, as shown in Figures B-12 and B-13.

Available crew time is a valuable commodity because it is a limited resource. Doubling the number of ORU replacements (by decreasing the MTBF) increases the maintenance crew time by 50 percent, thus reducing the amount of time available to perform useful experiments or scientific work by 22 percent. By halving the ORU replacements, the maintenance crew time decreases by 20 percent and the available crew time increases by eight percent.

Available upmass is another valuable resource because a fixed number of Space Shuttle flights can transport only a fixed amount of payload to the SSF. Extra ORUs taken to orbit reduce the available upmass that could be used to take up experimental payloads. Essentially, by doubling the number of ORU replacements, the available upmass is driven to zero. Conversely, halving the number of ORU replacements increases the available upmass by 30 percent.
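The inverse relationship between MTBF and logistics demand that underlies these results can be shown with a short sketch; the ORU count, MTBF values, and constant-failure-rate assumption below are hypothetical and are not Space Station Freedom values:

```python
# Sketch of how ORU MTBF drives expected replacement (and hence repair and
# spares) demand. All numbers are hypothetical.
HOURS_PER_YEAR = 8760

def expected_replacements(n_orus, mtbf_hours, years=5):
    """Expected ORU replacements over the period, assuming each installed
    ORU fails at a constant rate of 1/MTBF per operating hour."""
    return n_orus * years * HOURS_PER_YEAR / mtbf_hours

baseline = expected_replacements(n_orus=100, mtbf_hours=50_000)
halved   = expected_replacements(n_orus=100, mtbf_hours=25_000)
doubled  = expected_replacements(n_orus=100, mtbf_hours=100_000)

print(f"baseline MTBF: {baseline:.0f} replacements over 5 years")
print(f"MTBF halved:   {halved:.0f} (replacement demand doubles)")
print(f"MTBF doubled:  {doubled:.0f} (replacement demand halves)")
```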
Although the effects of MTBF on resources are interesting, it is a good idea to quantify the effectiveness of the scenarios based on the total cost to maintain the nominal resources. Figure B-14 shows the number of crew members needed each year to maintain the available crew time. The figure shows that to maintain the nominal available crew time after doubling the number of ORU replacements, the Station would need two extra crew members. It should be noted that no attempt was made to assess the design capability or design cost impacts to accommodate these extra crew members. The savings of crew due to halving the number of ORU replacements is small, effectively one less crew member for half the year.

Figure B-15 shows the number of Space Shuttle flights over five years needed to maintain the nominal available upmass. The Space Shuttle flights were rounded upward to obtain whole flights. Doubling the number of ORU replacements would mean eight extra Space Shuttle flights would be needed over five years. Halving the ORU replacements would require two fewer Space Shuttle flights over five years. No attempt was made to assess the Space Shuttle's capability to provide the extra flights or the design cost impacts to create the ORUs with the different reliabilities.

Figure B-16 shows the effect of assessing the cost impact of the previous two figures and combining them with the five-year operations cost. The influence of MTBF is effectively doubled when the resources of available upmass and crew time are maintained at their nominal values.
Appendix B.9—An Example of a Verification Requirements Matrix

Appendix B.9 is a small portion of the Verification Requirements Matrix (VRM) from the National Launch System Level III System Requirements Document, originally published at the Marshall Space Flight Center. Its purpose here is to illustrate the content and one possible format for a VRM. The VRM coding key for this example is shown on the next page.
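One possible electronic representation of VRM content is sketched below; the requirement identifiers, verification methods, and levels are hypothetical and are not taken from the National Launch System document:

```python
# Illustrative Verification Requirements Matrix (VRM) rows. Each row maps a
# requirement to a verification method, level, and program phase.
vrm = [
    # (requirement ID, method, level, phase) -- all values hypothetical
    ("3.2.1.1", "Test",          "Component", "Qualification"),
    ("3.2.1.2", "Analysis",      "System",    "Qualification"),
    ("3.2.4.3", "Inspection",    "Component", "Acceptance"),
    ("3.3.2.1", "Demonstration", "System",    "Acceptance"),
]

def by_method(rows):
    """Group requirements by verification method."""
    groups = {}
    for req_id, method, level, phase in rows:
        groups.setdefault(method, []).append(f"{req_id} ({level}, {phase})")
    return groups

for method, reqs in sorted(by_method(vrm).items()):
    print(f"{method}: {'; '.join(reqs)}")
```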
Appendix B.10—A Sample Master Verification Plan Outline

1.0 Introduction
1.1 Scope
1.2 Applicable Documents
1.3 Document Maintenance and Control
2.0 Program/Project Description
2.1 Program/Project Overview and Verification Master Schedule
2.2 Systems Descriptions
2.3 Subsystems Descriptions
3.0 Integration and Verification (I&V) Organization and Staffing
3.1 Program/Project Management Offices
3.2 NASA Field Center I&V Organizations
3.3 International Partner I&V Organizations
3.4 Prime Contractor I&V Organization
3.5 Subcontractor I&V Organizations
4.0 Verification Team Operational Relationships
4.1 Verification Team Scheduling and Review Meetings
4.2 Verification and Design Reviews
4.3 Data Discrepancy Reporting and Resolution Procedures
5.0 Systems Qualification Verification
5.1 Tests*
5.2 Analyses
5.3 Inspections
5.4 Demonstrations
6.0 Systems Acceptance Verification
6.1 Tests*
6.2 Analyses
6.3 Inspections
6.4 Demonstrations
7.0 Launch Site Verification
8.0 On-Orbit Verification
9.0 Post-Mission/Disposal Verification
10.0 Verification Documentation
11.0 Verification Methodology
12.0 Support Equipment
12.1 Ground Support Equipment
12.2 Flight Support Equipment
12.3 Transportation, Handling, and Other Logistics Support
12.4 TDRSS/NASCOM Support
13.0 Facilities

* This section contains subsections for each type of test, e.g., EMI/EMC, mechanisms, thermal/vacuum. This further division by type applies also to analyses, inspections, and demonstrations.
Appendix C—Use of the Metric System

C.1 NASA Policy

It is NASA policy (see NMI 8010.2A and NHB 7120.5) to:

Adopt the International System of Units, known by the international abbreviation SI and defined by ANSI/IEEE Std 268-1992, as the preferred system of weights and measurements for all major system development programs.
Use the metric system in procurements, grants and business-related activities to the extent economically feasible.
Permit continued use of the inch-pound system of measurement for existing systems.
Permit hybrid metric and inch-pound systems when full use of metric units is impractical or will compromise safety or performance.

C.2 Definitions of Units

Parts of Appendix C are reprinted from IEEE Std 268-1992, American National Standard for Metric Practice, Copyright © 1992 by the Institute of Electrical and Electronics Engineers, Inc. The IEEE disclaims any responsibility or liability resulting from the placement and use in this publication. Information is reprinted with the permission of the IEEE.

•••

Outside the United States, the comma is widely used as a decimal marker. In some applications, therefore, the common practice in the United States of using the comma to separate digits into groups of three (as in 23,478) may cause ambiguity. To avoid this potential source of confusion, recommended international practice calls for separating the digits into groups of three, counting from the decimal point toward the left and the right, and using a thin space to separate the groups. In numbers of four digits on either side of the decimal point the space is usually not necessary, except for uniformity in tables.

C.2.1 SI Prefixes

The names of multiples and submultiples of SI units may be formed by application of the prefixes and symbols shown in the sidebar. (The unit of mass, the kilogram, is the only exception; for historical reasons, the gram is used as the base for construction of names.)

Prefixes for SI Units

  Factor   Prefix   Sym.   Pronunciation
  10^24    yotta    Y      YOTT-a (a as in about)
  10^21    zetta    Z      ZETT-a (a as in about)
  10^18    exa      E      EX-a (a as in about)
  10^15    peta     P      PET-a (as in petal)
  10^12    tera     T      TERR-a (as in terrace)
  10^9     giga     G      GIG-a (g as in giggle, a as in about)
  10^6     mega     M      MEG-a (as in megaphone)
  10^3     kilo     k      KILL-oh**
  10^2     hecto*   h      HECK-toe
  10^1     deka*    da     DECK-a (as in decahedron)
  10^-1    deci*    d      DESS-ih (as in decimal)
  10^-2    centi*   c      SENT-ih (as in centipede)
  10^-3    milli    m      MILL-ih (as in military)
  10^-6    micro    µ      MIKE-roe (as in microphone)
  10^-9    nano     n      NAN-oh (a as in ant)
  10^-12   pico     p      PEEK-oh
  10^-15   femto    f      FEM-toe
  10^-18   atto     a      AT-toe (a as in hat)
  10^-21   zepto    z      ZEP-toe (e as in step)
  10^-24   yocto    y      YOCK-toe

  * The prefixes that do not represent 1000 raised to a power (that is hecto, deka, deci, and centi) should be avoided where practical.
  ** The first syllable of every prefix is accented to assure that the prefix will retain its identity. Kilometer is not an exception.

C.2.2 Base SI Units

ampere (A) The ampere is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross section, and placed one meter apart in vacuum, would produce between these conductors a force equal to 2 x 10^-7 newton per meter of length.

candela (cd) The candela is the luminous intensity, in a given direction, of a source that emits monochromatic radiation of frequency 540 x 10^12 Hz and that has a radiant intensity in that direction of 1/683 watt per steradian.

kelvin (K) The kelvin, unit of thermodynamic temperature, is the fraction 1/273.16 of the thermodynamic temperature of the triple point of water.

kilogram (kg) The kilogram is the unit of mass; it is equal to the mass of the international prototype of the kilogram. (The international prototype of the kilogram is a particular cylinder of platinum-iridium alloy which is preserved in a vault at Sevres, France, by the International Bureau of Weights and Measures.)

meter (m) The meter is the length of the path traveled by light in a vacuum during a time interval of 1/299 792 458 of a second.

mole (mol) The mole is the amount of substance of a system which contains as many elementary entities as there are atoms in 0.012 kilogram of carbon-12. Note: When the mole is used, the elementary entities must be specified and may be atoms, molecules, ions, electrons, other particles, or specified groups of such particles.
second (s) The second is the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the cesium-133 atom.

C.2.3 Supplementary SI Units

radian (rad) The radian is the plane angle between two radii of a circle that cut off on the circumference an arc equal in length to the radius.

steradian (sr) The steradian is the solid angle that, having its vertex in the center of a sphere, cuts off an area of the surface of the sphere equal to that of a square with sides of length equal to the radius of the sphere.

C.2.4 Derived SI Units with Special Names

In addition to the units defined in this subsection, many quantities are measured in terms of derived units which do not have special names—such as velocity in m/s, electric field strength in V/m, entropy in J/K.

becquerel (Bq = 1/s) The becquerel is the activity of a radionuclide decaying at the rate of one spontaneous nuclear transition per second.

degree Celsius (°C = K) The degree Celsius is equal to the kelvin and is used in place of the kelvin for expressing Celsius temperature defined by the equation t = T - T0, where t is the Celsius temperature, T is the thermodynamic temperature, and T0 = 273.15 K (by definition).

coulomb (C = A·s) Electric charge is the time integral of electric current; its unit, the coulomb, is equal to one ampere second.

farad (F = C/V) The farad is the capacitance of a capacitor between the plates of which there appears a difference of potential of one volt when it is charged by a quantity of electricity equal to one coulomb.

gray (Gy = J/kg) The gray is the absorbed dose when the energy per unit mass imparted to matter by ionizing radiation is one joule per kilogram. (The gray is also used for the ionizing radiation quantities: specific energy imparted, kerma, and absorbed dose index.)

henry (H = Wb/A) The henry is the inductance of a closed circuit in which an electromotive force of one volt is produced when the electric current in the circuit varies uniformly at a rate of one ampere per second.

hertz (Hz = 1/s) The hertz is the frequency of a periodic phenomenon of which the period is one second.

joule (J = N·m) The joule is the work done when the point of application of a force of one newton is displaced a distance of one meter in the direction of the force.

lumen (lm = cd·sr) The lumen is the luminous flux emitted in a solid angle of one steradian by a point source having a uniform intensity of one candela.

lux (lx = lm/m^2) The lux is the illuminance produced by a luminous flux of one lumen uniformly distributed over a surface of one square meter.

newton (N = kg·m/s^2) The newton is that force which, when applied to a body having a mass of one kilogram, gives it an acceleration of one meter per second squared.

ohm (Ω = V/A) The ohm is the electric resistance between two points of a conductor when a constant difference of potential of one volt, applied between these two points, produces in this conductor a current of one ampere, this conductor not being the source of any electromotive force.

pascal (Pa = N/m^2) The pascal [which, in the preferred pronunciation, rhymes with rascal] is the pressure or stress of one newton per square meter.

siemens (S = A/V) The siemens is the electric conductance of a conductor in which a current of one ampere is produced by an electric potential difference of one volt.

sievert (Sv = J/kg) The sievert is the dose equivalent when the absorbed dose of ionizing radiation multiplied by the dimensionless factors Q (quality factor) and N (product of any other multiplying factors) stipulated by the International Commission on Radiological Protection is one joule per kilogram.

tesla (T = Wb/m^2) The tesla is the magnetic flux density of one weber per square meter. In an alternative approach to defining the magnetic field quantities, the tesla may also be defined as the magnetic flux density that produces on a one-meter length of wire carrying a current of one ampere, oriented normal to the flux density, a force of one newton, magnetic flux density being defined as an axial vector quantity such that the force exerted on an element of current is equal to the vector product of this element and the magnetic flux density.

volt (V = W/A) The volt (unit of electric potential difference and electromotive force) is the difference of electric potential between two points of a conductor carrying a constant current of one ampere, when the power dissipated between these points is equal to one watt.

watt (W = J/s) The watt is the power that represents a rate of energy transfer of one joule per second.
weber (Wb = V·s) The weber is the magnetic flux that, linking a circuit of one turn, produces in it an electromotive force of one volt as it is reduced to zero at a uniform rate in one second.

C.2.5 Units in Use with SI

Time The SI unit of time is the second. This unit is preferred and should be used if practical, particularly when technical calculations are involved. In cases where time relates to life customs or calendar cycles, the minute, hour, day and other calendar units may be necessary. For example, vehicle speed will normally be expressed in kilometers per hour.

minute (min) 1 min = 60 s
hour (h) 1 h = 60 min = 3600 s
day (d) 1 d = 24 h = 86 400 s
week, month, etc.

Plane angle The SI unit for plane angle is the radian. Use of the degree and its decimal submultiples is permissible when the radian is not a convenient unit. Use of the minute and second is discouraged except for special fields such as astronomy and cartography.

degree (°) 1° = (π/180) rad
minute (′) 1′ = (1/60)° = (π/10 800) rad
second (″) 1″ = (1/60)′ = (π/648 000) rad

Area The SI unit of area is the square meter (m^2). The hectare (ha) is a special name for the square hectometer (hm^2). Large land or water areas are generally expressed in hectares or in square kilometers (km^2).

Volume The SI unit of volume is the cubic meter. This unit, or one of the regularly formed multiples such as the cubic centimeter, is preferred. The special name liter has been approved for the cubic decimeter, but use of this unit is restricted to volumetric capacity, dry measure, and measure of fluids (both liquids and gases). No prefix other than milli- or micro- should be used with liter.

Mass The SI unit of mass is the kilogram. This unit, or one of the multiples formed by attaching an SI prefix to gram (g), is preferred for all applications. The megagram (Mg) is the appropriate unit for measuring large masses such as have been expressed in tons. However, the name ton has been given to several large mass units that are widely used in commerce and technology: the long ton of 2240 lb, the short ton of 2000 lb, and the metric ton of 1000 kg (also called tonne outside the USA), which is almost 2205 lb. None of these terms are SI. The term metric ton should be restricted to commercial usage, and no prefixes should be used with it. Use of the term tonne is deprecated.

Others The ANSI/IEEE standard lists the kilowatthour (1 kWh = 3.6 MJ) in the category of "Units in Use with SI Temporarily". The SI unit of energy, the joule, together with its multiples, is preferred for all applications. The kilowatthour is widely used, however, as a measure of electric energy. This unit should not be introduced into any new areas, and eventually it should be replaced by the megajoule. In that same "temporary" category, the standard also defines the barn (1 b = 10^-28 m^2) for cross section, the bar (1 bar = 10^5 Pa) for pressure, the curie (1 Ci = 3.7 x 10^10 Bq) for radionuclide activity, the roentgen (1 R = 2.58 x 10^-4 C/kg) for X- and gamma-ray exposure, the rem for dose equivalent (1 rem = 0.01 Sv), and the rad (1 rd = 0.01 Gy) for absorbed dose.
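The prefixes tabulated in Section C.2.1 can be applied mechanically when reporting values; the helper below is an illustrative sketch (it omits hecto, deka, deci, and centi, consistent with the note in the table) and is not part of the standard:

```python
# Scale a value in a base SI unit so the numeric part falls between 1 and
# 1000, using the preferred prefixes (powers of 1000 only).
SI_PREFIXES = [  # (factor, symbol), largest first
    (1e24, "Y"), (1e21, "Z"), (1e18, "E"), (1e15, "P"), (1e12, "T"),
    (1e9, "G"), (1e6, "M"), (1e3, "k"), (1.0, ""),
    (1e-3, "m"), (1e-6, "µ"), (1e-9, "n"), (1e-12, "p"),
    (1e-15, "f"), (1e-18, "a"), (1e-21, "z"), (1e-24, "y"),
]

def select_prefix(value, unit):
    """Format a nonzero value with the largest prefix not exceeding it."""
    for factor, symbol in SI_PREFIXES:
        if abs(value) >= factor:
            return f"{value / factor:g} {symbol}{unit}"
    return f"{value:g} {unit}"

print(select_prefix(299_792_458.0, "m/s"))   # 299.792 Mm/s
print(select_prefix(0.000_52, "A"))          # 520 µA
```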
C.3 Conversion Factors

One of the many places a complete set of conversion factors can be found is in ANSI/IEEE Std 268-1992. The abridged set given here is taken from that reference. Symbols of SI units are given in bold face type and in parentheses. Factors with an asterisk (*) between the number and its power of ten are exact by definition. To conform with the international practice, this section uses spaces -- rather than commas -- in number groups.
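As an illustration, a few of the commonly used conversion factors that are exact by definition can be applied as simple multipliers; this is only an illustrative subset, not the abridged table from ANSI/IEEE Std 268-1992:

```python
# A few exact inch-pound-to-SI conversion factors (exact by definition).
TO_SI = {
    "inch -> meter (m)":            0.0254,
    "foot -> meter (m)":            0.3048,
    "mile -> meter (m)":            1609.344,
    "pound-mass -> kilogram (kg)":  0.453_592_37,
}

def convert(value, factor_name):
    """Multiply a value in the inch-pound unit by its SI conversion factor."""
    return value * TO_SI[factor_name]

print(f"{convert(12.0, 'inch -> meter (m)'):.4f} m")            # 0.3048 m
print(f"{convert(1.0, 'pound-mass -> kilogram (kg)'):.8f} kg")  # 0.45359237 kg
```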
Bibliography

Air Force, Department of the, Producibility Measurement for DoD Contracts, Office of the Assistant for Reliability, Maintainability, Manufacturing, and Quality, SAF/AQXE, Washington, DC, 1992. Referred to on page(s) 111.
Air Force, Department of the, Unmanned Space Vehicle Cost Model, Seventh Edition, SD TR-88-97, Air Force Systems Command/Space Division (P. Nguyen, et al.), August 1994. Referred to on page(s) 81.
Agrawal, Brij N., Design of Geosynchronous Spacecraft, Prentice-Hall, Inc., Englewood Cliffs, NJ 07632, 1986. Referred to on page(s) 1.
Armstrong, J.E., and Andrew P. Sage, An Introduction to Systems Engineering, John Wiley & Sons, New York, 1995. Referred to on page(s) 1.
Army, Department of the, Logistics Support Analysis Techniques Guide, Pamphlet No. 700-4, Army Materiel Command, 1985. Referred to on page(s) 102.
Asher, Harold, Cost-Quantity Relationships in the Airframe Industry, R-291, The Rand Corporation, 1956. Referred to on page(s) 82.
Barclay, Scott, et al., Handbook for Decision Analysis, Decisions and Designs, Inc., McLean, VA, September 1977. Referred to on page(s) 43.
Blanchard, B.S., and W.J. Fabrycky, Systems Engineering and Analysis (2nd Edition), Prentice-Hall, Inc., Englewood Cliffs, NJ, 1990. Referred to on page(s) 1.
Blanchard, B.S., System Engineering Management, John Wiley & Sons, Inc., New York, 1991. Referred to on page(s) 1.
Boden, Daryl, and Wiley J. Larson (eds), Cost-Effective Space Mission Operations, publication scheduled for August 1995. Referred to on page(s) 1.
Boehm, Barry W., "A Spiral Model of Software Development and Enhancement", Computer, pp. 61-72, May 1988. Referred to on page(s) 13.
Carter, Donald E., and Barbara Stilwell Baker, Concurrent Engineering: The Product Development Environment for the 1990s, Addison-Wesley Publishing Co., Inc., Reading, MA, 1992. Referred to on page(s) 25.
Chamberlain, Robert G., George Fox and William H. Duguette, A Design Optimization Process for Space Station Freedom, JPL Publication 90-23, June 15, 1990. Referred to on page(s) 18.
Chestnut, Harold, Systems Engineering Tools, John Wiley & Sons, Inc., New York, 1965. Referred to on page(s) 1.
Chestnut, Harold, Systems Engineering Methods, John Wiley & Sons, Inc., New York, 1965. Referred to on page(s) 1.
Churchman, C. West, Russell L. Ackoff and E. Leonard Arnoff, Introduction to Operations Research, John Wiley & Sons, Inc., New York, 1957.
Clark, Philip, et al., How CALS Can Improve the DoD Weapon System Acquisition Process, PL207RI, Logistics Management Institute, November 1992. Referred to on page(s) 103.
Committee on Space Research, COSPAR Information Bulletin No. 38 (pp. 3-10), June 1967. Referred to on page(s) 115.
Defense, Department of, Transition from Development to Production, DoD 4245.7-M, 1985. Referred to on page(s) 40, 111.
Defense, Department of, Metric System, Application in New Design, DOD-STD-1476A, 19 November 1986.
Defense, Department of, Logistics Support Analysis and Logistics Support Analysis Record, MIL-STD-1388-1A/2B, revised 21 January 1993. Referred to on page(s) 86, 100, 102.
Defense, Department of, Failure Modes, Effects, and Criticality Analysis, MIL-STD-1629A. Referred to on page(s) 41.
Defense Systems Management College, Scheduling Guide for Program Managers, GPO #008-020-01196-1, January 1990.
Defense Systems Management College, Test and Evaluation Management Guide, GPO #008-020-01303-0, 1993.
Defense Systems Management College, Integrated Logistics Support (ILS) Guide, DTIC/NTIS #ADA 171-087, 1993.
Defense Systems Management College, Systems Engineering Management Guide, DTIC/NTIS #ADA223-168, 1994. Referred to on page(s) 1, 40.
Defense Systems Management College, Risk Management: Concepts and Guidance, GPO #008-020-01164-9, 1994.
DeJulio, E., SIMSYLS Users Guide, Boeing Aerospace Operations, February 1990. Referred to on page(s) 87.
de Neufville, R., and J.H. Stafford, Systems Analysis for Engineers and Managers, McGraw-Hill, New York, 1971. Referred to on page(s) 1, 43.
Electronic Industries Association (EIA), Systems Engineering, EIA/IS-632, 2001 Pennsylvania Ave. NW, Washington, DC 20006, December 1994. Referred to on page(s) 1, 4.
Executive Order 11514, Protection and Enhancement of Environmental Quality, March 5, 1970, as amended by Executive Order 11991, May 24, 1977. Referred to on page(s) 112.
Executive Order 12114, Environmental Effects Abroad of Major Federal Actions, January 4, 1979. Referred to on page(s) 112.
Fisher, Gene H., Cost Considerations in Systems Analysis, American Elsevier Publishing Co. Inc., New York, 1971. (Also published as R-490-ASD, The Rand Corporation, December 1970.) Referred to on page(s) 1, 67.
Forsberg, Kevin, and Harold Mooz, "The Relationship of System Engineering to the Project Cycle", Center for Systems Management, 5333 Betsy Ross Dr., Santa Clara, CA 95054; also available in A Commitment to Success, Proceedings of the first annual meeting of the National Council for Systems Engineering and the 12th annual meeting of the American Society for Engineering Management, Chattanooga, TN, 20-23 October 1991. Referred to on page(s) 20.
Fortescue, Peter W., and John P.W. Stark (eds), Spacecraft Systems Engineering, John Wiley and Sons Ltd., Chichester, England, 1991. Referred to on page(s) 1.
Gavin, Joseph G., Jr. (interview with), Technology Review, Vol. 97, No. 5, July 1994. Referred to on page(s) 92.
GAO/AIMD-94-197R, September 30, 1994. Referred to on page(s) 103.
Goddard Space Flight Center, Goddard Multi-variable Instrument Cost Model (MICM), Research Note #90-1, Resource Analysis Office (Bernard Dixon and Paul Villone, authors), NASA/GSFC, May 1990. Referred to on page(s) 81.
Green, A.E., and A.J. Bourne, Reliability Technology, Wiley Interscience, 1972.
Griffin, Michael D., and James R. French, Space Vehicle Design, AIAA 90-8, American Institute of Aeronautics and Astronautics, c/o TASCO, P.O. Box 753, Waldorf, MD 20604-9961, 1990. Referred to on page(s) 1.
Hickman, J.W., et al., PRA Procedures Guide, The American Nuclear Society and The Institute of Electrical and Electronics Engineers, NUREG/CR-2300, Washington, DC, January 1983. Referred to on page(s) 42.
Hillier, F.S. and G.J. Lieberman, Introduction to Operations Research, 2nd Edition, Holden-Day, Inc., 1978.
Hodge, John, "The Importance of Cost Considerations in the Systems Engineering Process", in the NAL Monograph Series: Systems Engineering Papers, NASA Alumni League, 922 Pennsylvania Ave. S.E., Washington, DC 20003, 1990. Referred to on page(s) 14.
Hood, Maj. William C., A Handbook of Supply Inventory Models, Air Force Institute of Technology, School of Systems and Logistics, September 1987.
Hughes, Wayne P., Jr. (ed.), Military Modeling, Military Operations Research Society, Inc., 1984.
IEEE, American National Standard for Metric Practice, ANSI/IEEE Std 268-1992 (supersedes IEEE Std 268-1979 and ANSI Z210.1-1976), American National Standards Institute (ANSI), 1430 Broadway, New York, NY 10018. Referred to on page(s) 139.
IEEE, IEEE Trial-Use Standard for Application and Management of the Systems Engineering Process, IEEE Std 1220-1994, 345 E 47th St., New York, NY 10017, February 28, 1995. Referred to on page(s) 1.
Jet Propulsion Laboratory, The JPL System Development Management Guide, Version 1, JPL D-5000, NASA/JPL, November 15, 1989.
Johnson Space Center, NASA Systems Engineering Process for Programs and Projects, Version 1.0, JSC-49040, NASA/JSC, October 1994. Referred to on page(s) xi, 22.
Kececioglu, Dimitri, and Pantelis Vassiliou, 1992-1994 Reliability, Availability, and Maintainability Software Handbook, Reliability Engineering Program, University of Arizona, 1992. Referred to on page(s) 95.
Keeney, R.L., and H. Raiffa, Decisions with Multiple Objectives: Preferences and Value Tradeoffs, John Wiley and Sons, Inc., New York, 1976. Referred to on page(s) 76.
Kline, Robert, et al., The M-SPARE Model, Logistics Management Institute, NS901R1, March 1990. Referred to on page(s) 87.
Langley Research Center, Systems Engineering Handbook for In-House Space Flight Projects, LHB 7122.1, Systems Engineering Division, NASA/LaRC, August 1994.
Lano, R., The N2 Chart, TRW Inc., 1977. Referred to on page(s) 68, 127.
Larson, Wiley J., and James R. Wertz (eds), Space Mission Analysis and Design (2nd edition), published jointly by Microcosm, Inc., 2601 Airport Dr., Suite 230, Torrance, CA 90505 and Kluwer Academic Publishers, P.O. Box 17, 3300 AA Dordrecht, The Netherlands, 1992. Referred to on page(s) 1.
Larson, Wiley J., and James R. Wertz (eds), Reducing Space Mission Cost, publication scheduled for November 1995. Referred to on page(s) 1.
Leising, Charles J., "System Architecture", in System Engineering at JPL, course notes (contact: Judy Cobb, Jet Propulsion Laboratory), 1991. Referred to on page(s) 11.
Lexicon—Glossary of Technical Terms and Abbreviations Including Acronyms and Symbols, Draft/Version 1.0, produced for the NASA Program/Project Management Initiative, Office of Human Development (Code ND), NASA Headquarters, by DEF Enterprises, P.O. Box 590481, Houston, TX 77259, March 1990. Referred to on page(s) 117.
Marshall Space Flight Center, Program Risk Analysis Handbook, NASA TM-100311, Program Planning Office (R.G. Batson, author), NASA/MSFC, August 1987.
Marshall Space Flight Center, Systems Engineering Handbook, Volume 1—Overview and Processes; Volume 2—Tools, Techniques, and Lessons Learned, MSFC-HDBK-1912, Systems Analysis and Integration Laboratory, Systems Analysis Division, NASA/MSFC, February 1991. Referred to on page(s) 75, 89.
Marshall Space Flight Center, Verification Handbook, Volume 1—Verification Process; Volume 2—Verification Documentation Examples, MSFC-HDBK-2221, Systems Analysis and Integration Laboratory, Systems Integration Division, Systems Verification Branch, February 1994.
Marshall Space Flight Center, General Environmental Test Guidelines (GETG) for Protoflight Instruments and Experiments, MSFC-HDBK-670, Systems Analysis and Integration Laboratory, Systems Integration Division, Systems Verification Branch, February 1994. Referred to on page(s) 109.
McCormick, Norman, Reliability and Risk Analysis, Academic Press, Orlando, FL, 1981.
Miles, Ralph F., Jr. (ed.), Systems Concepts—Lectures on Contemporary Approaches to Systems, John Wiley & Sons, New York, 1973. Referred to on page(s) 1.
Moore, N., D. Ebbeler and M. Creager, "A Methodology for Probabilistic Prediction of Structural Failures of Launch Vehicle Propulsion Systems", American Institute of Aeronautics and Astronautics, 1990. Referred to on page(s) 89.
Morgan, M. Granger, and Max Henrion, Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis, Cambridge University Press, Cambridge, England, 1990.
Morris, Owen, "Systems Engineering and Integration and Management for NASA Manned Space Flight Programs", in NAL Monograph Series: Systems Engineering Papers, NASA Alumni League, 922 Pennsylvania Ave. S.E., Washington, DC 20003, 1990. Referred to on page(s) 9, 11.
NASA Headquarters, Initial OSSA Metrication Transition Plan, Office of Space Science and Applications (Code S), NASA/HQ, September 1990.
NASA Headquarters, NHB 1700.1B, NASA Safety Policy and Requirements Document, Office of Safety and Mission Assurance (Code Q), NASA/HQ, June 1993. Referred to on page(s) 55.
NASA Headquarters, NHB 5103.6B, Source Evaluation Board Handbook, Office of Procurement (Code H), NASA/HQ, October 1988. Referred to on page(s) 75.
NASA Headquarters, NHB 5300.4 (1A-1), Reliability Program Requirements for Aeronautical and Space System Contractors, Office of Safety and Mission Assurance (Code Q), NASA/HQ, January 1987. Referred to on page(s) 91, 92.
NASA Headquarters, NHB 5300.4 (1B), Quality Program Provisions for Aeronautical and Space System Contractors, Office of Safety and Mission Assurance (Code Q), NASA/HQ, April 1969. Referred to on page(s) 95.
NASA Headquarters, NHB 5300.4 (1E), Maintainability Program Requirements for Space Systems, Office of Safety and Mission Assurance (Code Q), NASA/HQ, March 1987. Referred to on page(s) 96, 100.
NASA Headquarters, NHB 7120.5 (with NMI 7120.4), Management of Major System Programs and Projects, Office of the Administrator (Code A), NASA/HQ, November 1993 (revision forthcoming). Referred to on page(s) ix, xi, 13, 14, 17, 99, 102, 139.
NASA Headquarters, NHB 8800.11, Implementing the Requirements of the National Environmental Policy Act, Office of Management Systems and Facilities (Code J), NASA/HQ, June 1988. Referred to on page(s) 113.
NASA Headquarters, NHB 8020.12B, Planetary Protection Provisions for Robotic Extraterrestrial Missions, Office of Space Science (Code S), NASA/HQ, forthcoming. Referred to on page(s) 115.
NASA Headquarters, NHB 9501.2B, Procedures for Contractor Reporting of Correlated Cost and Performance Data, Financial Management Division (Code BF), NASA/HQ, February 1985. Referred to on page(s) 59.
NASA Headquarters, NMI 5350.1A, Maintainability and Maintenance Planning Policy, Office of Safety and Mission Assurance (Code Q), NASA/HQ, September 26, 1991. Referred to on page(s) 99, 100.
NASA Headquarters, NMI 7120.4 (with NHB 7120.5), Management of Major System Programs and Projects, Office of the Administrator (Code A), NASA/HQ, November 1993 (revision forthcoming). Referred to on page(s) ix, xi, 3, 13.
NASA Headquarters, NMI 8010.1A, Classification of NASA Payloads, Safety Division (Code QS), NASA/HQ, 1990. Referred to on page(s) 38, 123.
NASA Headquarters, NMI 8010.2A, Use of the Metric System of Measurement in NASA Programs, Office of Safety and Mission Quality (Code Q), NASA/HQ, June 11, 1991. Referred to on page(s) 139.
NASA Headquarters, NMI 8020.7D, Biological Contamination Control for Outbound and Inbound Planetary Spacecraft, Office of Space Science (Code S), NASA/HQ, December 3, 1993. Referred to on page(s) 115.
NASA Headquarters, NMI 8070.4A, Risk Management Policy, Safety Division (Code QS), NASA/HQ, undated. Referred to on page(s) 37, 44.
National Environmental Policy Act (NEPA) of 1969, as amended (40 CFR 1500-1508). Referred to on page(s) 112.
Navy, Department of the, Producibility Measurement Guidelines: Methodologies for Product Integrity, NAVSO P-3679, Office of the Assistant Secretary for Research, Development, and Acquisition, Washington, DC, August 1993. Referred to on page(s) 112.
Nuclear Regulatory Commission, U.S. (by N.H. Roberts, et al.), Fault Tree Handbook, NUREG-0492, Office of Nuclear Regulatory Research, January 1980. Referred to on page(s) 94.
Office of Management and Budget, Guidelines and Discount Rates for Benefit-Cost Analysis of Federal Programs, Circular A-94, OMB, October 1992. Referred to on page(s) 79.
Pace, Scott, U.S. Access to Space: Launch Vehicle Choices for 1990-2010, The Rand Corporation, R-3820-AF, March 1990.
Presidential Directive/National Security Council Memorandum-25 (PD/NSC-25), Scientific or Technological Experiments with Possible Large-Scale Adverse Environmental Effects and Launch of Nuclear Systems into Space, December 14, 1977. Referred to on page(s) 114.
Procedures for Implementing the National Environmental Policy Act (14 CFR 1216.3). Referred to on page(s) 112.
Reilly, Norman B., Successful Systems Engineering for Engineers and Managers, Van Nostrand Reinhold, New York, 1993. Referred to on page(s) 1.
Ruskin, Arnold M., "Project Management and System Engineering: A Marriage of Convenience", Claremont Consulting Group, La Canada, California; presented at a meeting of the Southern California Chapter of the Project Management Institute, January 9, 1991. Referred to on page(s) 11.
Saaty, Thomas L., The Analytic Hierarchy Process, McGraw-Hill, New York, 1980. Referred to on page(s) 75.
Sage, Andrew P., Systems Engineering, John Wiley & Sons, New York, 1992. Referred to on page(s) 1.
Shishko, R., Catalog of JPL Systems Engineering Tools and Models (1990), JPL D-8060, Jet Propulsion Laboratory, 1990.
Shishko, R., MESSOC Version 2.2 User Manual, JPL D-5749/Rev. B, Jet Propulsion Laboratory, October 1990. Referred to on page(s) 81.
Sivarama Prased, A.V., and N. Somasehara, "The AHP for Choice of Technologies: An Application", Technology Forecasting and Social Change, Vol. 38, pp. 151-158, September 1990.
Smith, Jeffrey H., Richard R. Levin and Elisabeth J. Carpenter, An Application of Multiattribute Decision Analysis to the Space Station Freedom Program—Case Study: Automation and Robotics Technology Evaluation, JPL Publication 90-12, Jet Propulsion Laboratory, May 1, 1990. Referred to on page(s) 76.
SP-7012 (NASA Special Publication), The International System of Units; Physical Constants and Conversion Factors, E.A. Mechtly, published by the NASA Scientific and Technical Information Office, 1973; U.S. Government Printing Office (stock number 3300-00482).
Steuer, R., Multiple Criteria Optimization, John Wiley and Sons, Inc., New York, 1986.
Stewart, Rodney D., Cost Estimating, 2nd ed., John Wiley and Sons, Inc., New York, 1991.
Taguchi, Genichi, Quality Engineering in Production Systems, McGraw-Hill, New York, 1989. Referred to on page(s) 111.
Wagner, G. M., Principles of Operations Research—With Applications to Managerial Decisions, Prentice Hall, 1969.
  • NASA Systems Engineering Handbook Page 138Index Cost-effectiveness 4, 5, 9, 10, 29, 91, 96,Advanced projects 13 99, 111Alpha—see Space Station Alpha in trade studies 67-69, 72, 74, 77Analytic Hierarchy Process (AMP) 75 Cost Estimating Relationship (CER) 80,Apollo 9, 10, 85 81, 88Architecture—see system architecture Cost/Schedule Control System (C/SCS)Audits 30, 48, 49, 58, 96 59Availability Critical Design Review (CDR) 18, 19, 45,measures of 62 52, 53, 56, 58,as a facet of effectiveness 83-85, 96 59, 96, 106models of 85-87, 95, 98 Critical Item/Issue List (CIL) 39, 44, 91,as part of the LSA 101 125Baselines 4, 8, 14, 17, 18, 27, 36, 45 Critical path 13, 33, 34control and management of 10, 21, 28-30, Decision analysis 39, 41, 7644-48, Decision sciences 7, 77126 Decision support package 10, 71in C/SCS 60 Decision trees 41, 70in reviews 50-54 Design engineer(ing) 6, 8, 28, 49, 77Bathtub curve 92, 93 Design-for-X 91Bayesian probability 40 Design reference mission—see referenceBudget cycle, NASA 18, 25, 26 missionBudgeting, for projects 31, 35, 37, 59 Design-to-cost—see design-to-life-cycleChange Control Board (CCB) costcomposition of 46 Design-to-life-cycle cost 80conduct of 46, 47 Design trades—see trade studiesChange Request (CR) 46-49, 64, 65, 71, Digraphs 39, 4180, 91, 95, 103 Discounting—see present discountedConcurrent engineering 21, 22, 25, 28, value29, 39, 103 Earned value 7, 31, 60, 62Configuration control 4, 8, 11, 20, 45-48, Effectiveness 4, 5, 9, 10126 facets of 83-85, 91Configuration management 10, 17, 29, in TPM 6144-48, 126 in trade studies 67-70, 100, 102Congress 3, 18, 25, 26, 114 Engineering Change Request (ECR)—Constraints 3, 5, 8-11, 18, 28, 35, 58, 67- see change70, 72, 77 requestContingency planning 39, 44 Engineering specialty disciplines 6, 91-Continuous Acquisition and Life-Cycle 115Support (CALS) in concurrent engineering 22-25103 in SEMP 28, 29, 119Control gates 13-22, 27, 45, 48, 49, 50-58, in trade studies 69, 77, 80, 8579 Environmental Assessment (EA) 112-114Cost (see also life-cycle cost) 4, 5, 9, 10 Environmental Impact Statement (EIS)account structure 27, 30, 31, 33 112 -114caps 35, 37 Estimate at Completion (EAC) 31, 60, 88estimating 80-83 Event trees 42fixed vs variable 37 Failure Modes and Effects Analysisin trade studies 67-69, 74, 77 (FMEA)—see Fail-ureoperations 81, 132-134 Modes, Effects, and Criticality Analysisoverruns 17, 77 Failure Modes, Effects, and Criticalityspreaders 82 Analysis
  • NASA Systems Engineering Handbook Page 139(FMECA) 39,41,91,95,98, 102, 105 Lexicon, NASA 117Fault avoidance 94 Life-cycle cost (see also cost) 8, 10, 77-Fault tolerance 95 83Fault tree 39, 42, 94 components of 77, 78Feasibility 14, 18, 21, 50, 62 controlling 79, 80, 83, 111Feasible design 5, 18 Linear programming 7, 72Figure of merit 74, 75 Logistics Support Analysis (LSA) 53, 91,Flight Readiness Review (FRR) 19, 53, 96-103, 11954, 96, 108, 115 Logistics Support Analysis RecordFreedom—see Space Station Freedom (LSAR) 98-103Functional Configuration Audit (FCA) 58, Lunar Excursion Module (LEM) 9296 Maintainability 80, 92, 96-99Functional Flow Block Diagram (FFBD) concept 9768, 98, 111, definition of 96127-129 models 98Functional redundancy 94 MaintenanceGalileo 94 plan 97Game theory 7 time lines 98, 129, 131Gantt chart 35, 36 Make-or-buy 29Goddard Space Flight Center (GSFC) 81 Margin 43, 44, 62-64 Marshall Space Flight Center (MSFC) x handbooks 75, 89, 109Hazard rate 92, 83 historical cost models 81Heisenberg—see uncertainty principle Material Review Board (MRB) 96Hubble Space Telescope 98 Mean Time Between Failures (M1BF) 80,IEEE 1, 42, 139, 141, 142 98, 132-134Improvement Mean Time To Failure (MTTF) 86, 92, 98continuous 64 Mean Time to Repair (or Restore) (MTTR)product or process 11, 20, 25, 47, 78, 86 86, 98Independent Verification and Validation Metric system(IV&V) 110 conversion factors for 142-144Inflation, treatment of 78, 79, 82 definition of units in 139 -141Institutional Operating Plan (IOP) 25 Military standards 41, 86, 100, 102Integrated Logistics Support (ILS) 17, 29, Mission analysis 730, 53, 78, 85-87, Mission assurance 9192, 99-103, 105, 119 Mission Needs Statement (MNS) 14, 17,concept 96, 97 45, 56plan 19, 87, 97-99 Models, mathematicalIntegration characteristics of good 72, 73conceptual 11 of cost 42, 80-83system 4, 11, 19, 22, 29, 30, 32, 33, 53 of effectiveness 42, 83-85Interface Markov 98requirements 9-11, 17, 19, 45, 52, 53, 68, pitfalls in using 72, 88119, programming 7, 72127 relationship to SEMP 29, 83control of 28, 64, 119, 126 types of 71, 72Jet Propulsion Laboratory (JPL) ix, x, 81 use in systems engineering 6, 7, 21, 67-Johnson Space Center (JSC) x, 43, 82 71, 87-89,Kennedy Space Center (KSC) 110 106, 129Learning curve 82 Monte Carlo simulation 39, 42, 69, 88, 89,Lessons learned 11, 19, 30, 39, 41, 94, 98 95
  • NASA Systems Engineering Handbook Page 140Multi-attribute utility theory 75, 76 models 111Network schedules 33-35, 42 Producing system (as distinct from theNon-Advocate Review (NAR) 14, 17, 18, product system)56 1, 27, 59, 64NASA Handbook (NHB) ix, xi, 13, 14, 17, Product Breakdown Structure (PBS) 27,55, 59, 75, 77, 30-33, 59, 61,79,91,95,96,99, 100, 102, 112, 113, 115, 120139 Product development process 8, 13, 20-NASA Management Instruction (NMI) ix, 22xi, 1, 3, 13, 37, Product development teams (PDT) 22,38, 44, 99, 10(), 115, 123, 139 25, 27, 56, 91Nuclear safety 114, 115 Product improvement 11, 78, 103Objective function 4, 10, 74 Product system 1, 27, 99Objectives, of a mission, project, or Program, level of hierarchy 3system 3, 4, 8, 11, Program/project control xi, 44, 59-6117, 28, 37, 38, 50, 51, 56, 67-71, 74-77, 83, Program Operating Plan (POP) 2591 ProjectOffice of Management and Budget (OMB) level of hierarchy 325, 79 management (see also systemOperations concept 9, 14, 17. 22, 68-70, management) xi,73, 77, 86, 101 27, 37, 46, 55, 59, 79Operations research 7 plan 17-19, 28-30, 48, 49Optimization 3, 6, 7, 9, 13, 67, 72, 80, 83, termination 49119 Project life cycleOptimum repair level analysis (ORLA) 98 NASA 13-20Orbital Replacement Unit (ORU) 80, 87, technical aspect of 20-2697, 98, 132-134 Protoflight 106Outer Space Treaty 115 Prototype 13Parametric cost estimation 80-83 QualityPartitioning—see interfaces of systems engineering process 64, 65,Payload 17, 20, 132 75classification 38, 93, 95, 105, 123 as a facet of effectiveness 84, 85nuclear—see nuclear safety Quality Assurance (QA) 6, 29, 49, 52, 91,PERT 33, 39, 42 94-96, 119Physical Configuration Audit (PCA) 48, Quality Function Deployment (QFD) 758, 96 Queueing theory 7Precedence diagram 34 Red team 49Preliminary Design Review (PDR) 17, 18, Reference mission 9, 6921, 22, 25, Reliability 6, 22, 80, 91-95, 98, 10045, 52,56,58,59,96, 106, 115 block diagrams 94 definition of 91 in effectiveness 72, 84-87, 132Present Discounted Value (PDV) 79, 80 engineering 41, 91-93Probabilistic Risk Assessment (PRA) 39, models 89, 94, 9542, 44, 76, 86, in SEMP 29, 93, 11994, 115 in TPM 61Probability distribution 5, 10, 38-43, 63, Reporting—see status reporting and88, 89, 92 assessmentProblem/Failure Reports (P/FR) 64, 95, Requirements 3, 6, 11, 14, 17, 22, 26, 28-96, 108, 110 30, 37, 45, 46,Producibility 111, 112 51-54,56, 58, 59, 64-68, 80, 92, 103
  • NASA Systems Engineering Handbook Page 141allocation of 11, 29, 129 Space Station Alpha 8, 11, 40, 80, 97analysis 7, 9, 83, 127 Space Station Freedom 76, 87, 132-134as part of the baseline 4, 8, 17, 18, 44, 45 Specialty disciplines — see engineeringdocumentation 14, 18, 38 specialty disci -plinesfunctional 9, 18, 50, 58, 61, 69, 77, 83, 95, Specifications 9, 17-19, 22, 25, 29-31, 45,102, 46, 49, 51,103 52, 56-58, 61,62, 64, 92, 100, 105, 107-110,interface 9, 17, 50, 57, 127 119margin 62 64 Status reporting and assessment 31, 58-performance 6, 9, 18, 28, 50, 58, 59, 61, 65, 8868, 70, Successive refinement, doctrine of 7 -11,73, 74, 77, 83, 95, 103 17-19, 27reviews 17, 45, 49, 50, 55-57 Supportability (see also Integratedrole of 27 Logistics Support)software 55-57 85-87, 91, 98-103specialty engineering 45, 92 risk 39, 43traceability 17, 28, 30, 48, 49, 57 Symbolic informationverification 22, 45, 57, 58, 61, 93, 95, 96, desirable characteristics of 48103 -111Reserves in systems engineering 27project 18, 37, 43, 44, 60, 88 System Acceptance Review (SAR) 19, 45,schedule 18, 35, 43 53, 108, 110Resource leveling 35 System architecture 6, 8, 11, 14, 17, 18,Resource planning—see budgeting 27, 31, 68, 69,Risk 72, 73, 77, 79, 83, 89, 100analysis 38, 39, 41, 42, 89 System engineeraversion 41, 76 role of 6, 22, 28, 30, 44, 45, 61, 91, 103identification and characterization 38, 39 dilemma of 6, 79, 83-41, 102 System management (see also projectmanagement 29, 37-44, 91, 92, 105, 111 management) 4,mitigation 38, 39, 42-44, 92 6templates 40, 111 Systems analysis, role of 6, 7, 61, 67types of 39, 40 Systems approach 7Safety reviews 17, 19, 52-56 Systems engineeringScheduling 33-35, 59-61 objective of 4-6S-curve, for costs 88 metrics 64, 65Selection rules, in trade studies 6, 10, 67- process xi, 5, 8-11, 20-25, 28-30, 33, 67-69, 73-77 70, 77,Simulations 29, 72, 87, 89, 95 79, 91, 96, 99, 103, 112SOFIA 120- 122 Systems Engineering Management PlanSoftware 3, 7, 13, 18-22, 45, 47, 48, 52-57, (SEMP) 17, 28-31,69, 78, 93, 38, 40, 63, 64, 70, 83, 86, 91, 93, 99, 10396, 98, 103, 105-111, 126, 127 Systems Engineering Processcost estimating 81 Improvement Taskin WBS 30, 32, 34 (SEPIT) team xi, 3, 20off-the-shelf systems engineering 35, 41, Systems Engineering Working Group75, 89, (SEWG) x, xi, 395 Taguchi methods 111Source Evaluation Board (SEB) 75 TailoringSpace Shuttle 9, 40, 44, 47, 93, 110, 111, of configuration management 45, 47125, 132, 133 by each field center 1
  • NASA Systems Engineering Handbook Page 142of effectiveness measures 83 Weibull distribution 92of product development teams 22, 25, 91 Work Breakdown Structure (WBS) 4, 17,of project cycle 13, 28 19, 27, 59, 80,of project plans 45 81, 119of project reviews 18 development of 30-33of project hierarchy 3 errors to avoid 32, 33of risk management 38, 39 example of 120-122of SEMP 29 and network schedules 34-36of systems engineering process metrics Work flow diagram 3464of verification 104, 105Technical Performance Measure(ment)(TPM)assessment methods for 45, 60, 61, 88relationship to effectiveness measures84relationship to SEMP 29, 63, 64, 119role and selection of 31, 39, 44, 61, 62Test(ing) (see also verification) 3, 6, 11,18, 22, 25, 33,43, 45, 49, 51, 53, 55, 57, 58, 61-63, 69, 80,81,91, 92, 94-100, 102-111Test Readiness Review (TRR) 19, 30, 57,]04, 109Total Quality Management (TQM) 7, 64,111, 119Trade studyin ILS 99-103process 9, 17, 18, 67-71, 77, 100in producibility 111progress as a metric 64, 65in reliability and maintainability 98reports 10, 18, 71in verification 105Trade tree 69, 70Uncertainty, in systems engineering 5, 6,20, 37-44, 69,79, 87-89Uncertainty principle 39Validation 11, 25, 28-30, 61, 96Variances, cost and schedule 60, 61Verification 4, 11, 17-19, 22, 29, 30, 45,103-111concept 105methods 105, 106relationship to status reporting 61, 64, 65reports 107requirements matrix 17, 19, 107, 135, 136stages 106Waivers 48, 53, 58, 108