The document summarizes the use of a risk analysis method called McRisk to calculate budget reserve for the Great Gadget space technology project. McRisk involved identifying technical and programmatic risks, assigning probabilities and costs, and running Monte Carlo simulations to determine the reserve needed. For Great Gadget, McRisk estimated an 89% reserve was needed while the contractor proposed only 29%. Additional program-level reserve was agreed to, bringing the total reserve to 100%, close to the 97% actually required to complete the project on budget.
1. Arthur B. Chmielewski, Project Manager, Jet Propulsion Laboratory, California Institute of Technology
Charles Garner, Senior Engineer, Jet Propulsion Laboratory, California Institute of Technology
Presentation at the 6th NASA Project Management Challenge, February 24, 2009
Copyright 2009 California Institute of Technology
3. No industry, including aerospace and NASA in particular, has an accepted process for calculating budget reserve for projects. Most projects select budget reserve at a level that is expected to be acceptable to the sponsor and not based on engineering calculation. Most projects tend to vastly overrun this artificial reserve.
The first part of this presentation describes the reserve calculation method called McRisk, which factors in the amount of technical and programmatic risk of a project. The second part illustrates the different aspects of McRisk in a case study.
4. Reserve under-estimation by mission type:

Mission type                              | # of Missions Studied | Reserve under-estimated by
Flagship (> $800M)                        | 4                     | 164%
Medium size (< $300M)                     | 5                     | 19%
Large Instruments (< $150M)               | 2                     | 34%
System Technology Experiments (< $150M)   | 3                     | 131%
Technology Experiments (< $25M)           | 11                    | 107%**
Small Technology Experiments (< $5M)      | 13                    | 315%**

• Reserve estimation errors are very high, with the exception of medium size missions
• Reserve estimation accuracy has not improved in the last 20 years
• Reserve errors are not limited to one organization, center, or type of mission
• Only 3 of 38* missions did not exceed the budget reserve allocation
• Only 1 of 38 missions used a strict process to calculate the amount of needed reserve

* The 38 missions consisted of: 11 microgravity experiments, 13 In-Step experiments, 14 other NASA missions
** No reserve was planned for most technology experiments
5. Our case study - Great Gadget
[Chart: Needed Reserve (10%-100%) vs. Planned Reserve (10%-100%) for NASA missions. Points above the diagonal had underestimated reserve; points below it had overestimated reserve. The Great Gadget and another project using McRisk are marked. Legend: NASA mission not using McRisk; McRisk mission. Data from C. Garner et al., 2002; R. Metzger et al., 2003; R. Schmitz, 1991.]
6. [Chart: project cost vs. time, from Dr. D. Ullman, Making Robust Decisions, 2006. Marked features include the initial internal estimate, the perceived cap and official cap, reserve, scope reductions, end of proposal study (anticipatory delusion), Preliminary Design, Confirmation, the loss aversion domain, sunk costs, and the final cost at the end of the project.]
7. Most projects’ costs follow a “W” curve:
The cost estimates usually start with an internal organization’s cost estimate, which frequently is deemed too high by the management.
The management instructs the team to remove some scope and cut the cost below the perceived sponsor’s threshold.
A proposal is prepared showing this low cost and low risk. Frequently the reserve is shown to be 10-30%. The management and the team are very optimistic about the cost performance. (Psychologists call this anticipatory delusion.)
At the end of the initial study the cost estimate goes up and more scope is removed to stay under the perceived cap. The project is confirmed for the design phase with 10-30% budget reserve.
The project overruns but is allowed to continue because of what economists call “loss aversion”: the sponsor does not want to lose the benefit of “sunk costs” and increases the project cap.
8. In 2002 one of the authors became the manager of a project consisting of 3 space technology experiments – a type of mission which is legendary for huge overruns. To learn from past history, the authors analyzed the cost performance of 24 NASA new technology space experiments*. The cost data were studied to find the most common culprits for overruns and eliminate them from the current project.
Data showed 2 primary reasons for project overruns:
1. Poor cost estimating practices. In response to this conclusion, a study** was initiated on how to improve cost estimating techniques. The results of this study were shown at the Management Forum in 2004.
2. Budget reserves grossly incompatible with risk. This conclusion was dealt with by developing a reserve estimating technique dubbed McRisk.
* “Microgravity Flight Projects Cost Study Report Summary”, C. Garner, A.B. Chmielewski, January 2002
** “Why Estimates aren’t Accurate and What to Do About It?”, D. Ullman, D. Persohn, A.B. Chmielewski, L. Leach, 2003
9. McRisk (Monte Carlo Risk) is a technique for calculating the amount of needed budget reserve commensurate with the risk of the project.
The technique was used to calculate the budget reserve of 7 space experiments:
• One has been completed within its budget reserve
• One has been completed and is presented in this study
• Four have yet to be launched
• One was cancelled
Over the 5 years of the project life-cycle the authors collected cost data on technical risks, programmatic risks, risks predicted, risks mitigated, risks materialized, schedule delays, new technology problems, engineering problems, management problems, carrier problems, cost uncertainty and unknown unknowns. The data were collected and analyzed in a joint, cooperative NASA/JPL/contractor postmortem study.
10. The McRisk process was designed to expose most risks and cost uncertainties experienced by space missions. The process combats the inherent optimism, anticipatory delusion and anchoring effects which make most missions overrun the initial cost estimates.
The McRisk process steps (the next 5 charts describe the process in more detail):
1. Build technical McRisk list
2. Build programmatic McRisk list
3. Account for cost uncertainty and unknown unknowns
4. Build McRisk table
5. Run Monte Carlo code and create S-curve
Then locate the 80% confidence point and verify the result.
11. • Identify ALL possible technical risks for each subsystem
– Use “Worry Generators”: create a list of dozens of well-known risks for software, hardware and management. Use this list in costing exercises.
– “Brainstorming” sessions: team members speculate on risks in “rapid fire” format.
– Comments from experts: interview experienced project managers.
– Comments from reviews: document every comment at each review and add it to the list.
– Institutional past experience: parse the company’s “lessons learned”, management papers, magazine articles.
12. Programmatic risks are tough to account for:
• Engineers dislike predicting programmatic risks
• Managers tend to be optimistic and believe that they will handle programmatics: “I will not let the old problems happen to my project!”
• There is some historical documentation on past technical problems but not on programmatic problems
• Many programmatic problems are created by the sponsor*
Account for programmatic risks by:
– Using Worry Generators
– Interviewing managers from outside of the project
– Putting an experienced manager who is ready to retire on the team
– Adding schedule risks
* E. Anderson et al, “The Success Triangle of Cost, Schedule, and Performance”, 2002
13. The project cost-to-complete estimate is not one point; it is not one cell on an Excel spreadsheet. The cost estimate is a range!
Add a risk item to the table to account for cost estimate uncertainty. The uncertainty, expressed in $K, is a percentage of the estimated cost to complete:
Minimum = -5%
Nominal = +10%
Maximum = +15%
Unknown unknowns can be assumed to be a percentage of the cost to go and added as a risk item. This amount depends on how thorough the risk identification effort is; 10-15% should be used even for the most diligent risk identification process.
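As an illustration of this step, the sketch below (not part of the original presentation; the function names and the $10,000K figure are hypothetical) turns the stated percentages into $K amounts that could be appended to the McRisk table as risk items.

```python
def cost_uncertainty_item(cost_to_complete_k: float) -> tuple[float, float, float]:
    """Cost-estimate uncertainty as a (min, nominal, max) adjustment in $K,
    using the percentages from the chart: -5%, +10%, +15% of the cost to complete."""
    return (-0.05 * cost_to_complete_k,
            +0.10 * cost_to_complete_k,
            +0.15 * cost_to_complete_k)

def unknown_unknowns_item(cost_to_go_k: float, fraction: float = 0.10) -> float:
    """Unknown unknowns as a fraction of the cost to go; the chart suggests
    10-15% even for the most diligent risk identification process."""
    return fraction * cost_to_go_k

# Hypothetical $10,000K cost to complete / cost to go:
print(cost_uncertainty_item(10_000))   # approx (-500, +1000, +1500) $K
print(unknown_unknowns_item(10_000))   # approx 1000 $K
```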
14. Grouping the risks helps to reduce complexity: 232 risks were listed and coalesced into 54 for Great Gadget.
Establish the probability of occurrence for each risk.
Estimate the minimum, nominal and maximum impact in $K if the risks were to materialize.

Example of one row of the McRisk table (not shown: consequence rating, consequence description, mitigation plan, risk to schedule and risk retirement date):

Risk Item | Risk Name | Risk Description | Probability % | Cost to Mitigate Min $K | Cost to Mitigate Most Likely $K | Cost to Mitigate Max $K | Risk to Reserves* $K
16 | Parts Costs | Low volume purchase, minimum buys, parts failures result in increased parts costs | 80 | 50 | 125 | 125 | 100

* Risk to reserves = probability x most likely cost to mitigate
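A minimal sketch of how a McRisk table row and its risk-to-reserves value could be represented in code. The presentation does not specify an implementation, so the Python class and field names below are illustrative assumptions; only the formula (probability x most likely cost to mitigate) and the example numbers come from the chart.

```python
from dataclasses import dataclass

@dataclass
class RiskItem:
    """One row of a McRisk table (illustrative fields only; the real table also
    carries consequence rating, mitigation plan, risk to schedule, etc.)."""
    name: str
    description: str
    probability: float        # probability of occurrence, 0.0-1.0
    cost_min_k: float         # minimum cost to mitigate, $K
    cost_most_likely_k: float # most likely cost to mitigate, $K
    cost_max_k: float         # maximum cost to mitigate, $K

    def risk_to_reserves_k(self) -> float:
        """Risk to reserves = probability x most likely cost to mitigate ($K)."""
        return self.probability * self.cost_most_likely_k

# The example row from this chart: 80% probability, $125K most likely cost.
parts_costs = RiskItem(
    name="Parts Costs",
    description="Low volume purchase, minimum buys, parts failures "
                "result in increased parts costs",
    probability=0.80,
    cost_min_k=50.0,
    cost_most_likely_k=125.0,
    cost_max_k=125.0,
)
print(round(parts_costs.risk_to_reserves_k(), 1))  # 100.0 ($K), matching the table
```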
15. Run the Monte Carlo code using the McRisk table and create an S-curve. The MC code performs thousands of trials where risk item costs are determined randomly within the min-to-max triangular distribution cost range specified in the McRisk table. The probability of the risk from the McRisk table is used to determine how frequently the cost of the given risk is included.
Use the 80% point on the S-curve to determine the amount of reserve needed. This is the point where there are enough dollars to cover the problems that occur in 80% of the cases. (80% is a frequently used point on the S-curve in cost exercises. With experience, a different probability will be selected depending on the type of the mission.)
[Chart: Monte Carlo S-curve of probability of success vs. reserve amount in $K, with the 80% probability-of-success point marked.]
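A minimal sketch of the Monte Carlo step described above, written in Python for illustration; the presentation does not name the actual code used, and the function name, trial count and three-item risk list below are assumptions (costs taken from example rows shown elsewhere in this deck).

```python
import random

def mcrisk_reserve_k(risks, n_trials=10_000, confidence=0.80, seed=1):
    """Monte Carlo sketch of the McRisk S-curve calculation.

    Each element of `risks` is (probability, cost_min, cost_most_likely, cost_max)
    in $K. In every trial a risk is included with its stated probability and, when
    included, its cost is drawn from a triangular distribution between min and max
    with the most likely value as the mode. The value returned is the reserve that
    covers the simulated total cost in the requested fraction of trials (the 80%
    point on the S-curve by default).
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(n_trials):
        total = 0.0
        for prob, c_min, c_ml, c_max in risks:
            if rng.random() < prob:
                total += rng.triangular(c_min, c_max, c_ml)  # (low, high, mode)
        totals.append(total)
    totals.sort()
    return totals[int(confidence * (n_trials - 1))]

# Illustrative three-item risk list (probability, min, most likely, max in $K):
risks = [
    (0.80, 50, 125, 125),   # parts costs
    (0.30, 50, 75, 75),     # memory usage
    (0.50, 50, 100, 200),   # carrier thermal environment analysis
]
print(round(mcrisk_reserve_k(risks), 1))  # reserve in $K at the 80% point
```

Dividing the $K reserve read off the S-curve by the cost to complete gives the reserve percentage quoted elsewhere in this deck (for example, 89% for the full 54-item list).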
16. This case study attempts to illustrate the ability of the McRisk method to predict the necessary reserve. To put the focus on McRisk and not on the mission or any of its developers, we called the project the Great Gadget instead of referring to it by its actual name.
The Great Gadget project was a good illustration of McRisk benefits and shortcomings because:
• it has already launched,
• it experienced a plethora of issues,
• it had a mix of new technology, hardware, software, testing and integration challenges.
17. • The Great Gadget was a $15M space technology validation experiment developed by a very capable R&D contractor. The contractor wanted to test its new technology in space for the benefit of NASA missions.
• The Gadget weighed only a few kilograms, used only a few watts and was the size of a shoe box.
• The Gadget was roughly 30% mechanical, 60% electrical, and 10% software.
• 20 engineers worked on the Gadget at the peak of development.
• The Gadget flew on a DOD carrier as a piggy-back payload.
18. • The Gadget was initially proposed by the contractor with a 4% budget reserve on the cost-to-complete. In frank postmortem discussions with the contractor we heard:
– “We did not want to show a lot of risk. It would defeat the proposal.”
– “If the sponsor wanted more reserve he could put more money into the coffers.”
• A 30% reserve was suggested by many NASA managers during project cost reviews.
• Based on an analysis* of 22 space experiments which overran by an average of “π” (315%), the 30%, let alone the 4% reserve, seemed grossly inadequate. The project desired to calculate the realistic amount of budget reserve commensurate with risk.
* A.B. Chmielewski, C. Garner, “Analysis of Space Experiments Overruns”, Feb 2002
19. The Monte Carlo analysis of the table showed that the necessary level of the reserve was 89% of the cost-to-complete.
The contractor was very worried about showing such a high project risk. The contractor reduced the risk list to 28 items, which required 29% budget reserve.
Heated discussions took place on the subject of the right amount of reserve and its relation to perceived risk and confirmation of the project.
The postmortem showed that of the 26 risk items removed by the contractor, 60% did materialize and 40% did not.
The actual amount of reserve that was necessary was 97%: 8 percentage points more than predicted by McRisk and 67 percentage points higher than the industry-standard 30%.
20. Risk Item | Risk Name | Risk Description | Probability % | Cost to Mitigate Min $K | Cost to Mitigate Most Likely $K | Cost to Mitigate Max $K | Risk to Reserves* $K
9 | Parts Costs | Low volume purchase, minimum buys, parts failures result in increased parts costs | 80 | 200 | 200 | 200 | 160
27 | Memory Usage | Software does not fit into available memory | 30 | 50 | 75 | 75 | 22.5

* Risk to reserves = probability x most likely cost to mitigate
21. Risk Item | Risk Name | Risk Description | Probability % | Cost to Mitigate Min $K | Cost to Mitigate Most Likely $K | Cost to Mitigate Max $K | Risk to Reserves* $K
15 | Optics Misalignment | Misalignment discovered during environmental testing | 20 | 40 | 60 | 60 | 12
36 | Carrier Thermal Environment | Analysis may be required for active thermal control | 50 | 50 | 100 | 200 | 50

* Risk to reserves = probability x most likely cost
22. Risk analysis
[Chart: two Monte Carlo S-curves of probability of success vs. reserve required in $K, each read at the 80% probability-of-success point.
Original JPL risk list, 54 items: calculated needed reserve 89%.
Contractor’s risk list, 28 items: calculated reserve 29%.]
23. [Chart: reserve levels for the Great Gadget.
Reserve initially proposed by the contractor: 4%.
Reserve calculated with McRisk by the project: 89%.
Total reserve held by the program: 100% (29% reserve formally accepted plus 71% additional program reserve).
Postmortem, reserve really needed: 97%.]
The contractor initially proposed a 4% reserve. The project’s McRisk calculations showed that an 89% reserve would be needed. The contractor proposed a much smaller 29% reserve based on their risk list. A compromise was reached at the program level: a 29% reserve was assigned to the project but 71% additional reserve (for a total of 100%) was held by the Program, with approval by NASA HQ. This additional program reserve prevented an overrun, which in the final tally was 97% (29% + 68%).
24. 97% – Postmortem: total actual overrun
89% – Needed reserve calculated by McRisk
62% – Percentage of predicted risks that did occur
52% – Average predicted risk likelihood
38% – Percentage of predicted risks that did not occur
14% – Percentage of postmortem unknown unknowns (risks not on the risk list)
The postmortem analysis showed that McRisk underestimated the reserve by 8% (89% vs. 97%).
The causes for the 8% underestimate were:
• Staff underestimating the probability of an average risk by 10%
• Staff underestimating the cost of mitigation of an average risk by 20%
• The actual cost of mitigating unknown unknowns was 14% vs. the 10% prediction
26. Programmatic risks accounted for 34% of all reserve expenditures, but they are frequently not included in missions’ risk lists. The following risks were correctly identified in the McRisk table for Great Gadget:
• System engineering understaffing
• Allowances for contract mods and adjustments
• Allowances for differences between operations of companies and NASA
• Reviews, action items, contracting pushups – conforming to NASA contractual practices and flight processes
• Launch delays
• Loss of key personnel
• Schedule delays and extension of environmental tests
27. A common programmatic risk is the speed of staffing and de-staffing. The Great Gadget experienced approximately 7% labor cost growth due to non-optimal staffing, as shown on the chart by the light blue area between the red and yellow curves. Tremendous credit must be given to the contractor that this amount was only 7% despite the many starts and stand-downs the project experienced due to NASA funding profiles.
[Chart: staffing level over time; labels include proposal, CDR, and plan.]
28. McRisk not only correctly identified the need for approximately 90% reserve on the cost to go at Confirmation, but also made it possible to engage the sponsor in factual discussions about risk and the reserve needs.
Great credit goes to the NASA Program Manager, who was very engaged in the reserve discussions and calculations with the project manager. The Program Manager recognized the amount of risk involved in the project and recommended selecting one less experiment in his program to make room for the large reserve. NASA HQ approved that approach. Without their support the Great Gadget would have become another project that overran available funds despite a cost-conscientious contractor.
29. The McRisk method of calculating a project’s needed reserve allows a statistical estimate of the needed reserve budget to be made that is more consistent with a project’s actual developmental risk than existing, more traditional methods.
The McRisk method not only specifies the process for calculation of the reserve, but also provides a guide for collecting accurate risk data by using historical information and accounting for cost uncertainty, unknown unknowns, and the human psychology that can greatly affect the inputs.
30. Even quite accurate reserve calculation methods such as McRisk may not be useful if the upper management and sponsors of space projects are not supportive of missions which reveal the true picture of risk and associated costs.
One way to combat overruns is to reward honest, quantitative and upfront disclosure of risk and to discourage “strategic misrepresentation”*, unfounded optimism and simplistic guesses in cost estimating and budget reserve determination.
* “Underestimating Costs in Public Works Projects”, APA Journal, 2002, B. Flyvbjerg
31. The authors would like to thank the Contractor personnel for their hard work, technical knowledge and perseverance, without which the Great Gadget would have never been launched no matter what the reserve level. We would also like to thank the Contractor for engaging in frank discussions with the authors regarding cost, which made this presentation possible.
Many thanks to the Program Executive and the Program Manager whose tough decisions and support allowed for full implementation of the McRisk approach.