Getting To Done - A Master Class Workshop - Glen Alleman
The Principles, Processes, Practices, and Tools to Increase the Probability of Successfully Completing Projects On-Time, On-Budget, and with the Needed Capabilities
IMP & WBS - Getting Both Right is Paramount - Glen Alleman
The WBS is the starting point for program success. It tells us what DONE looks like in terms of deliverables.
The Integrated Master Plan (IMP) tells us how the increasing maturity of the deliverables will be assessed at each Program Event.
The Integrated Master Schedule (IMS) tells us the order of the Work Packages needed to produce this increasing maturity.
The Control Account Plan (CAP) defines the authorized scope, budget, and period of performance for the work that produces the deliverables defined in the WBS, assessed in the IMP, and sequenced in the IMS.
The notion of integrating cost, schedule, technical performance, and risk is possible in theory. In practice, care is needed to assure credible information is provided to the Program Manager.
PGCS 2019 Master Class: Integrating SE with PPM - Glen Alleman
Projects are managed as if they were merely complicated ‒ when in fact they are complex. They are planned as if everything is known or knowable at the start ‒ when in fact they involve high levels of reducible (epistemic) and irreducible (aleatory) uncertainty and resulting risk. Combining Systems Engineering and Project Management is a critical success factor in reducing these uncertainties, resulting in an increased probability of program success.
Capabilities‒Based Planning - identifying the capabilities needed to accomplish a mission or fulfill a business strategy.
Only when capabilities are defined can we start requirements elicitation.
Delivering programs with less capability than promised, while exceeding the planned cost and duration, distorts decision making, contributes to cost growth on other programs, undermines the Federal government's credibility with taxpayers, and erodes public support for these programs.
Many reasons have been hypothesized and documented for cost and schedule growth. The authors review some of these reasons, and propose that government and contractors use the historical variability of the past programs to establish cost and schedule estimates at the outset and periodically update these estimates with up-to-date risks, to increase the probability of program success. For this to happen, the authors recommend changes to estimating, acquisition and contracting processes.
Cost and schedule growth for complex projects is created when unrealistic technical performance expectations, unrealistic cost and schedule estimates, inadequate risk assessments, unanticipated technical issues, and poorly performed, ineffective risk management combine to produce project technical and programmatic shortfalls.
Establishing Schedule Margin Using Monte Carlo Simulation - Glen Alleman
The first-order goal is to develop a resource-loaded, risk-tolerant Integrated Master Schedule, derived from the Integrated Master Plan, that clearly shows the increasing maturity of the program's deliverables through vertical and horizontal traceability to the program's requirements.
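As a sketch of what such a Monte Carlo derivation of schedule margin can look like, here is a minimal example in Python, assuming a three-task serial path with hypothetical triangular duration estimates (all task names and values are illustrative, not from the referenced presentation):

```python
import random

# Hypothetical serial tasks: (optimistic, most likely, pessimistic) days.
tasks = {
    "Design":    (10, 15, 25),
    "Build":     (20, 30, 50),
    "Integrate": (5, 10, 20),
}

def simulate_totals(trials=10_000, seed=42):
    """Sample each task from a triangular distribution and sum the serial path."""
    rng = random.Random(seed)
    return sorted(
        sum(rng.triangular(lo, hi, ml) for lo, ml, hi in tasks.values())
        for _ in range(trials)
    )

totals = simulate_totals()
deterministic = sum(ml for _, ml, _ in tasks.values())  # the single-point plan: 55 days
p80 = totals[int(0.80 * len(totals))]                   # 80th-percentile completion
margin = p80 - deterministic                            # schedule margin protecting the date
print(f"Plan {deterministic}d, P80 {p80:.1f}d, margin {margin:.1f}d")
```

The margin is the difference between the deterministic plan and the completion time at the chosen confidence level (P80 here); picking the percentile is a management decision, not a statistical one.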
Establishing the Performance Measurement Baseline (PMI Northern Utah) (v1) - Glen Alleman
The Performance Measurement Baseline is a time-phased, budgeted description of all the work on a project to produce the needed capabilities that fulfill the business case or accomplish the mission.
Increasing the Probability of Project Success with Five Principles and Practices - Glen Alleman
There are many approaches to managing projects in every domain.
This seminar lays the foundations for increasing the probability of project success, no matter the domain, the technology, or the approach to delivering the outcomes of the project.
The principles of this approach are immutable.
The practices for implementing the principles are universally applicable.
Each chart in this presentation contains guidance that can be applied to your project, no matter the domain.
In our short hour here, we’re going to cover a lot of material.
The bibliography contains the supporting materials we can tailor to your individual project.
If a project manager is consumed with managing risk, there is little time to manage opportunities. Good risk management is not about fear of failure, it is about removing barriers to success. This is when opportunity management emerges.
Solving Project Resource Allocation Problems with Aerospace ERP - Kevin West
Defence manufacturing is all about project manufacturing and project accounting. And that means enterprise resource planning (ERP) software for defence manufacturing must include robust functionality for project management and specifically project cost allocation.
Establishing the Performance Measurement Baseline - Glen Alleman
The Performance Measurement Baseline is a time-phased schedule of all work to be performed, the budgeted cost for this work, and the organizational elements that produce the deliverables from this work.
What Makes a Good Concept of Operations? - Glen Alleman
A Concept of Operations is a user-oriented document that describes system characteristics for a proposed system from the user's perspective. The CONOPS also describes the user organization, mission, and objectives from an integrated-system point of view, and is used to communicate overall qualitative and quantitative characteristics to the stakeholders.
From WBS to Integrated Master Schedule - Glen Alleman
A step-by-step guide to increasing the probability of program success, starting with the WBS, developing the Integrated Master Plan and Integrated Master Schedule, risk-adjusting the IMS, and measuring progress to plan in units of measure meaningful to the decision makers.
A recent College of Performance Management webinar on using Technical Performance to inform Earned Value Management: six steps to building a credible Performance Measurement Baseline that connects the dots between all the elements of the program.
EIA-748-C asks us to objectively assess accomplishments at the work performance level. Section 3.8 of 748-C also tells us that Earned Value is a direct measurement of the quantity of work accomplished, and that the quality and technical content of work is controlled by other processes. To provide visibility into integrated cost, schedule, and technical performance, we need more than CPI and SPI. We need measures of increasing technical performance.
Earned Value Management involves more than just cost and schedule. Six Business Systems, including EVM, are the basis of credible program performance management.
Here's a suggestion of how to "connect the dots."
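One way to "connect the dots" numerically is to cap the earned value claimed by the technical performance actually demonstrated. This is a hedged sketch, not the EIA-748 procedure: the work package, TPM names, values, and the simple averaging scheme are all hypothetical.

```python
# Hypothetical work package: budget and schedule-based percent complete.
BAC = 200_000
schedule_pct_complete = 0.70  # what schedule progress alone would let us claim

# Technical Performance Measures: planned vs. achieved values at this point.
tpms = [
    {"name": "Pointing accuracy", "planned": 0.95, "achieved": 0.80},
    {"name": "Data throughput",   "planned": 0.90, "achieved": 0.90},
]

# Technical achievement ratio: fraction of planned performance demonstrated,
# capped at 1.0 so over-achievement cannot inflate earned value.
achievement = sum(min(t["achieved"] / t["planned"], 1.0) for t in tpms) / len(tpms)

# Earned value informed by technical performance: progress is claimed only
# to the extent the product actually performs as planned.
bcwp = BAC * schedule_pct_complete * achievement
print(f"Achievement {achievement:.2f}, BCWP ${bcwp:,.0f} "
      f"(vs ${BAC * schedule_pct_complete:,.0f} unadjusted)")
```

The design point is that a shortfall in any TPM shows up immediately as an unfavorable cost and schedule variance, instead of being hidden behind effort-based percent complete.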
Building a Credible Performance Measurement Baseline - Glen Alleman
Establishing a credible Performance Measurement Baseline, with a risk-adjusted Integrated Master Plan and Integrated Master Schedule, starts with the WBS and connects Technical Measures of progress to Earned Value.
Forecasting Cost and Schedule Performance - Glen Alleman
For credible decisions to be made, we need confidence intervals on all the numbers we use to make decisions.
These confidence intervals come from the underlying statistics and the related probabilities.
Statistical forecasting, using time series analysis of past performance, is mandatory for any credible discussion of project performance in the future.
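A minimal illustration of putting a confidence interval on a forecast: bootstrap-resample a (hypothetical) CPI history and propagate each resampled efficiency through the standard EAC formula. The CPI values, budget figures, and 90% level are all assumptions for the sketch.

```python
import random
import statistics

# Hypothetical monthly CPI observations from work performed to date.
cpi_history = [0.92, 0.95, 0.88, 1.01, 0.94, 0.90, 0.97, 0.93]

BAC = 1_000_000   # budget at completion ($)
BCWP = 400_000    # earned value to date ($)
ACWP = 430_000    # actual cost to date ($)

def eac(cpi):
    """Estimate at completion: actuals plus remaining work at the given efficiency."""
    return ACWP + (BAC - BCWP) / cpi

def bootstrap_eac_interval(trials=5_000, seed=1, alpha=0.10):
    """Resample the CPI history to put a confidence interval around the EAC."""
    rng = random.Random(seed)
    estimates = sorted(
        eac(statistics.mean(rng.choices(cpi_history, k=len(cpi_history))))
        for _ in range(trials)
    )
    return estimates[int(alpha / 2 * trials)], estimates[int((1 - alpha / 2) * trials)]

low, high = bootstrap_eac_interval()
point = eac(statistics.mean(cpi_history))
print(f"Point EAC ${point:,.0f}; 90% interval ${low:,.0f} to ${high:,.0f}")
```

Reporting the interval rather than the point estimate is what makes the forecast credible: a decision maker sees how much the EAC could move given the observed performance variability.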
Building a Risk-Tolerant Integrated Master Schedule - Glen Alleman
Traditional approaches to planning, scheduling, and managing technical performance are not adequate to defend against program disruptions. This paper outlines the six steps for building a risk-tolerant schedule, using a field-proven approach.
EVM is more than people, processes, and tools. It's an integrated service that provides actionable information to the decision makers. Webinar, EcoSys, October 22, 2014.
Earned Value Management is more than people, processes, and tools. It is an integrated service that provides actionable information to the decision makers.
Probabilistic Schedule and Cost Analysis - Glen Alleman
An overview of the probabilistic risk analysis processes that can be applied to a program. Although it may not appear to be a "simple" overview, this material is the tip of the iceberg of this complex topic.
Only schedule analysis has been addressed in detail here. The cost aspects of forecasting and simulation must be addressed as well to complete the connections between schedule and cost.
Probabilistic cost will be surveyed here, but an in-depth review is left for a later time.
Building a Credible Performance Measurement Baseline (PM Journal) - Glen Alleman
Establishing a credible Performance Measurement Baseline, with a risk-adjusted Integrated Master Plan and Integrated Master Schedule, starts with the WBS and connects Technical Measures of progress to Earned Value.
Notes on IT Programmatic Risk in 5 Not-So-Easy Pieces - Glen Alleman
Risk management in the IT business is similar to risk management in most other domains. Here's a starting point for understanding the steps needed to manage risk.
Similar to Integrating cost, schedule, and technical performance (20)
Planning projects usually starts with tasks and milestones. The planner gathers this information from the participants – customers, engineers, subject matter experts. This information is usually arranged in the form of activities and milestones. PMBOK defines "project time management" in this manner. The activities are then sequenced according to the project's needs and mandatory dependencies.
Increasing the Probability of Project Success - Glen Alleman
Risk Management is essential for development and production programs. Information about key cost, performance, and schedule attributes is often uncertain or unknown until late in the program.
Risk issues that can be identified early in the program, which may potentially impact the program, termed Known Unknowns, can be alleviated with good risk management. -- Effective Risk Management 2nd Edition, Page 1, Edmund Conrow, American Institute of Aeronautics and Astronautics, 2003
From Principles to Strategies for Systems Engineering - Glen Alleman
From Principles to Strategies: how to apply the Principles, Practices, and Processes of Systems Engineering to solve complex technical, operational, and organizational problems.
Starting with the development of a Rough Order of Magnitude (ROM) estimate of work and duration, creating the Product Roadmap, Release Plan, and the Product and Sprint Backlogs, executing and statusing the Sprint, and informing the Earned Value Management System, using Physical Percent Complete as the measure of progress to plan.
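A toy sketch of the last step, computing EV metrics from physically assessed percent complete (work package names, budgets, and percentages are invented for illustration):

```python
# Hypothetical work packages: budget, physically assessed, and planned percent complete.
work_packages = [
    {"name": "Telemetry service", "bac": 120_000, "physical_pct": 0.60, "planned_pct": 0.75},
    {"name": "Ground display",    "bac":  80_000, "physical_pct": 0.40, "planned_pct": 0.50},
]
actual_cost = 112_000  # ACWP from the accounting system

bcwp = sum(wp["bac"] * wp["physical_pct"] for wp in work_packages)  # earned value
bcws = sum(wp["bac"] * wp["planned_pct"] for wp in work_packages)   # planned value
cpi = bcwp / actual_cost  # cost efficiency: value earned per dollar spent
spi = bcwp / bcws         # schedule efficiency: value earned vs. value planned
print(f"BCWP ${bcwp:,.0f}  CPI {cpi:.2f}  SPI {spi:.2f}")
```

The point is that BCWP comes from a physical assessment of the product, not from effort spent; CPI and SPI then fall out of the standard ratios.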
Program Management Office, Lean Software Development, and Six Sigma - Glen Alleman
Successfully combining a PMO, Agile, and Lean/Six Sigma starts with understanding what benefit each paradigm brings to the table. Architecting a solution for the enterprise requires assembling a "system" of processes, people, and principles – all sharing the goal of business improvement.
This resource document describes the Program Governance Road map for product development, deployment, and sustainment of products and services in compliance with CMS guidance, ITIL IT management, CMMI best practices, and other guidance to assure high quality software is deployed for sustained operational success in mission critical domains.
Seven Habits of a Highly Effective Agile Project Manager - Glen Alleman
Recent neurological studies indicate that the role of emotion in human cognition is essential; emotions are not a luxury. Instead, emotions play a critical role in rational decision–making, in perception, in human interaction, and in human intelligence. Habits are the intersection of knowledge, skill, and desire.
The 5 Immutable Principles of Project Management - Glen Alleman
Software development methods are sometimes confused with Project Management principles. There are 5 irreducible principles used to manage projects, no matter the domain or context. We need to assure our development work is guided by these 5 Project Management principles.
Pushing the limits of ePRTC: 100ns holdover for 100 days - Adtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 - Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
UiPath Test Automation using UiPath Test Suite series, part 4 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... - Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Elevating Tactical DDD Patterns Through Object Calisthenics - Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Essentials of Automations: The Art of Triggers and Actions in FME - Safe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
GridMate - End to end testing is a critical piece to ensure quality and avoid... - ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf - Paige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
Transcript: Selling digital books in 2024: Insights from industry leaders - T... - BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
A tale of scale & speed: How the US Navy is enabling software delivery from l... - sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
5. Connecting the Dots Between the Elements of the Performance Measurement Baseline
[Diagram: Risk, SOW, Cost, WBS, Schedule, and TPM connected to form the PMB]
6. There are Two Types of Uncertainty
Technical: uncertainty about the functional and performance aspects of the program's technology that impacts the produceability of the product or creates delays in the schedule.
Programmatic: uncertainty about the duration and cost of the activities that deliver the functional and performance elements of the program, independent of the technical risk.
7. Risk Assessment and Management Techniques Vary with Maturity†
In order of increasing detail and difficulty, and increasing precision and value:
• Add a Risk Factor or Percentage to the critical paths
• A "bottom line" Monte Carlo or Range analysis
• Detailed Monte Carlo for each WBS element
• Expert Opinions in a Database with assessment
• Detailed Bayesian Network Analysis
There are several approaches to building a Risk Tolerant Performance Measurement Baseline:
• First, recognize where you are on the curve
• Then recognize there is value in moving further up the curve
† Ron Coleman, Litton TASC, 33rd ADoDCAS, Williamsburg, VA
8. Risk is Different from Uncertainty ‒ Knowing this Difference is Critical to Success
Risk stems from known probability distributions. Uncertainty stems from unknown probability distributions.
• Cost estimating methodology risk resulting from improper models of cost
• Cost factors such as inflation, labor rates, labor rate burdens, etc.
• Configuration risk (variation in the technical inputs)
• Schedule and technical risk coupling
• Correlation between risk distributions
• Requirements change impacts
• Budget perturbations
• Re-work and re-test phenomena
• Contractual arrangements (contract type, prime/sub relationships, etc.)
• Potential for disaster (labor troubles, shuttle loss, satellite "falls over", war, hurricanes, etc.)
• Probability that if a discrete event occurs it will invoke a project delay
9. Schedule Risk Management … Risk Based Planning
Schedule Risk Management:
• Seeks to anticipate and address uncertainties that threaten the goals and timetables of a project
• Recognizes unmitigated risks lead rapidly to delays in delivery dates and budget overages that undermine confidence in the schedule and in the project manager
• Is process oriented, guided by DOE G 413.3-7
• Accepts a certain level of risk; regular and rigorous risk analysis and risk management techniques serve to defuse problems before they arise
• Defines an Integrated Master Plan that reflects the development phases and the hierarchical structure of the system
10. A sample Risk Management System at NASA's Johnson Space Center
[Figure: sample risk management system diagram]
11. Connecting the Dots, Again
[Diagram annotations, restated as a list:]
• SOW: Named deliverables defined in the WBS
• WBS: The products, and the processes that produce them, in a "well structured" decomposition
• Cost: BCWS at the Work Package, rolled to the Control Account
• Schedule: Contains all the Work Packages, BCWS, and risk mitigation plans, and rolls to the Integrated Master Plan to measure increasing maturity
• TPM: TPMs attached to each critical deliverable in the WBS, identified in each Work Package in the IMS, and used to assess maturity in the IMP
• Risk: Technical and programmatic risks connected to the WBS and IMS
• PMB: The Performance Measurement Baseline these elements roll into
14. Cost does not have a linear relationship with schedule
Basic Principles of Probabilistic Cost
15. Keys to Cost Estimating Success
• Start with guidance on cost estimating.
• Tailor the guidance to fit the problem domain.
• Verify the processes work and add value.
• Improve the fidelity of estimates with feedback.
• Adjust estimating parameters to match actuals.
16. Basic Principles with Probabilistic Cost Estimating Relationships (CER)
• Cost estimates involve many CERs
  – Each of these CERs has uncertainty (standard error)
  – CER input variables have uncertainty (technical uncertainty)
• Combine CER uncertainty with technical uncertainty for the many CERs in an estimate
  – Usually cannot be done arithmetically; must use simulation to roll up costs derived from Monte Carlo samples
    · Add and multiply probability distributions rather than numbers
    · Statistically combine many uncertain, or randomly varying, numbers
  – Monte Carlo simulation
    · Take a random sample from each CER and input parameter, add and multiply as necessary, then record total system cost as a single sample
    · Repeat the procedure thousands of times to develop a frequency histogram of the total system cost samples
    · This becomes the probability distribution of total system cost
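The Monte Carlo roll-up described above can be sketched in a few lines. This is a minimal illustration with invented CER coefficients and distributions, not a model of any real program: each of three hypothetical WBS elements draws its technical input from a triangular distribution and its CER modeling error from a normal distribution, and the per-trial sums form the total-cost distribution.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # number of Monte Carlo trials

def cer_cost(a, b, c, x_low, x_mode, x_high, se_pct):
    """One WBS element: Cost = a + b*X**c, with technical and modeling uncertainty."""
    x = rng.triangular(x_low, x_mode, x_high, N)   # technical uncertainty in the driver X
    err = rng.normal(1.0, se_pct, N)               # CER modeling uncertainty (standard error)
    return (a + b * x**c) * err

# Three hypothetical WBS elements, summed trial-by-trial (illustrative numbers only).
total = (cer_cost(a=5, b=0.8, c=1.1, x_low=90, x_mode=100, x_high=130, se_pct=0.15)
         + cer_cost(a=2, b=1.5, c=0.9, x_low=50, x_mode=60, x_high=80, se_pct=0.10)
         + cer_cost(a=1, b=0.3, c=1.2, x_low=35, x_mode=40, x_high=55, se_pct=0.20))

# The array `total` is the frequency histogram / probability distribution of system cost.
print(f"mean = {total.mean():.1f}   p80 = {np.percentile(total, 80):.1f}")
```

Note that the mean and upper percentiles come from the sampled distribution, not from adding the elements' point estimates.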
17. The Cost Probability Distributions as a function of the weighted cost drivers
[Figure: cost ($) vs. cost driver (weight). A cost estimating relationship of the form Cost = a + bX^c is fit through historical data points, with standard percent error bounds. Technical uncertainty in the cost driver combines with cost-modeling uncertainty to produce the combined distribution around the cost estimate.]
18. Basic Principles of connecting cost models with the IMS involve three steps
• Step 1: Define the "likely-to-be" program
  – Using deterministic inputs from the Independent Technical Assessment (ITA)
• Step 2: Quantify the probability distributions describing the modeling uncertainty of all CERs, cost factors, and other estimating methods
  – Specifically, the type of distribution (normal, triangular, lognormal, beta, etc.)
  – The mean and variance of the distribution
• Step 3: Quantify the correlation between all WBS elements that are estimated using CERs and other methods
  – If unknown, assess whether there is No, Mild, or High correlation, for example:
    · None: r = 0, Mild: r = ±0.2, High: r = ±0.6
  – Correlation affects the overall cost variance
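The effect of correlation on total-cost variance in Step 3 is easy to demonstrate. The sketch below uses two hypothetical WBS elements with invented means and standard deviations, draws them from a bivariate normal at the r values suggested above, and measures the spread of their sum.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 50_000
sigma = 10.0  # assumed std dev of each of two WBS element costs

def total_std(r):
    """Std dev of the sum of two element costs with correlation r."""
    cov = [[sigma**2, r * sigma**2],
           [r * sigma**2, sigma**2]]
    costs = rng.multivariate_normal([100.0, 100.0], cov, N)
    return costs.sum(axis=1).std()

for r in (0.0, 0.2, 0.6):  # None, Mild, High
    print(f"r = {r:.1f}   std of total = {total_std(r):.1f}")
```

Analytically the standard deviation of the sum is sigma·sqrt(2(1+r)), so assuming independence (r = 0) when elements are actually correlated understates the overall cost variance.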
19. Basic Principles
• Step 4: Set up and run the cost estimate in a Monte Carlo framework (e.g., Crystal Ball, @RISK), resulting in a "baseline" estimate
  – This provides a probability distribution of the cost based on cost-estimating model uncertainty only
  – Report the MEAN as the baseline expected cost
• Step 5: Now incorporate technical uncertainty and discrete risks
  – Step 5a: Set up a new estimate which also contains any "discrete risk" events that are to be guarded against
    · Quantify appropriate modeling uncertainties and correlations, as in Steps 2 and 3, for these discrete risks
  – Step 5b: Define the probability distributions for all CER input variables
    · May also need to quantify correlation between CER input variables
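Incorporating a discrete risk event (Step 5a) amounts to mixing a Bernoulli trigger with an impact distribution. The sketch below is purely illustrative: the baseline distribution, the 30% occurrence probability, and the 15–40 FY$M impact range are invented numbers, not values from the talk.

```python
import numpy as np

rng = np.random.default_rng(11)
N = 20_000

# Baseline estimate: cost-model uncertainty only (hypothetical FY$M figures)
baseline = rng.normal(200.0, 20.0, N)

# Hypothetical discrete risk: 30% chance of a re-work/re-test event
occurs = rng.random(N) < 0.30                      # Bernoulli trigger per trial
impact = rng.triangular(15.0, 25.0, 40.0, N)       # cost impact if it occurs
risk_adjusted = baseline + occurs * impact

print(f"baseline mean      = {baseline.mean():.1f}")
print(f"risk-adjusted mean = {risk_adjusted.mean():.1f}")
```

Because the event only ever adds cost, the risk-adjusted distribution is both wider and shifted to the right of the baseline, which is exactly the pattern the next slides show.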
20. Basic Principles of connecting cost models with the IMS (continued)
• Step 6: Re-run the Monte Carlo simulation with random CER input variables and discrete risk events, resulting in a final "risk-adjusted" estimate
  – Results in a new risk-adjusted cost probability distribution
  – Wider and shifted to the right
[Figure: Baseline vs. Risk-Adjusted Estimates, likelihood vs. cost, 0–350 FY$M]
21. Baseline versus Risk Adjusted Cost Estimates Almost Always Show an Increase In Cost
[Figure: Baseline vs. Risk-Adjusted Estimates, likelihood vs. cost, 0–350 FY$M; the risk-adjusted distribution is wider and shifted to the right]
22. S-Curve for Cost Modeling
[Figure: Cumulative Distribution Function of total cost ‒ cumulative probability from 0% to 100% vs. cost from $60M to $200M, in FY00$M]
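The S-curve is simply the empirical CDF of the Monte Carlo cost samples: for any proposed budget, the fraction of trials at or below it is the confidence of completing within that budget. The sketch below uses an invented lognormal cost distribution centered near $110M to mimic the axis range in the figure; the numbers are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical total-cost samples (FY00$M), e.g., output of the Step 6 simulation
cost = rng.lognormal(mean=np.log(110.0), sigma=0.25, size=100_000)

# Reading the S-curve: confidence level at each candidate budget
for budget in (80, 100, 120, 140, 160):
    conf = (cost <= budget).mean()
    print(f"${budget}M funds the program with {conf:.0%} confidence")

# Reading it the other way: budget required for a target confidence
p80 = np.percentile(cost, 80)
```

Because risk-adjusted cost distributions are right-skewed, the budget needed for 80% confidence sits well above both the median and the mean, which is why funding to the point estimate almost always under-budgets the program.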