The document discusses using probabilistic risk analysis and Monte Carlo simulation to increase the probability of project success. It explains that modeling tasks as probability distributions rather than single point estimates allows for a more accurate assessment of overall schedule and budget risk. Capturing the uncertainty and dependencies between different tasks and cost/schedule drivers is important for generating reliable forecasts. The goal is to quantify confidence levels and establish appropriate margins to account for risks and uncertainties.
Introduction to Monte-Carlo Analysis for Software Development - Troy Magennis
Forecasting and managing software development project risks and uncertainty. Monte-Carlo analysis is the tool of choice for managing risk in many fields where risk is an inherent part of doing business. This paper examines how to use Monte-Carlo techniques to understand and leverage risk in software development projects and teams.
Earned Value Management Meets Big Data - Glen Alleman
The Earned Value Management System (EVMS) maintains period-by-period data in its underlying databases. The contents of the Earned Value repository can be considered BIG DATA, characterized by three attributes: 1) Volume: large amounts of data; 2) Variety: data comes from different sources, including traditional databases, documents, and complex records; 3) Velocity: the content is continually being updated by absorbing other data collections, through previously archived data, and through streamed data from external sources.
With this time series information in the repository, analysis of trends, cost and schedule forecasts, and confidence levels of these performance estimates can be calculated using statistical analysis techniques enabled by the Autoregressive Integrated Moving Average (ARIMA) algorithm provided by the R programming system. ARIMA provides a statistically informed Estimate At Completion (EAC) and Estimate to Complete (ETC) to the program in ways not available using standard EVM calculations. Using ARIMA reveals underlying trends not available through standard EVM reporting calculations.
With ARIMA in place and additional data from risk, technical performance and the Work Breakdown Structure, Principal Component Analysis can be used to identify the drivers of unanticipated EAC.
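The deck describes doing this forecasting with ARIMA in R. As a minimal illustration of the same idea, here is a hedged Python sketch using statsmodels; the cumulative cost series, the ARIMA order, and the number of remaining periods are all illustrative assumptions, not values from the presentation.

```python
# Minimal sketch of an ARIMA-based Estimate At Completion (EAC).
# The monthly cumulative ACWP series and ARIMA order are illustrative only.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

acwp = np.array([1.2, 2.5, 3.9, 5.6, 7.1, 8.9, 10.4, 12.3, 14.5, 16.2])  # cumulative $M

model = ARIMA(acwp, order=(1, 1, 0))        # order chosen for illustration only
fit = model.fit()

remaining_periods = 6                        # assumed periods left to complete
forecast = fit.get_forecast(steps=remaining_periods)
eac = forecast.predicted_mean[-1]            # forecast cumulative cost at completion
ci_low, ci_high = forecast.conf_int(alpha=0.20)[-1]   # 80% interval on the EAC
etc = eac - acwp[-1]                         # Estimate To Complete

print(f"EAC ~ {eac:.1f} $M (80% interval {ci_low:.1f}..{ci_high:.1f} $M), ETC ~ {etc:.1f} $M")
```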
When contractually required, DoD acquisition contractors are obligated to submit IPMRs electronically in accordance with DID 81861. This data is necessary but not sufficient for successfully managing a program. This presentation is an overview of the Essential Views needed for that success.
Monte Carlo Simulation for Agile Development - Glen Alleman
Managing in the presence of uncertainty requires making decisions with models of that uncertainty. Monte Carlo Simulation and related approaches are the basis of making informed decisions in the presence of uncertainty.
Probabilistic Schedule and Cost Analysis - Glen Alleman
An overview of the probabilistic risk analysis processes that can be applied to a program. Although it may not appear to be a "simple" overview, this material is the tip of the iceberg of this complex topic.
Only schedule analysis has been addressed in detail here. The cost aspects of forecasting and simulation must be addressed as well to complete the connections between schedule and cost.
Probabilistic cost will be surveyed here, but an in-depth review is left for a later time.
Managing in the Presence of Uncertainty - Glen Alleman
Uncertainty is the source of risk. Uncertainty comes in two types, aleatory and epistemic. It is important to understand both and deal with both in distinct ways, in order to produce a credible risk handling strategy.
Making Agile Development Work in Government Contracting - Glen Alleman
Before any of the current "agile" development methods, Earned Value Management provided information for planning and controlling complex projects by measuring how much "value" was produced for a given cost in a period of time. One shortcoming of an agile development method is its inability to forecast the future cost and schedule of the project beyond the use of "yesterday's weather" metrics. These agile methods assume the delivered value, "velocity" in the case of XP, is compared with the estimated value; this is a simple comparison between budget and actual cost, resulting in a Cost Variance.
From WBS to Integrated Master Schedule - Glen Alleman
A step-by-step guide to increasing the Probability of Program Success, starting with the WBS, developing the Integrated Master Plan and Integrated Master Schedule, risk adjusting the IMS, and measuring progress to plan in units of measure meaningful to the decision makers.
Increasing the Probability of Program Success - Glen Alleman
Program Success starts and ends with Process. Along the way, people and tools are needed, but process is the foundation of program success. These processes start with the Concept of Operations, describing what Capabilities are needed by the stakeholder to accomplish the mission of the program. Assessment of progress to plan must be made in units of measure meaningful to the decision makers. Measures of Effectiveness are defined by the Government. Measures of Performance and Technical Performance Measures are defined by Industry.
Cost and schedule growth for federal programs is created by unrealistic technical performance expectations, unrealistic cost and schedule estimates, inadequate risk assessments, unanticipated technical issues, and poorly performed and ineffective risk management, all contributing to program technical and programmatic shortfalls.
Forecasting Cost and Schedule Performance - Glen Alleman
For credible decisions to be made, we need confidence intervals on all the numbers we use to make decisions.
These confidence intervals come from the underlying statistics and the related probabilities.
Statistical forecasting, using time series analysis of past performance, is mandatory for any credible discussion of project performance in the future.
This briefing is an overview of the probabilistic risk analysis processes that can be applied to our program. Although it may not appear to be a “simple” overview, this material is the tip of the iceberg of this complex topic.
Just schedule analysis has been addressed in detail here. The cost aspects of forecasting and simulation must be addressed as well to complete the connections between schedule and cost.
Probabilistic cost will be surveyed here, but an in depth review is for a later time.
Risk Management is essential for the success of any significant project. Information about key project cost, performance, and schedule attributes is often unknown until the project is underway.
Establishing Schedule Margin Using Monte Carlo Simulation - Glen Alleman
The first order goal is to develop a resource loaded, risk tolerant, Integrated Master Schedule, derived from the Integrated Master Plan that clearly shows the increasing maturity of the program's deliverables, through vertical and horizontal traceability to the program's requirements.
IS EARNED VALUE + AGILE A MATCH MADE IN HEAVEN?
Increasing the Probability of Program Success requires connecting the dots between EV and Agile Development.
Presented at The Nexus of Agile Software Development and Earned Value Management, OSD-PARCA, February 19-20, 2015, Institute for Defense Analysis, Alexandria, VA.
Managing Deployment of ERP Systems in the Publishing Domain - Glen Alleman
Managing the outcome of an ERP deployment is difficult at best. There are many obstacles to success, the least of which is the basic understanding that accepting an ERP system into a business is a significant disruptive event. This document describes the processes and activities involved in deploying ERP. The contributions of a consulting firm can significantly add to the probability of success. In the newspaper business domain, the successful deployment of an ERP system impacts not only the back office and financial operations, but also the editorial, advertising, and press operations. ERP is a mission-critical function of any modern newspaper and must be treated as such.
The role of Risk Assessment and Risk Management is to continuously Identify, Analyze, Plan, Track, Control, and Communicate the risks associated with a project.
Webster's definition of risk is the possibility of suffering a loss. Risk in itself is not bad. Risk is essential to progress, and failure is often a key part of learning. Managing risk is a key part of success.
This document describes the foundations for conducting a risk assessment of a large-scale system development project. Such a project will likely include the procurement of Commercial Off The Shelf (COTS) products as well as their integration with legacy systems.
Niwot Ridge
Information Technology Risk Management - Glen Alleman
The concept of managing the development or deployment of an Information Technology (IT) system using deterministic, linear, and causal analysis contains several pitfalls. As IT systems grow in complexity, the interaction between their components becomes non–linear and indeterminate, creating many opportunities for failure.
Delivering programs with less capability than promised, while exceeding the cost and planned durations, distorts decision making, contributes to increasing cost growth to other programs, undermines the Federal government’s credibility with taxpayers and contributes to the public’s negative support for these programs.
Performance-Based Project Management® is a deliverables-based approach to project success. Deliverables start with the needed capabilities that the project produces to meet the mission objectives or fulfill a business case.
These deliverables fulfill the requirements, assessed through Measures of Effectiveness and Measures of Performance.
Start with defining the deliverables that produce the capabilities needed for project success. Then what work is needed, the order of that work, and the defined outcomes of that work become obvious. Sequence that work, assign durations and resources, and you've generated the plans and schedule for success.
The Integrated Master Plan and Integrated Master Schedule - Glen Alleman
The Integrated Master Plan (IMP) and Integrated Master Schedule (IMS) provide a strategy for the incremental delivery of program outcomes through increasing maturity assessments with Measures of Effectiveness, Measures of Performance, Technical Performance Measures, and Key Performance Parameters.
These assessments assure the needed capabilities of the project are met at each assessment point, confirming physical percent complete as planned in the Integrated Master Plan.
Project-driven organizations require lifecycle management to successfully deliver value to those paying for the outcomes of the project effort. This involves processes and data for Executive processes, Enterprise Governance, Program Management Office activities, Applications that enable the delivery of value, and overarching processes and data.
The management of software development is fraught with risk: technical risk, market risk, requirements risk, and financial risk. This paper describes nine (9) key management principles for guiding the development of a software project. These principles are not original. They are taken directly from the work of Norm Brown, the founder and Executive Director of the Software Program Managers Network (SPMN).
Risk management is essential for any significant project. Information about key project cost, performance, and schedule attributes is often unknown until the project is underway.
Risk management is essential for the success of any significant project. Information about key project cost, performance, and schedule attributes is often unknown until the project is underway. Risks that can be identified early in the project but impact the project later are often termed "known unknowns." These risks can be mitigated, reduced, or retired with a comprehensive risk management process. For risks that are beyond the vision of the project team, a properly implemented risk management process can be used to rapidly quantify a risk's impact and provide sound plans for mitigating its effect.
Risk management is essential for the success of any significant project. Information about key project cost, performance, and schedule attributes is often unknown until the project is underway.
How Traditional Risk Reporting Has Let Us Down - Acumen
This white paper discusses risk reporting techniques and ways of interpreting risk analysis results that actually enable the project team to make pro-active changes in reducing their risk exposure.
The notion that risk can be programmed out of a project schedule is a false hope. However, you manage uncertainties by understanding the risk types they represent and addressing each in an appropriate manner
In this webinar, we’ll discuss the benefits to establishing a meaningful dollar value, and we’ll show examples of hourly cost reports within a GRC solution.
By adding this business context, you’ll add more consistency, transparency and meaning to your BCM program, allowing you to make more effective decisions and investments.
Measuring Risk Exposure through Risk Range Certainty - Acumen
A white paper on how to overcome the challenges of project risk exposure reporting. Introducing a new, more meaningful risk metric, Risk Range Certainty (RRC).
Rethinking Risk-Based Project Management in the Emerging IT Initiatives - Inflectra
The pressure to deliver faster to the market has never been more insistent and pervasive than today’s business environment. The Agile world of iterative and incremental delivery has enabled great advances in terms of delivery speed; however, the lack of an integrated risk framework is creating challenges in terms of matching speed with quality. On the one hand, the standards-setting organizations such as the Project Management Institute (PMI) have updated their book of knowledge (PMBOK v7) to move away from highly prescriptive processes to lean thinking. On the other hand, Agile standards themselves have started to emerge, recognizing the need for some prescriptive guidelines on coming up with release and iteration goals. Struggling in between this continuum are the innovative technology projects that wonder how “creativity can be timeboxed” to deliver value!
While the impact of leadership to form the team and of the organizational culture to embrace continuous learning is unquestionable, it is important to realize that the areas of strategy, leadership, and culture are not substitutes for the lack of risk-based project thinking. When delivering IT applications that contain inherent conceptual, technical, and compliance risks, a more systematic approach is needed. In this presentation, you will hear about the emerging space of IT initiatives that are impacted by such risks and the need to adopt risk-based frameworks in application lifecycle management. You will also see practical examples of how risk-based lifecycle management can be done in real time.
This presentation talks about how risks in a project are analyzed and quantified. The presentation also discusses benefits of quantification of risks and the various tools at our disposal to manage risks effectively through quantification.
Planning projects usually starts with tasks and milestones. The planner gathers this information from the participants: customers, engineers, subject matter experts. This information is usually arranged in the form of activities and milestones. PMBOK defines "project time management" in this manner. The activities are then sequenced according to the project's needs and mandatory dependencies.
Increasing the Probability of Project Success - Glen Alleman
Risk Management is essential for development and production programs. Information about key cost, performance and schedule attributes are often uncertain or unknown until late in the program.
Risk issues that can be identified early in the program, which may potentially impact the program, termed Known Unknowns, can be alleviated with good risk management. -- Effective Risk Management 2nd Edition, Page 1, Edmund Conrow, American Institute of Aeronautics and Astronautics, 2003
Cost and schedule growth for complex projects is created when unrealistic technical performance expectations, unrealistic cost and schedule estimates, inadequate risk assessments, unanticipated technical issues, and poorly performed and ineffective risk management contribute to project technical and programmatic shortfalls.
From Principles to Strategies for Systems Engineering - Glen Alleman
From principles to strategies: how to apply the Principles, Practices, and Processes of Systems Engineering to solve complex technical, operational, and organizational problems.
Building a Credible Performance Measurement Baseline - Glen Alleman
Establishing a credible Performance Measurement Baseline, with a risk-adjusted Integrated Master Plan and Integrated Master Schedule, starts with the WBS and connects Technical Measures of progress to Earned Value.
Capabilities-Based Planning defines the capabilities needed to accomplish a mission or fulfill a business strategy.
Only when capabilities are defined can we start requirements elicitation.
Start with the development of a Rough Order of Magnitude (ROM) estimate of work and duration, create the Product Roadmap and Release Plan and the Product and Sprint Backlogs, execute and status the Sprint, and inform the Earned Value Management System using Physical Percent Complete of progress to plan.
Program Management Office, Lean Software Development, and Six Sigma - Glen Alleman
Successfully combining a PMO, Agile, and Lean Six Sigma starts with understanding what benefit each paradigm brings to the table. Architecting a solution for the enterprise requires assembling a "system" of processes, people, and principles, all sharing the goal of business improvement.
This resource document describes the Program Governance Road map for product development, deployment, and sustainment of products and services in compliance with CMS guidance, ITIL IT management, CMMI best practices, and other guidance to assure high quality software is deployed for sustained operational success in mission critical domains.
Risk management using risk+ (v5)
1. INCREASING THE PROBABILITY OF PROGRAM SUCCESS USING RISK+
Glen B. Alleman, Niwot Ridge LLC
A workshop on the principles and practices of Risk+ and increasing the Probability of Program Success
2. A Warning
We're going to cover a lot of material in 3 hours.
4. Douglas Adams, Hitchhiker's Guide to the Galaxy
5. MOTIVATION?
Your motivation? Your motivation is your pay packet on Friday. Now get on with it.
– Noel Coward, English actor, dramatist, and songwriter (1899-1973)
6. We have to know the underlying statistical behavior of the processes driving the project
This means cost, schedule, and technical performance measures with probabilistic models.
We need to know how these three statistical drivers are coupled:
- What drives what?
- What are the multipliers between each random variable?
9. The IMS is a collection of probabilistic processes all coupled together
10. What does this really mean?
In building a risk tolerant IMS, we're interested in the probability of a successful outcome:
"What is the probability of making a desired completion date?"
But the underlying statistics of the tasks influence this probability.
The statistics of the tasks, their arrangement in a network of tasks, and their correlation define how this probability-based estimate is developed.
11. There are real problems with those pesky Unknowns that get in the way of progress
Imprint of a bird on our west-facing, second-story family room window on a bright afternoon. The bird survived.
12. The "units of measure" of Risk
These classifications can be used to avoid asking the "3 point" question for each task.
Anchoring and Adjustment† of all estimating processes produces a bias. Knowing this is necessary for credible estimates.

Classification                                      Uncertainty     Overrun
1  Routine, been done before                        Low             0% to 2%
2  Routine, but possible difficulties               Medium to Low   2% to 5%
3  Development, with little technical difficulty    Medium          5% to 10%
4  Development, but some technical difficulty       Medium High     10% to 15%
5  Significant effort, technical challenge          High            15% to 25%
6  No experience in this area                       Very High       25% to 50%

† Tversky and Kahneman, Anchoring and Adjustment
13. We're looking for knowledge of what is going to happen in the future, with a known level of confidence
Harvard main library
15. What is Monte Carlo Simulation?
With some principles behind us, let's see how to use Risk+ to address the problem of forecasting the future of schedule and cost performance.
16. A Quick Look At Monte Carlo
Georges-Louis Leclerc, Comte de Buffon, asked what was the probability that a dropped needle would fall across one of the lines, marked here in green.
That outcome will occur only if A ≤ l·sin(θ), where A is the distance from the needle's lower end to the next line, l is the needle's length, and θ is the angle the needle makes with the lines.
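Buffon's needle is itself a tidy Monte Carlo exercise. The sketch below, a minimal illustration not taken from the deck, samples the needle position and angle directly, assuming the needle length l is no greater than the line spacing d, so the theoretical crossing probability is 2l/(πd).

```python
# Buffon's needle by Monte Carlo: a sketch, assuming needle length l <= line spacing d.
import numpy as np

rng = np.random.default_rng(42)
l, d, n = 1.0, 1.0, 200_000

a = rng.uniform(0.0, d, n)             # distance from the needle's lower end to the next line
theta = rng.uniform(0.0, np.pi, n)     # angle between the needle and the lines
crossings = a <= l * np.sin(theta)     # the crossing condition from the slide

p = crossings.mean()
print(f"P(cross) ~ {p:.4f}  (theory 2l/(pi*d) = {2*l/(np.pi*d):.4f})")
print(f"pi estimate ~ {2*l/(p*d):.4f}")
```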
18. Monte Carlo Simulation
Monte Carlo Simulation is named after the city of casinos in Monaco, on the French Riviera.
Monte Carlo:
- Examines all paths, not just the critical path.
- Provides an accurate (true) estimate of completion: overall duration distribution and confidence interval (accuracy range).
- Sensitivity analysis of interacting tasks.
- Varied activity distribution types: not restricted to a single distribution.
- Schedule logic can include branching, both probabilistic and conditional.
- When resource-loaded schedules are used, provides an integrated cost and schedule probabilistic model.
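To make the "examine all paths" point concrete, here is a hedged sketch of the core idea, not Risk+ itself: a three-task toy network with triangular durations, where the simulation takes the longest path on every iteration and reports confidence levels. The network, task names, and durations are invented for illustration.

```python
# A toy Monte Carlo schedule simulation: a sketch, not Risk+ itself.
# Network and triangular (min, most likely, max) durations are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Three tasks: A then B in series, with C running in parallel; the project ends
# when the longer of the two paths finishes.
dur_a = rng.triangular(8, 10, 16, n)
dur_b = rng.triangular(4, 5, 9, n)
dur_c = rng.triangular(11, 14, 22, n)

finish = np.maximum(dur_a + dur_b, dur_c)   # examine all paths, not just one

for p in (0.50, 0.80, 0.95):
    print(f"{int(p*100)}% confidence finish: {np.quantile(finish, p):.1f} days")
print(f"Deterministic duration from most likely values: {max(10 + 5, 14):.1f} days")
```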
20. What Are We Really After?
We need to answer the question: what is the confidence we will complete "on or before" a date and "at or below" a cost?
This is the question that should be asked and answered on a periodic basis.
We need to have Schedule and Cost margin to protect the deliverables and our Budget At Completion.
21. Here is some advice on how to depict this margin and where to place it.
No matter how we show and manage these two elements in the IMS, if we don't have margin we are late and over budget before we start.
http://www.ndia.org/Divisions/Divisions/Procurement/Documents/PMSCommittee/CommitteeDocuments/WhitePapers/NDIAScheduleMarginWhitePaperFinal-2010(2).pdf
22. Confidence levels for margin change as the program proceeds
As the program proceeds we want to have:
- Increased accuracy
- Reduced schedule risk
- Increasing visual confirmation that success can be reached
[Chart: Current Estimate Confidence over time]
23. Our REAL goal here is to Manage Margin using probabilistic models
- Programmatic Margin is added between the Development, Production, and Integration & Test phases.
- Risk Margin is added to the IMS where risk alternatives are identified.
- Margin that is not used in the IMS for risk mitigation will be moved to the next sequence of risk alternatives.
- This enables us to buy back schedule margin for activities further downstream.
- This enables us to control the downstream ripple effect of schedule shifts on Margin activities.
[Diagram: Plan A shows 5 days of margin at the first identified risk alternative in the IMS; Plan B shows 3 days of margin used, with activities shifted left 2 days; the 2 days will be added to the margin task at the second identified risk alternative to bring the schedule back on track; Duration of Plan B < Plan A + Margin.]
24. Sensitivity Analysis
The schedule sensitivity of a task measures the closeness with which change in the task duration matches change in the project duration over the simulation.
This closeness is the correlation between changes in individual activities and their impacts on other activities.
A task with high schedule sensitivity is more likely to be a major driver of the project duration than a lower ranked task.
(Models of the Schedule)
25. Task Criticality Analysis
A measure of the frequency with which an activity in the project schedule is critical (Total Float = 0) in a simulation.
If a task is critical in 500 of the 1,000 iterations of the simulation, it has a Criticality Index of 0.5.
The higher the criticality index, the more certain it is that the task will always be critical in the project.
(Models of the Schedule)
26. Cruciality shows each task's tolerance to risk
Cruciality = Schedule Sensitivity x Criticality
Schedule Sensitivity can be statistically misleading: a task with high sensitivity may not be on or near the critical path, so a reduction in that task's duration may have little effect on the project duration.
Cruciality sharpens the analytical focus: it highlights critical or near-critical activities with high Schedule Sensitivity. These tasks are most likely to drive project duration.
(Models of the Schedule)
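The three measures on slides 24-26 can be computed directly from the simulation samples. The sketch below is a minimal, hedged illustration using the same kind of toy network as above; the task names, durations, and topology are assumptions, not the deck's example.

```python
# Sketch of schedule sensitivity, criticality, and cruciality for a toy network:
# A -> B in series, C in parallel; all values illustrative.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
dur = {
    "A": rng.triangular(8, 10, 16, n),
    "B": rng.triangular(4, 5, 9, n),
    "C": rng.triangular(11, 14, 22, n),
}
path_ab = dur["A"] + dur["B"]
finish = np.maximum(path_ab, dur["C"])

for name, d in dur.items():
    # Sensitivity: correlation between the task duration and the project duration.
    sensitivity = np.corrcoef(d, finish)[0, 1]
    # Criticality: fraction of iterations in which the task sits on the critical path.
    on_critical = (path_ab >= dur["C"]) if name in ("A", "B") else (dur["C"] >= path_ab)
    criticality = on_critical.mean()
    # Cruciality combines the two (slide 26).
    cruciality = sensitivity * criticality
    print(f"{name}: sensitivity={sensitivity:.2f}  criticality={criticality:.2f}  "
          f"cruciality={cruciality:.2f}")
```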
27. Guiding the Risk Factor Process means weighting each level of risk
For tasks marked "Low" a reasonable approach is to score the maximum 10% greater than the minimum. The "Most Likely" is then scored as a geometric progression for the remaining categories with a common ratio of 1.5.
Tasks marked "Very High" are bounded at 200% of minimum. No viable project manager would let a task grow to three times the planned duration without intervention.
The geometric progression is somewhat arbitrary, but it should be used instead of a linear progression.

Level        Min   Most Likely   Max
Low          1.0   1.04          1.10
Low+         1.0   1.06          1.15
Moderate     1.0   1.09          1.24
Moderate+    1.0   1.14          1.36
High         1.0   1.20          1.55
High+        1.0   1.30          1.85
Very High    1.0   1.46          2.30
Very High+   1.0   1.68          3.00

(Examples of Monte Carlo)
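The Max column of the table above can be generated from the geometric progression the next slide names (ratio 1.534, starting from a 10% excess over the minimum). The sketch below reproduces that column; the rule used for the Most Likely column (40% of the excess) is my own illustrative assumption and only approximates the table's values at the higher levels.

```python
# Sketch of the geometric progression of risk-factor multipliers (slides 27-28).
# The ratio and the 10% starting excess follow the slides; the "Most Likely" rule
# (0.4 * excess) is an assumption for illustration and is not exact.
levels = ["Low", "Low+", "Moderate", "Moderate+", "High", "High+", "Very High", "Very High+"]
ratio = 1.534          # common ratio named on slide 28
excess = 0.10          # "Low" maximum is 10% above the minimum

print(f"{'Level':<12}{'Min':>6}{'Most Likely':>14}{'Max':>8}")
for level in levels:
    max_factor = min(1.0 + excess, 3.0)   # bounded: no task grows past 3x the minimum
    most_likely = 1.0 + 0.4 * excess      # assumed fraction of the max excess
    print(f"{level:<12}{1.0:>6.2f}{most_likely:>14.2f}{max_factor:>8.2f}")
    excess *= ratio
```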
28. Progressive Risk Factors
A geometric progression (ratio 1.534) of risk can be used.
The phrases associated with increasing risk have been shown at the Naval Research Laboratory to correlate with an engineer's "sense" of increasing risk.
(Examples of Monte Carlo)
29. Risk Factor Attributes
The "narrative" for each risk factor needs to be developed. Each description is dependent on:
- Discipline
- Program stage
- Complexity
- Historical data
- Current "risk state" of the program
This is currently missing from our efforts to quantify schedule and cost risk.
(Examples of Monte Carlo)
30. Accuracy
Given a specified final cost or project duration, what is the probability of achieving this cost or duration?
Frequentist approach: over many different projects, four out of five will cost less or be completed in less time than the specified cost or duration.
Bayesian approach: we would be willing to bet at 4 to 1 odds that the project will be under the 80% point in cost or duration.
Accuracy is needed to plan reserves.
Accuracy is needed when comparing competing proposals.
(What is the Purpose of Project Risk Analysis?)
31. Structured Thinking
All estimates will be in error to some degree of variance. Trying to quantify these errors will result in bounds too wide to be useful for decision making.
Risk analysis should be used to:
- Think about different aspects of the project
- Try to put numbers against probabilities and impacts
- Discuss with colleagues the different ideas and perceptions
Thinking things through carefully determines which elements of the programmatic and technical risk are represented in the IMS.
The process becomes more valuable than the numbers.
(What is the Purpose of Project Risk Analysis?)
32. To properly use Schedule Margin†
- Work must be represented in single units, either tasks or work packages.
- The overall schedule margin must be related to the variation of individual units of work.
- The importance of the units of work must be shared among all participants (ordinal ranking of work and its risk).
- The schedule must be reasonable in some units of measure shared by all the participants.
† "Protecting Earned Value Schedules with Schedule Margin," Newbold, Budd, and Budd, http://www.prochain.com/pm/articles/ProtectingEVSchedules.pdf
33. Let's Apply a Monte Carlo Simulation Tool
The Monte Carlo trolley, or FERMIAC, was invented by Enrico Fermi and constructed by Percy King. The drums on the trolley were set according to the material being traversed and a random choice between fast and slow neutrons. Another random digit was used to determine the direction of motion, and a third was selected to give the distance to the next collision. The trolley was then operated by moving it across a two-dimensional scale drawing of the nuclear device or reactor assembly being studied.
The trolley drew a path as it rolled, stopping for changes in drum settings whenever a material boundary was crossed. This infant computer was used for about two years to determine, among other things, the change in neutron population with time in numerous types of nuclear systems.
35. A Small Diversion: Most Likely Isn't Likely to be the Most Likely
When we say "most likely," what do we think this actually means?
If you pick the wrong meaning, your Monte Carlo model will be seriously flawed.
36. The problem with "Most Likelies"
For each activity, the "best" estimate is...
- The "most likely" duration, the mode of the distribution of durations? (The mode is the value that appears most often.)
- Its 50th percentile duration, the median of the distribution? (The median is the value in the middle of all the values.)
- Its expected duration, the mean of the distribution? (The mean is the average of all the values.)
These definitions lead to values that are almost always different from each other.
Rolling up the "best" estimates of completion is almost never one of these.
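A quick way to see that the three candidates disagree is to sample a right-skewed duration distribution and compute each one. The sketch below uses an illustrative triangular distribution of my own choosing, not a task from the deck.

```python
# Mode, median, and mean of a right-skewed triangular duration: a sketch with
# illustrative parameters, showing the three "most likely" candidates differ.
import numpy as np

rng = np.random.default_rng(7)
lo, mode, hi = 10, 12, 30                    # optimistic, most likely, pessimistic days
samples = rng.triangular(lo, mode, hi, 100_000)

print(f"mode={mode}  median~{np.median(samples):.1f}  mean~{samples.mean():.1f}")
# For a triangle the exact mean is (lo + mode + hi) / 3 = 17.3 days, well above the mode.
```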
37. Durations are Probability Estimates, not Single Point Values
We know this because:
- The "best" estimate is not the only possible estimate, so other estimates must be considered "worse."
- Common use of the phrase "most likely duration" assumes that other possible durations are "less likely."
- "Mean," "median," and "mode" are statistical terms characteristic of probability distributions.
This implies activity durations have probability distributions: they are random variables drawn from a probability distribution function (pdf).
The "actual" project duration is an uncertain quantity that can be modeled as a sum of random variables. The pdf may be known or unknown.
38. 3-Task Most Likely ≠ Project Most Likely
PERT assumes the probability distribution of the project times is the same as that of the tasks on the critical path.
Because other paths can become critical paths, PERT consistently underestimates the project completion time.
(Managing Uncertainty in the IMS)
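The merge-bias effect behind this slide is easy to reproduce: when several near-identical paths feed a single milestone, the project finishes when the last one does, so its distribution sits well to the right of any single path. The sketch below is illustrative; the number of paths and the triangular durations are assumptions.

```python
# Merge bias sketch: five identical parallel paths, each with a triangular duration.
# PERT treats the project like a single path; Monte Carlo takes the max over paths.
import numpy as np

rng = np.random.default_rng(11)
n_paths, n = 5, 50_000
lo, ml, hi = 20, 25, 40

paths = rng.triangular(lo, ml, hi, size=(n, n_paths))
project = paths.max(axis=1)                   # the project finishes when the last path does

pert_mean = (lo + 4 * ml + hi) / 6            # single-path PERT expected duration
print(f"PERT single-path estimate : {pert_mean:.1f} days")
print(f"Monte Carlo mean finish   : {project.mean():.1f} days")
print(f"Monte Carlo 80% confidence: {np.quantile(project, 0.8):.1f} days")
```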
39. The Probability Distribution Function is the lifeblood of good planning
Probability of occurrence as a function of the number of samples: "the number of times a task duration appears in a Monte Carlo simulation."
(Managing Uncertainty in the IMS)
40. Remember the quote about statistics
"Lies, Damn Lies, and Statistics" – Benjamin Disraeli
But we know better. We know that any estimate without a variance is not trustworthy. We know that the variances have to be calibrated from past performance to be credible.
42. A "Real World" Schedule Analysis
"One should expect that the expected can be prevented, but the unexpected should have been expected." – Augustine's Law XLV
This is a must-own book for everyone in our business. It defines fundamental Laws of program and business management, which are many times ignored, like the one above.
43. Our Starting Point
Risk+ installed. Let's define the needed fields. These are used by Risk+ to hold information and run the application.
If there are conflicts, you can make changes in Risk+ to work around your fields.
44. A Simple IMS
By simple, we mean serially cascaded work efforts.
45. Initial Field Usage
- Minimum Remaining Duration: the shortest duration in which you'd expect this task to complete.
- Most Likely Remaining Duration: the ML (Mode) of the duration.
- Maximum Remaining Duration: the longest duration in which you'd expect this task to complete.
- Task Reporting ID: the tasks we want to watch.
46. Define a View and Table for Risk+
Start with the Gantt View and Entry Table. Set up both to match the Risk+ field usage.
Use the defaults if there are no field conflicts.
48. Let's actually do something
Initialize the Most Likely. This sets the Most Likely duration to the same value that is in the "Duration" field of your IMS.
The "planned duration" now becomes the ML duration.
If this "planned duration" is bogus, then your model will be as well. Choose wisely.
49. Now the ML = DURATION step
All the DURATION values have been moved to the ML field. But remember our discussion of the MLs: choose them carefully.
Next we'll set the upper and lower limits of that ML value using risk factors. OK, 3-point estimates if you have to.
50. Let's do this the simple way
Let's pick MEDIUM confidence. MEDIUM means -25% / +25% and a NORMAL (Gaussian) curve.
51. Let's have Risk+ do something for us
Enter a "1" in the RPT field (Number 1). This marks that ROW in the schedule as a work activity we want to see the Monte Carlo output for.
52. Now we're ready to run
The RISK ANALYSIS command starts the process going.
Let's run 200 iterations and look at the DURATION ANALYSIS for the activities we are watching.
53. This is nice, but what is Risk+ actually doing?
Risk+ is picking a random number from under the normal distribution within the range of the least remaining and most remaining durations. This is not some ordinary random number; it is chosen through an algorithm called the Latin Hypercube (more on that later).
Risk+ then plugs that number into the "real" DURATION field and does that for all the DURATIONs in the schedule. Then the F9 key is pressed and the date is recorded for the finish of UID 41.
This is done 200 times, and a histogram of all the dates that appeared for those 200 times is recorded.
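The loop the slide describes can be sketched in a few lines. This is a hedged illustration of the idea, not Risk+ internals: it uses plain random sampling where Risk+ uses Latin Hypercube, the task durations are invented, and the schedule is a simple serial chain so the finish is just the sum of durations.

```python
# Sketch of the loop slide 53 describes: sample each task's duration, recompute the
# schedule, record the finish, repeat, and histogram the results. Plain random
# sampling is used here; Risk+ draws its samples with Latin Hypercube instead.
import numpy as np

rng = np.random.default_rng(3)
iterations = 200

# Illustrative serial tasks (most likely durations in days), bounded at +/- 25%
# as in the MEDIUM setting shown earlier.
most_likely = np.array([10.0, 15.0, 8.0, 12.0])

finishes = []
for _ in range(iterations):
    # Normal draw around the most likely value (spread is an assumption), then
    # truncated to the least/most remaining duration range.
    draws = rng.normal(most_likely, 0.25 * most_likely / 3)
    draws = np.clip(draws, 0.75 * most_likely, 1.25 * most_likely)
    finishes.append(draws.sum())              # serial network: finish = sum of durations

hist, edges = np.histogram(finishes, bins=10)
for count, left, right in zip(hist, edges[:-1], edges[1:]):
    print(f"{left:5.1f}..{right:5.1f} days: {'#' * count}")
```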
54. And We Get
Risk+ duration analysis output for Unique ID 19, "End Work Package 3" (run 11/29/2011 4:32:17 PM): 500 samples, completion standard deviation 2.06 days, 95% confidence interval 0.18 days, each histogram bar representing 1 day, dates ranging from Fri 3/2/12 to Tue 3/20/12.

Completion Probability Table
Prob   Date          Prob   Date
0.05   Wed 3/7/12    0.55   Tue 3/13/12
0.10   Thu 3/8/12    0.60   Tue 3/13/12
0.15   Thu 3/8/12    0.65   Tue 3/13/12
0.20   Fri 3/9/12    0.70   Wed 3/14/12
0.25   Fri 3/9/12    0.75   Wed 3/14/12
0.30   Fri 3/9/12    0.80   Wed 3/14/12
0.35   Mon 3/12/12   0.85   Thu 3/15/12
0.40   Mon 3/12/12   0.90   Thu 3/15/12
0.45   Mon 3/12/12   0.95   Fri 3/16/12
0.50   Mon 3/12/12   1.00   Tue 3/20/12
55. Learning to Speak in Risk+
Risk+ shows us the probability of finishing "on or before" a date. It does NOT show the probability of success.
But even the "on or before" term is loaded with special meaning. It means that, for the 500 iterations of Risk+ using the upper and lower bounds of the duration, drawn from the probability density function (pdf) with the Normal (Gaussian) shape, 60% of the finish dates were recorded to be on or before 3/12/12.
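A completion-probability table like the one on slide 54 is simply the percentiles of the simulated finish dates. The sketch below shows that reading; the sampled dates are illustrative and do not reproduce the Risk+ run.

```python
# Sketch of how an "on or before" completion-probability table can be read off
# the simulated finish dates; the sampled dates here are illustrative.
from datetime import date, timedelta
import numpy as np

rng = np.random.default_rng(5)
start = date(2012, 3, 1)
finish_days = rng.triangular(1, 11, 19, 500)        # 500 iterations, as in the slide

for p in np.arange(0.05, 1.0001, 0.05):
    days = np.quantile(finish_days, p)              # percentile of the sampled finishes
    d = start + timedelta(days=int(np.ceil(days)))
    print(f"P(finish on or before {d}) = {p:.2f}")
```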
56. Medium confidence for a large project
Risk+ duration analysis output for Unique ID 17, "(SA) Systems Requirements Completed" (run 11/30/2011 6:05:35 PM): 200 samples, completion standard deviation 4.49 days, 95% confidence interval 0.62 days, each histogram bar representing 2 days, dates ranging from Wed 5/2/12 to Mon 6/4/12.

Completion Probability Table
Prob   Date          Prob   Date
0.05   Fri 5/4/12    0.55   Thu 5/17/12
0.10   Wed 5/9/12    0.60   Thu 5/17/12
0.15   Thu 5/10/12   0.65   Fri 5/18/12
0.20   Fri 5/11/12   0.70   Mon 5/21/12
0.25   Mon 5/14/12   0.75   Mon 5/21/12
0.30   Mon 5/14/12   0.80   Tue 5/22/12
0.35   Tue 5/15/12   0.85   Wed 5/23/12
0.40   Tue 5/15/12   0.90   Thu 5/24/12
0.45   Wed 5/16/12   0.95   Mon 5/28/12
0.50   Wed 5/16/12   1.00   Mon 6/4/12
57. Low confidence for a large project
Risk+ duration analysis output for Unique ID 17, "(SA) Systems Requirements Completed" (run 11/30/2011 10:30:05 PM): 200 samples, completion standard deviation 9.14 days, 95% confidence interval 1.26 days, each histogram bar representing 3 days, dates ranging from Tue 4/24/12 to Wed 6/27/12.

Completion Probability Table
Prob   Date          Prob   Date
0.05   Thu 5/3/12    0.55   Fri 5/25/12
0.10   Tue 5/8/12    0.60   Mon 5/28/12
0.15   Wed 5/9/12    0.65   Wed 5/30/12
0.20   Mon 5/14/12   0.70   Wed 5/30/12
0.25   Tue 5/15/12   0.75   Fri 6/1/12
0.30   Thu 5/17/12   0.80   Mon 6/4/12
0.35   Fri 5/18/12   0.85   Wed 6/6/12
0.40   Mon 5/21/12   0.90   Fri 6/8/12
0.45   Wed 5/23/12   0.95   Thu 6/14/12
0.50   Wed 5/23/12   1.00   Wed 6/27/12
60. Basic Principles of Probabilistic Cost
Now that the schedule can be produced using probabilistic methods, it's time to talk about the cost. Cost does not have a linear relationship with schedule, unfortunately.
(Basic Principles of Probabilistic Cost)
61. Basic Principles of Probabilistic Cost Estimating are coupled with scheduling
- Cost estimates usually involve many CERs (cost estimating relationships).
- Each of these CERs has uncertainty (standard error).
- CER input variables have uncertainty (technical uncertainty).
- We must combine CER uncertainty with technical uncertainty for many CERs in an estimate. Usually this cannot be done arithmetically; we must use simulation to roll up costs derived from Monte Carlo samples.
- Add and multiply probability distributions rather than numbers: statistically combining many uncertain, or randomly varying, numbers.
- Monte Carlo simulation: take a random sample from each CER and input parameter, add and multiply as necessary, then record the total system cost as a single sample. Repeat the procedure thousands of times to develop a frequency histogram of the total system cost samples. This becomes the probability distribution of total system cost.
(Basic Principles of Probabilistic Cost)
62. The Cost Probability Distributions as a function of the weighted cost drivers
[Chart: cost ($) versus cost driver (weight). Historical data points scatter around the cost estimating relationship Cost = a + b·X^c, with standard percent error bounds. Technical uncertainty in the cost driver combines with cost modeling uncertainty to produce the combined cost estimate distribution.]
(Basic Principles of Probabilistic Cost)
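Here is a hedged sketch of the roll-up slides 61-62 describe for a single CER of the form Cost = a + b·X^c, combining technical uncertainty in the driver X with the CER's own standard error by Monte Carlo. All coefficients, distributions, and units are illustrative assumptions.

```python
# Sketch of rolling up a cost estimating relationship (CER) Cost = a + b * X**c by
# Monte Carlo, combining technical uncertainty in the driver X (weight) with the
# CER's standard error. Coefficients and distributions are illustrative.
import numpy as np

rng = np.random.default_rng(13)
n = 20_000

a, b, c = 5.0, 2.0, 0.8                         # CER coefficients (illustrative)
weight = rng.triangular(90, 100, 140, n)        # technical uncertainty in the cost driver
cer_error = rng.normal(1.0, 0.15, n)            # multiplicative CER standard error

cost = (a + b * weight**c) * cer_error          # $M, one total-cost sample per iteration

print(f"Point estimate at X=100 : {a + b * 100**c:.1f} $M")
print(f"Mean of simulated costs : {cost.mean():.1f} $M")
print(f"80th percentile         : {np.quantile(cost, 0.8):.1f} $M")
```

Because the driver distribution is right-skewed, the simulated mean lands above the deterministic point estimate, which is the behavior the next slide describes for the risk-adjusted estimate.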
63. The Risk Adjusted Cost Estimate Connected To The IMS
In the risk-adjusted cost estimate, we now combine discrete risk events and the uncertainty of the input distributions with the uncertainty of the CERs.
Since the input distributions tend to be right-skewed, the expected cost tends to be larger than the baseline estimate. In addition, the risk-adjusted cost distribution tends to be wider than the baseline estimate.
The difference between the expected cost of the risk-adjusted estimate and the expected cost of the baseline estimate is, by definition, the amount of RISK dollars included in the risk-adjusted estimate.
(Basic Principles of Probabilistic Cost)
64. Baseline versus Risk Adjusted Cost Estimates Usually Show a Cost Increase
[Chart: likelihood versus FY$M, 0 to 350, comparing the baseline and risk-adjusted estimates.]
Baseline: Mean = $102.6M, Std Dev = $29.8M
Risk-Adjusted: Mean = $122.6M, Std Dev = $42.8M
With these values the RISK dollars in the risk-adjusted estimate are $122.6M - $102.6M = $20.0M.
(Basic Principles of Probabilistic Cost)
65. The S-Curve for Cost Modeling
[Chart: cumulative distribution function of cost, cumulative probability 0% to 100% versus FY00$M from $60 to $200. Baseline estimate mean $102.6M; risk-adjusted estimate mean $122.6M; the 50th percentile ($114.7M) and 80th percentile ($153.5M) are marked.]
(Basic Principles of Probabilistic Cost)
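The S-curve is just the empirical cumulative distribution of the simulated total-cost samples, and budget points are read off it as percentiles. A minimal sketch, with an illustrative lognormal cost distribution rather than the program's data:

```python
# Sketch of building the cost S-curve (empirical cumulative distribution) from
# simulated total-cost samples and reading off budget-setting percentiles.
import numpy as np

rng = np.random.default_rng(17)
cost = rng.lognormal(mean=np.log(115), sigma=0.30, size=20_000)   # illustrative $M samples

sorted_cost = np.sort(cost)
cdf = np.arange(1, len(sorted_cost) + 1) / len(sorted_cost)        # the S-curve itself

for p in (0.50, 0.80):
    budget = sorted_cost[np.searchsorted(cdf, p)]
    print(f"{int(p*100)}th percentile budget: {budget:.1f} $M")
```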
66. The Real Question Always Returns to "But How Much Does It Cost? Really?"
This is impossible to answer precisely. Decision-makers and cost analysts should always think of a cost estimate as a probability distribution, NOT as a deterministic number.
The best we can provide is the probability distribution. If we think we can be any more precise, we're fooling ourselves.
It is up to the decision-maker to decide where he/she wants to set the budget. The probability distribution provides a quantitative basis for making this determination:
- Low budget = high probability of overrun
- High budget = low probability of overrun
(Basic Principles of Probabilistic Cost)
68. Some More Parts to Using Risk+
Just having the pictures is necessary, but knowing what they mean is required.
Making changes to the IMS to increase the Probability of Program Success is the primary outcome of Monte Carlo Simulation.
69. Without Integrating $, Time, and TPM You're Driving in the Rearview Mirror

[Diagram: integration of cost ($), time (schedule), and Technical Performance Measures (TPM).]
71. Statistics of a Triangle Distribution

[Chart: triangle distribution with minimum = 1,000 hrs, mode = 2,000 hrs, maximum = 6,830 hrs; median = 3,415 hrs; mean = 3,879 hrs. 50% of all possible values lie below the median; this is the definition of the median.]
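A quick way to check such statistics is to sample the distribution directly. The sketch below uses the slide's minimum, mode, and maximum; the sampled mean and median will differ somewhat from the values printed on the slide, since they are computed from random draws.

```python
import numpy as np

rng = np.random.default_rng(seed=5)
# Parameters taken from the slide: min = 1,000 hrs, mode = 2,000 hrs, max = 6,830 hrs.
samples = rng.triangular(1_000, 2_000, 6_830, size=100_000)

print(f"Mean   = {samples.mean():,.0f} hrs")
print(f"Median = {np.median(samples):,.0f} hrs")   # half of all sampled values fall below this
print("Mode (input) = 2,000 hrs")
```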
72. TPM Trends & Responses Directly Impact Risk and Credibility of the IMS

[Chart: vehicle-weight Technical Performance Measure trend across program milestones (CA, SFR, SRR, PDR, CDR, TRR). The estimate matures from a ROM in the proposal through the design model, detailed design model, bench-scale model measurement, and prototype measurement to the flight first article; annotated values include 28 kg, 26 kg, 25 kg, and 23 kg. Dr. Falk chart, modified.]
73. Not a Mitigation Plan

Mitigation is too late; by then the risk has turned into an issue. The money has been spent, and the time has passed.
74. Ordinal versus Cardinal

Ordinal: A variable is ordinally measurable if ranking is possible for values of the variable. For example, a gold medal reflects superior performance to a silver or bronze medal in the Olympics, or you may prefer French toast to waffles, and waffles to oat bran muffins. All variables that are cardinally measurable are also ordinally measurable, although the reverse may not be true.

Cardinal: A variable is cardinally measurable if a given interval between measures has a consistent meaning, i.e., if the measure corresponds to points along a straight line. For example, height, output, and income are cardinally measurable.
75. Correcting Ordinal Risk Scales

Classify and calibrate risk ranking in units meaningful to the decision makers.
A risk rank of 1, 2, 3, 4 is NOT sufficient; the risk rank must have a measurable value connected to the actual behavior of the system being assessed.
Calibration coefficients between ordinal probability and consequences should also be used.
Ordinal analysis assumes an ordering of the risks; cardinal analysis provides objective measures of probability and consequential impact.
76. Cardinal Measures of Probability of Occurrence and Consequential Impact

Never multiply likelihood by outcome. They are not "numbers," they are probability distributions; only convolution is possible.

Likelihood scale:
Level E, Near Certainty: E ≥ 90%
Level D, Highly Likely: 74% ≤ D ≤ 90%
Level C, Likely: 40% ≤ C ≤ 60%
Level B, Low Likelihood: 20% ≤ B ≤ 40%
Level A, Not Likely: A ≤ 20%

Consequence scale:
Level A. Technical performance: minimal or no consequence to technical performance. Schedule: minimal or no impact. Cost: minimal or no impact.
Level B. Technical performance: minor reduction in technical performance or supportability. Schedule: able to meet key dates. Cost: budget increase or unit production cost increase < 1% of budget.
Level C. Technical performance: moderate reduction in technical performance or supportability with limited impact on program objectives. Schedule: minor schedule slip; able to meet key milestones with no schedule float. Cost: budget increase or unit production cost increase < 5% of budget.
Level D. Technical performance: significant degradation in technical performance or major shortfall in supportability. Schedule: program critical path affected. Cost: budget increase or unit production cost increase < 10% of budget.
Level E. Technical performance: severe degradation in technical performance. Schedule: cannot meet key program milestones; slip > X months. Cost: exceeds budget increase or unit production cost threshold.
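As one illustration of "convolution, not multiplication": instead of multiplying a likelihood score by a consequence score, each Monte Carlo trial can draw an occurrence from the likelihood band and an impact from the consequence distribution, producing a distribution of risk exposure. The band width, impact values, and dollar units below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(seed=6)
N = 50_000

# Level C ("Likely") maps to a probability of occurrence between 40% and 60%;
# the consequence is a cost-impact distribution, not a single score (values invented).
p_occur = rng.uniform(0.40, 0.60, N)          # uncertainty in the likelihood itself
occurs = rng.random(N) < p_occur              # Bernoulli occurrence per trial
impact = rng.triangular(1.0, 3.0, 10.0, N)    # consequence distribution ($M) if it occurs

exposure = np.where(occurs, impact, 0.0)      # the convolved risk-exposure distribution
print(f"Mean exposure = ${exposure.mean():.2f}M, "
      f"80th percentile = ${np.percentile(exposure, 80):.2f}M")
```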
77. Example of an Ordinal Probability Complexity Scale†

Scale Level E: Greater than 20% of the interface design has been altered because of modifications to the ICDs.
Scale Level D: Greater than 15% but less than 20% of the interface design has been altered because of modifications to the ICDs.
Scale Level C: Greater than 10% but less than 15% of the interface design has been altered because of modifications to the ICDs.
Scale Level B: At least 5% but less than 10% of the interface design has been altered because of modifications to the ICDs.
Scale Level A: Less than 5% of the interface design has been altered because of modifications to the ICDs.

† Effective Risk Management: Some Keys to Success, Edmund Conrow, AIAA Press, 2003
78. A "Real" Risk Ordinal Ranking Table

Risk Rank A (–5% ≤ A ≤ 10%): Normal business, technical, and manufacturing processes are applied.
Risk Rank B (–5% ≤ B ≤ 15%): Normal business and technical processes are applied; new or innovative manufacturing processes.
Risk Rank C (–5% ≤ C ≤ 35%): Flight software development and certification processes.
Risk Rank D (–10% ≤ D ≤ 25%): Build and qualification of flight components, subsystems, and systems.
Risk Rank E (–10% ≤ E ≤ 35%): Flight software qualification.
Risk Rank F (–5% ≤ F ≤ 175%): ISS thermal vacuum acceptance testing.
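A hedged sketch of how such a table can drive the Monte Carlo inputs: each activity's duration becomes a triangular distribution whose bounds come from the percent variances of its assigned rank. Risk+ does this inside the IMS itself; here the task names and durations are invented, the tasks are treated as a simple serial chain rather than a full schedule network, and reading the lower bounds as negative (optimistic) variances is an assumption.

```python
import numpy as np

rng = np.random.default_rng(seed=7)
N = 10_000

# Percent-variance bounds per ordinal rank, taken from the table above.
variance_by_rank = {
    "A": (-0.05, 0.10), "B": (-0.05, 0.15), "C": (-0.05, 0.35),
    "D": (-0.10, 0.25), "E": (-0.10, 0.35), "F": (-0.05, 1.75),
}

# Illustrative tasks: (name, baseline duration in days, assigned ordinal rank).
tasks = [("Design", 40, "B"), ("Build & qual", 60, "D"), ("Flight SW qual", 50, "E")]

total = np.zeros(N)
for name, duration, rank in tasks:
    lo, hi = variance_by_rank[rank]
    # Triangular duration: most likely = baseline, bounds from the percent variances.
    total += rng.triangular(duration * (1 + lo), duration, duration * (1 + hi), N)

print(f"Deterministic path = {sum(d for _, d, _ in tasks)} days")
print(f"80% confidence     = {np.percentile(total, 80):.0f} days")
```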
79. Project Train Wrecks Occur When There Is…

Inattention to budgetary responsibilities
Work authorizations that are not always followed
Issues with budget and data reconciliation
Lack of an integrated management system
Baseline fluctuations and frequent replanning
Current period and retroactive changes
Improper use of management reserve
EV techniques that do not reflect actual performance
Lack of predictive variance analysis
Untimely and unrealistic Latest Revised Estimates (LRE)
Progress not monitored in a regular and consistent manner
Lack of vertical and horizontal traceability of cost and schedule data for corrective action
Lack of internal surveillance and controls
Managerial actions not demonstrated using Earned Value
80. Our Final Check List

Set up the Risk+ fields, flags, views, and tables for the program standard IMS.
Build an IMS that passes the DCMA 14 Point Assessment with all GREEN.
Build the Ordinal Risk Ranking table for the various risk categories on the program.
Assign a risk ranking to each activity in the IMS, with the variances defined in the Ordinal Table.
Run Risk+ to see the confidence in the deliverables.
Develop the needed schedule margin to protect the delivery to at least the 80% confidence level.
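That last step falls directly out of the Monte Carlo completion samples: the margin is the gap between the deterministic finish and the 80th-percentile finish. A minimal sketch with invented durations, again treating the tasks as a simple serial chain rather than a full schedule network:

```python
import numpy as np

rng = np.random.default_rng(seed=8)
N = 10_000

# Illustrative tasks: (most likely, min, max) duration in working days.
tasks = [(40, 35, 60), (60, 55, 90), (50, 45, 85)]
finish = sum(rng.triangular(lo, ml, hi, N) for ml, lo, hi in tasks)

deterministic = sum(ml for ml, _, _ in tasks)   # the single-point ("most likely") plan
p80 = np.percentile(finish, 80)                 # finish protected at 80% confidence
print(f"Deterministic finish   = {deterministic} days")
print(f"80% confidence finish  = {p80:.0f} days")
print(f"Schedule margin needed = {p80 - deterministic:.0f} days")
```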
81. Advice from the School of Hard Knocks

Put margin in front of critical deliverables. Build a margin burn-down chart and allocate schedule margin just like you do MR for the PMB. This real-world advice is counter to the current DCMA guidance.
84. Managing Margin Is What Risk+ Is All About

[Chart: critical path total float erosion (critical path time reserve, roughly +100 to –40 days) over time, plotted against an acceptable rate of float erosion and a linear trend of CP total float. Time now: October 31, 2005. Spacecraft contract delivery: December 10, 2007.]
85. How much margin do we need?
The Missing Link: Schedule Margin Management, Rick Price, PS–10, PMI–CPM EVM World 2008
86. Deterministic versus Probabilistic

[Chart: baseline plan versus current plan across program milestones (SRR, PDR, CDR, FRR, ATLO period) on a timeline from Sep 2011 to Apr 2012. The current plan with risks is shown both as the deterministic schedule (early plan ready Nov 2011, with margin ahead of the Dec 2011 launch period and a 20% risk margin) and as the stochastic schedule, whose mean and 80% confidence dates fall later. The probability distribution can vary as a function of time, and slipping past the launch period means a missed launch.]
89. References

"Protecting Earned Value with Schedule Margin," http://www.prochain.com/pm/articles/ProtectingEVSchedules.pdf
"Depicting Schedule Margin in the Integrated Master Schedule," http://www.ndia.org/Divisions/Divisions/Procurement/Documents/PMSCommittee/CommitteeDocuments/WhitePapers/NDIAScheduleMarginWhitePaperFinal-2010(2).pdf
Effective Risk Management: Some Keys to Success, Second Edition, Edmund Conrow, AIAA Press.
How to Lie with Statistics, Darrell Huff, Norton, 1954 (available in paperback at any good bookstore).
DID DI–MGMT–81650: "A management method for accommodating schedule contingencies. It is a designated buffer and shall be identified separately and considered part of the baseline."
90. References

Interfacing Risk and Earned Value Management, Association for Project Management, 150 West Wycombe Road, High Wycombe, Buckinghamshire, HP12 3AE, United Kingdom.
Practice Standard for Earned Value Management, Second Edition, Project Management Institute, 2011.
Effective Opportunity Management for Projects, David Hillson, Taylor and Francis, 2004.
Measuring Time: Improving Project Performance Using Earned Value, Mario Vanhoucke, Springer, 2009.
Performance Based Earned Value, Paul Solomon and Ralph Young, Wiley, 2007.
Effective Risk Management: Some Keys to Success, Edmund Conrow, AIAA Press, 2003.