1) Classical Systems Engineering and Program Management (CSEPM) aims to satisfy all system needs across the entire system lifecycle, but in practice it is hampered by a flawed foundation, undefinable interfaces, inefficient subcontracting, and "sloppy" requirements.
2) A key flawed assumption of CSEPM is that all interfaces in complex systems can be defined, which is unrealistic given human factors and unpredictability. That assumption has led to problems with flowing down requirements and managing subcontractors.
3) The "Vee" model used in CSEPM proves unreliable for complex programs with thousands of requirements, as the critical assumption that all knowledge exists early on to define interfaces is often inaccurate.
CSEPM Final Essay
Professor Bohdan Oppenheim
SELP 630: Advanced Lean Management of Engineering Programs
Damien Lewke
29 September 2015
Classical Systems Engineering and Program Management (CSEPM) was conceived as a way of systems thinking meant to satisfy "all needs during an entire system lifecycle" (Oppenheim 6). However, as a result of increasing bureaucracy and a lack of requirements definition at the outset of a program, CSEPM's true purpose has never been fully realized. An inherently flawed foundation, undefinable interfaces, the inefficiencies of subcontracting, the CSEPM "Vee," and sloppy requirements make efficiently managing and executing an advanced engineering program extremely difficult. As discussed in lecture, CSEPM's problems lie at the foundation of its execution. Specifically, CSEPM is driven by an uncompromising demand for ultra-reliable systems and by inefficient government acquisition practices and incentives. This misaligned foundation has historically produced high levels of program success, such as the 80 successful space launches by the United States Air Force, but high costs and bureaucratic delays continue to characterize these programs. Although attempts have been made to reform these programs by reducing the number of requirements, systems engineering continues to bury itself in ever-increasing requirements. When NASA's Faster, Better, Cheaper (FBC) programs sought to decrease requirements and costs, 16 mission failures in the space of one decade answered the government's questions about fewer requirements. Instead of recognizing that the requirements had not been fully understood at the start of each effort, the government concluded that the additional failures were caused by a lack of oversight.
The Myth of Definable Interfaces has plagued CSEPM, particularly since the introduction of Model-Based Systems Engineering (MBSE). Although MBSE has been treated as an infallible way to conduct systems engineering, in fact the opposite is true. MBSE functions as a clerical tool rather than a means of defining requirements: it provides no guarantee that all interfaces are included in a system design. Moreover, human wickedness (unpredictability) dictates how MBSE is conducted. As human wickedness shapes the system design, the interfaces become inherently "wicked." It is unrealistic to expect that all interfaces in a complex system can be identified; therefore, it is unrealistic to assume that all interfaces can be defined. This critically flawed assumption carries over into the flow-down of system requirements and subcontracting. A classical systems engineer must ensure that "the interfaces of each element of the system or subsystem are controlled and known down to the developers." With major programs' multiple tiers of subcontractors, specifying each interface (which we already know can be unpredictable) to each subcontractor is nearly impossible. In addition, two major obstacles to efficiency follow from this assumption: distributing production to as many suppliers as possible, and the company policy of "sticking to its core competencies and subcontracting out the rest," both of which are inherently inefficient. Often, coordination between subcontractors and the prime contractor is slow, costs run high, and the program's finances take a severe downturn. For instance, in his paper on Boeing's outsourcing, Hart-Smith exposed the waste associated with the extreme degree of subcontracting that Boeing employed in producing the 787. By subcontracting almost all of the design work, Boeing lost money because subcontractors at each level demanded a profit on the work they performed. This markup compounds with each tier of subcontractors, and, as discussed in class, major programs can have up to five tiers, as the sketch below illustrates.
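To make the compounding concrete, here is a minimal sketch, assuming a hypothetical fixed 10% profit margin demanded at every tier of the chain; the base cost, margin, and tier counts are illustrative assumptions, not figures from the essay or from Hart-Smith's paper.

def cost_through_tiers(base_cost: float, margin: float, tiers: int) -> float:
    """Cost seen by the buyer after each subcontracting tier adds its margin."""
    cost = base_cost
    for _ in range(tiers):
        cost *= 1.0 + margin  # each tier marks up the work it buys from the tier below
    return cost

if __name__ == "__main__":
    base = 1_000_000.0   # assumed cost of the work at the lowest tier
    margin = 0.10        # assumed 10% profit demanded at every tier
    for tiers in range(1, 6):   # the essay notes major programs can have up to five tiers
        total = cost_through_tiers(base, margin, tiers)
        print(f"{tiers} tier(s): ${total:,.0f} (+{total / base - 1:.0%} over the base cost)")

With these assumed numbers, five tiers of 10% markups add roughly 61% to the cost of the underlying work before the prime contractor adds its own margin.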
In the face of a complex program with thousands of requirements like these, CSEPM's cornerstone, the "Vee," proves almost completely unreliable. A fundamental example of the Vee's inability to address complex requirements is the Myth of the Single-Cycle Vee. CSEPM's methodology is based on the "critical assumption that knowledge exists early in the program to anticipate all system interfaces." For the single-cycle CSEPM to work, it must be performed perfectly the first time, because in a single cycle no one can go back in time. Given what we already know, interfaces will not be correctly defined. In addition, on programs whose technology has begun to far outstrip their management, defining all requirements at the outset of the program is impossible. This is especially true in today's defense industry, where cost-plus (CP) contracts continue to pile requirements onto contractors and subcontractors. An increase in requirements trickles down to subcontractors, at which point contracts must be renegotiated, a step with which these subcontractors will most likely not comply. As a result, even if the program manager addresses a potential change in requirements, the subcontractors will operate independently, which stifles system optimization. The conflict between Rocketdyne and Rockwell over the design of the Space Shuttle orbiter and its engine filters illustrates this case: Rockwell's orbiter required particular flow rates and pressures, Rocketdyne refused to test the engine with hazardously sized particles, and the companies failed to reach an agreement. As a result, the Space Shuttle continued to operate for the rest of its career with these significant hazards acknowledged but never addressed.
A counterargument in defense of CSEPM would be that MBSE could be used to correctly define interfaces. However, as argued above, MBSE is a clerical tool rather than a means of defining requirements. MBSE is just as unpredictable as those who design and operate it, and therefore cannot be used on its own to define a design's requirements. In addition, the organizations requesting many large programs may never fully define the requirements. Oftentimes, junior personnel on a program are forced to determine the requirements for an advanced system. Under pressure from the program, they will literally "copy and paste" requirements from previous, unrelated programs in order to meet the current program's deadlines. Sloppy requirements can produce cases like the one discussed previously, in which a submarine's specifications were reused in the design of a satellite. This example clearly illustrates that the employees attempting to determine requirements, in particular junior employees under time pressure, are not always equipped to do so.
Although CSEPM's problems make requirements difficult to define and quality products costly to deliver, several possible solutions exist. The first alternative is the counter to the Faustian Bargain discussed in class: before a program is awarded to a prime contractor, the prime contractor can sign contracts with its subcontractors that guarantee them the business if the prime contractor wins the program. The prime contractor then defines the requirements, and the subcontractors are obligated to follow them; if a subcontractor refuses, the prime contractor simply finds another willing subcontractor with which to do business. To address the issue of sloppy and careless requirements, a few strategies can be adopted. First, a third party may conduct a highly rigorous and independent review before the requirements are finalized and released with the RFP or contract. This reviewer should catch unneeded and faulty requirements and missing interfaces, dispute all "gold-plated" requirements, and demand that every identified deficiency be corrected before the program proceeds further. In this situation, all deficiencies are identified before the RFP and contract are executed, which eliminates costly requirement changes and program rework. In addition, appointing a responsible, accountable, and authoritative person to ensure the development and delivery of all customer-level requirements could greatly reduce inefficiencies and poor requirements on a program.
Another, far simpler way of eliminating poor program requirement definition is to nearly do away with formal requirements entirely. In the 1960s, the United States put men on the moon using four simple requirements: get the men to the moon, get them back, do it safely, and do it before the end of the decade. These four requirements launched one of the most impressive achievements humankind has ever accomplished. In nine years, and with a flight system that had less computing power than an iPhone, Neil Armstrong landed on the moon on July 20, 1969. This paper argues that reducing the requirements on a program ensures that it is carried out more effectively and efficiently. Simply stating a vision and trusting program managers and engineers who are passionate about what they are pursuing will produce a better product in a shorter time, and at far lower cost, than current advanced engineering programs. In lean, respect for people is core to the success of a program. By entrusting those who know and care about what they are designing, true program success can be achieved.