How to enhance the benefits of the Earned Value Method by using an objective functional size measure like COSMIC to show the real status of a software project, as presented at the Congreso Nacional de Medición y Estimación de Software in Mexico City.
The effects of duration-based moving windows with estimation by analogy - sou... (IWSM Mensura)
Fixed-size and fixed-duration moving windows were evaluated with estimation by analogy (EbA) effort estimation. With fixed-size windows, the modified EbA showed that moving windows became significantly advantageous over the growing portfolio at medium window sizes. With fixed-duration windows, moving windows improved accuracy less markedly than fixed-size windows did at smaller window sizes. Comparisons with past studies found the overall trends the same, but the effective window sizes and ranges differed.
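To make the windowing idea concrete, here is a minimal sketch in Python of estimation by analogy restricted to a fixed-size window of recent projects (the field names, window size and k are hypothetical; the paper's actual similarity measure and parameters may differ):

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Project:           # hypothetical record structure
        finished: date       # completion date
        size: float          # functional size, e.g. in function points
        effort: float        # actual effort in person-hours

    def analogy_estimate(history, new_size, window=20, k=3):
        # Fixed-size moving window: keep only the 'window' most recently
        # finished projects as the analogy pool, not the growing portfolio.
        pool = sorted(history, key=lambda p: p.finished)[-window:]
        # Estimation by analogy: average the effort of the k projects
        # whose size is closest to the new project's size.
        nearest = sorted(pool, key=lambda p: abs(p.size - new_size))[:k]
        return sum(p.effort for p in nearest) / len(nearest)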
1. LEDAmc is a Spanish company that provides outsourced testing services with over 100 employees and offices in Madrid and Bogota. They focus on outsourcing management and have over 70% of consultants certified in testing.
2. The document discusses metrics that should be considered when outsourcing testing to a test factory or supplier. It provides examples of testware size and effort estimations, as well as examples of enhancing testing productivity.
3. The document outlines four stages in the outsourced testing process to mitigate problems: RFP preparation, RFP adjudication, service operation, and close/renewal of service. It provides details on activities and metrics to consider at each stage.
In Information and Communication Technology (ICT) a ‘deliverable’ may be either software (perceived as an ‘output’) or a service (perceived as an ‘outcome’). The differences between software and services have led to the design of parallel models and lifecycles with more commonalities than differences, which argues against adopting separate frameworks. For instance, a software project could be managed applying best practices for services (e.g. ITIL), while some processes (e.g. Verification & Validation) are better defined in models from the software management domain. This paper therefore aims at reconciling these differences and provides suggestions for a better joint usage of models/frameworks. To unify existing models we use the LEGO approach, which takes the element of interest from any potential model/framework and inserts it into the process architecture of the target Business Process Model (BPM) of an organization, strengthening the organizational way of working. An example of a LEGO application is presented to show the benefit of viewing the ‘software + service’ sides as a whole across the project lifecycle, increasing the number of sources available for this type of improvement task.
How to use the COSMIC method for proper and reliable estimates of software projects, as presented at the Congreso Nacional de Medición y Estimación de Software in Mexico City.
Wideband Delphi is a reliable estimation technique based on team consensus. This presentation discusses the process and includes examples that you can follow when preparing your own estimates.
Practical usage of FPA and automatic code review - Piotr Popovski (IWSM Mensura)
This document discusses Orange Polska's use of function point analysis (FPA) for estimating the size, effort, and pricing of IT projects, as well as their use of automated code review. It describes how Orange Polska counts over 1 million function points annually across 800 projects in many technologies. It also explains their four-step process for converting function points to price, including custom adjustment rules. Additionally, it outlines the quality metrics and tools used for automated code review of vendors' source code.
New Results for the GEO-CAPE Observation Scheduling Problem (Philippe Laborie)
A challenging Earth-observing satellite scheduling problem was recently studied in (Frank, Do and Tran 2016) for which the best resolution approach so far on the proposed benchmark is a time-indexed Mixed Integer Linear Program (MILP) formulation. This MILP formulation produces feasible solutions but is not able to prove optimality or to provide tight optimality gaps, making it difficult to assess the quality of existing solutions. In this paper, we first introduce an alternative disjunctive MILP formulation that manages to close more than half of the instances of the benchmark. This MILP formulation is then relaxed to provide good bounds on optimal values for the unsolved instances. We then propose a CP Optimizer model that consistently outperforms the original time-indexed MILP formulation, reducing the optimality gap by more than 4 times. This Constraint Programming (CP) formulation is very concise: we give its complete OPL implementation in the presentation. Some improvements of this CP model are reported resulting in an approach that produces optimal or near-optimal solutions (optimality gap smaller than 1%) for about 80% of the instances. Unlike the MILP formulations, it is able to quickly produce good quality schedules and it is expected to be flexible enough to handle the changing requirements of the application.
Reference: Philippe Laborie and Bilal Messaoudi. New Results for the GEO-CAPE Observation Scheduling Problem. Proceedings of ICAPS 2017.
The project was carried out with the daily production processes of the glassware industry (LA OPALA pvt ltd.) in mind. The analysis was carried out by network scheduling using CPM (Critical Path Method). This project enabled me to apply my theoretical studies in a practical application, and it helped me to get a complete overview of the modern-day scenario and of how problems are tackled and solutions brought into application.
The document discusses staffing level estimation over the course of a software development project. It describes how the number of personnel needed varies at different stages: a small group is needed for planning and analysis, a larger group for architectural design, and the largest number for implementation and system testing. It also references models like the Rayleigh curve and Putnam's interpretation that estimate personnel levels over time. Tables show estimates for the distribution of effort, schedule, and personnel across activities for different project sizes. The key idea is that staffing requirements fluctuate throughout the software life cycle, with peaks during implementation and testing phases.
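To make the referenced model concrete, here is a minimal sketch of the Rayleigh staffing curve in its standard Norden-Putnam form; the project figures below are illustrative, not from the document:

    import math

    def rayleigh_staffing(t, total_effort, t_peak):
        # Norden-Putnam Rayleigh curve: m(t) = (K / td^2) * t * exp(-t^2 / (2 td^2)).
        # Staffing peaks at t = t_peak and the curve integrates to total_effort K.
        return (total_effort / t_peak ** 2) * t * math.exp(-t ** 2 / (2 * t_peak ** 2))

    # Illustrative profile: a 200 person-month project peaking at month 10
    for month in range(0, 25, 4):
        print(month, round(rayleigh_staffing(month, 200, 10), 1))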
Effort estimation for software development (Spyros Ktenas)
Software effort estimation has been an important issue for almost everyone in the software industry at some point. Below I will try to give some basic details on methods, best practices, common mistakes and available tools.
You may also check a tool implementing methods for estimation at http://effort-estimation.gatory.com/
Spyros Ktenas
http://open-works.org/profiles/spyros-ktenas
CP Optimizer: a generic optimization engine at the crossroad of AI and OR fo... (Philippe Laborie)
This document discusses CP Optimizer, a generic optimization engine developed by IBM for solving industrial scheduling problems. It describes how CP Optimizer integrates concepts from artificial intelligence such as temporal reasoning to model scheduling problems declaratively, unlike mixed integer linear programming which models time numerically and is not well-suited for scheduling. CP Optimizer uses constraint propagation and other techniques from AI and operations research within an exact algorithm to solve scheduling problems efficiently and prove optimality. It has been shown to outperform MILP on standard scheduling benchmarks and can model complex real-world scheduling problems.
Wideband Delphi is a consensus-based estimation technique involving a team that creates task estimates through an iterative process. It begins with a kickoff meeting where the team generates tasks and assumptions. Members then independently estimate effort for each task. Next, an estimation session is held where the team revises estimates through discussion to reach consensus. Finally, the project manager assembles the results into a final task list and estimate report. The technique leverages group expertise and iteration to create accurate and agreed-upon estimates.
Application of Earned Value Method and Delay Analysis on Construction Project... (IRJET Journal)
This document discusses applying earned value management and delay analysis techniques to analyze the cost and schedule of construction projects. It provides background on the challenges of construction projects going over budget and falling behind schedule. The document then describes earned value management parameters like planned value, earned value, and actual cost that are used to calculate cost and schedule variances. It also discusses performance indices and forecasting indicators that can be used for project monitoring and control. Finally, it presents the methodology used in this study, which involves collecting project data, setting up the project schedule in Microsoft Project, performing earned value analysis, and identifying causes of any delays. The overall goal is to measure project performance and analyze issues regarding cost and schedule.
Estimating involves forming approximate notions of amounts, numbers, or positions without actual measurement. Accurate estimates are important for project planning, budgeting, and determining viability. Estimates become more accurate over the project lifecycle as more knowledge is gained. Common types of estimates include order-of-magnitude, budget, and definitive estimates. Top-down, bottom-up, and parametric methods are commonly used estimating approaches. Estimates should involve subject matter experts, use multiple methods, document assumptions, and apply contingency allowances. Regularly reviewing and updating estimates improves accuracy.
This document provides an overview of earned value management. It defines key earned value terms like planned value, earned value, and actual cost. It explains how to calculate schedule and cost variance using these values. Variance is used to determine if a project is on budget and on schedule. The document also provides an example to illustrate these earned value concepts.
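For concreteness, a worked sketch of the standard earned value relations the document describes; all figures are illustrative:

    # Planned value, earned value and actual cost at a status date
    PV, EV, AC = 100_000, 80_000, 90_000

    SV = EV - PV        # schedule variance: -20,000 -> behind schedule
    CV = EV - AC        # cost variance:     -10,000 -> over budget
    SPI = EV / PV       # schedule performance index: 0.80
    CPI = EV / AC       # cost performance index:     ~0.89

    BAC = 500_000       # budget at completion
    EAC = BAC / CPI     # forecast cost at completion: ~562,500
    print(SV, CV, round(SPI, 2), round(CPI, 2), round(EAC))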
How should we estimate agile projects (CAST) (Glen Alleman)
“Why do so many big projects overspend and overrun? They’re managed as if they were merely complicated when in fact they are complex. They’re planned as if everything was known at the start when in fact they involve high levels of uncertainty and risk.” ‒ Architecting Systems: Concepts, Principles and Practice, Hillary Sillitto
ISMA 9 - van Heeringen - Using IFPUG and ISBSG to improve organization success (Harold van Heeringen)
An introduction to the International Software Benchmarking Standards Group and three cases in which function points together with ISBSG data really resulted in business value:
- Reality check of an estimate made by experts
- Assessing the competitive position of a department
- Selecting a single software supplier
The document summarizes the COCOMO model for estimating software development costs and effort. It discusses the three forms of COCOMO - basic, intermediate, and detailed. The basic model estimates effort and schedule from size in lines of code (LOC) alone. The intermediate model adds 15 cost drivers. The detailed model further adds a three-level product hierarchy and phase-sensitive effort multipliers to provide more granular estimates. Examples are provided to illustrate effort and schedule estimates for different project modes and sizes using the basic and intermediate COCOMO models.
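A minimal sketch of the basic COCOMO calculation summarized above, using the standard published constants for the three project modes; the 32 KLOC input is illustrative:

    # mode: (a, b, c, d), where effort = a * KLOC^b person-months
    # and schedule = c * effort^d months
    MODES = {
        "organic":      (2.4, 1.05, 2.5, 0.38),
        "semidetached": (3.0, 1.12, 2.5, 0.35),
        "embedded":     (3.6, 1.20, 2.5, 0.32),
    }

    def basic_cocomo(kloc, mode="organic"):
        a, b, c, d = MODES[mode]
        effort = a * kloc ** b          # person-months
        schedule = c * effort ** d      # months
        return effort, schedule

    effort, months = basic_cocomo(32, "organic")
    print(round(effort), "person-months over", round(months), "months")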
This document discusses software project management and estimation techniques. It covers:
- Project management involves planning, monitoring, and controlling people and processes.
- Estimation approaches include decomposition techniques and empirical models like COCOMO I & II.
- COCOMO I & II models estimate effort based on source lines of code and cost drivers. They include basic, intermediate, and detailed models.
- Other estimation techniques discussed include function point analysis and problem-based estimation.
Application Migration: How to Start, Scale and Succeed (VMware Tanzu)
Undergoing the application migration journey can be cumbersome and challenging, especially when you have a complex application portfolio that consists of both legacy and newer apps on outdated systems, and you are hindered by manual processes for addressing security concerns, regulatory change and policy compliance.
You know embarking on the cloud journey is inevitable and deciding where to start is overwhelming. Let us show you how.
Join Matt Russell to hear how Pivotal helps large organizations plan and execute their application transformation initiatives by using a set of proven techniques and approaches that help you get started quickly and scale continuously.
We use simple tools and start small to redefine current systems, and achieve cloud-native speed and resiliency. Let us show you how Pivotal can help you navigate your journey while instilling confidence along the way.
Presenter: Matt Russell, Senior Director, Application Transformation at Pivotal
A Sogeti study of the extent to which it's possible to convert function points to COSMIC function points and back, with a framework for making the transition from FPA to COSMIC as the leading FSMM in the organization. Published at the SMEF2007 conference (Rome, May 2007).
Effort estimation is the process of predicting the most realistic amount of effort (expressed in terms of person-hours or money) required to develop or maintain software based on incomplete, uncertain and noisy input.
Effort estimation is essential for many people and different departments in an organization.
The document discusses techniques for estimating project metrics like effort, cost, and duration for software projects. It describes the COCOMO model which categorizes projects as organic, semidetached, or embedded based on characteristics. The basic COCOMO model estimates effort and time using project size, while the intermediate model refines estimates using cost drivers. The complete COCOMO model treats a project as multiple components. Halstead's metrics also estimate effort using program operands and operators. The document provides an example of estimating metrics for a library information system project using COCOMO.
The document discusses the benefits of software process improvement (SPI) and achieving higher maturity levels like CMMI Level 5. It provides examples of organizations that saw significant reductions in defects, costs and improvements in productivity after implementing SPI initiatives and achieving higher maturity levels. While SPI requires initial investments, it more than pays for itself through reductions in rework costs and improvements in productivity.
Enhancing the Software Effort Prediction Accuracy using Reduced Number of Cos... (IRJET Journal)
This document presents research on modifying the COCOMO II software cost estimation model to improve prediction accuracy. The researchers reduced the number of cost estimation factors (called cost drivers) from 17 to 13 by adjusting the definitions and impact levels to better reflect current industry situations. They estimated effort for software projects using the modified model and found lower percentage errors compared to the original COCOMO II model, demonstrating improved estimation efficiency. The goal of the research was to analyze cost drivers and their impact on effort estimation in COCOMO II and enhance the model for more accurate predictions.
The document provides information on cost estimation techniques for software projects. It discusses how complexity, size, efforts, and time relate to each other in cost models. Size is typically measured in thousands of lines of code (KSLOC). Efforts are estimated by multiplying KSLOC by a productivity factor. For larger projects, a size penalty factor is included. Function point analysis is an alternative to estimating directly from KSLOC by evaluating inputs, outputs, interfaces, and files.
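A minimal sketch of the simple relation described above; the productivity factor and size-penalty exponent are illustrative placeholders, not values from the document:

    def estimate_effort(ksloc, pm_per_ksloc=3.0, penalty_exponent=1.1):
        # Effort = productivity factor * size, with a size penalty
        # (exponent > 1) so larger projects cost disproportionately more.
        return pm_per_ksloc * ksloc ** penalty_exponent

    print(round(estimate_effort(10), 1))   # ~37.8 person-months for 10 KSLOC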
This document discusses various software metrics that can be used for software estimation, quality assurance, and maintenance. It describes black box metrics like function points and COCOMO, which focus on program functionality without examining internal structure. It also covers white box metrics, including lines of code, Halstead's software science, and McCabe's cyclomatic complexity, which measure internal program properties. Finally, it discusses using metrics like change rates and effort adjustment factors to estimate software maintenance costs.
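For concreteness, a small sketch of Halstead's standard "software science" formulas mentioned above; the counts passed in are illustrative:

    import math

    def halstead(n1, n2, N1, N2):
        # n1/n2: distinct operators/operands; N1/N2: total operators/operands
        vocabulary = n1 + n2
        length = N1 + N2
        volume = length * math.log2(vocabulary)      # program volume V
        difficulty = (n1 / 2) * (N2 / n2)            # difficulty D
        effort = difficulty * volume                 # Halstead effort E = D * V
        return volume, difficulty, effort

    print(halstead(n1=12, n2=7, N1=27, N2=15))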
The document discusses project scheduling and tracking in software engineering. It provides reasons why projects may be late, such as unrealistic deadlines or changing requirements. It discusses principles for effective scheduling like compartmentalization of tasks and defining responsibilities. Metrics like earned value analysis are presented to quantitatively track project progress versus what was planned. Risk management techniques like proactive risk analysis are outlined to improve project success.
This document proposes a new approach for software project estimation that combines existing estimation techniques. It involves using case-based reasoning to retrieve similar past projects, reusing their estimates, and revising the estimates based on new parameters and delay-causing incidents. The approach allows parameters to be added dynamically during project execution to make estimates more context-sensitive and help converge to actual values. A prototype tool has been implemented to demonstrate calculating estimates by dynamically selecting parameters and computing similarity indexes between current and past projects.
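An illustrative sketch of the retrieve step only; the feature vectors and the inverse-distance similarity index are hypothetical, and the proposed approach's actual parameters and weighting differ:

    def similarity(a, b):
        # Inverse-distance similarity in (0, 1] between two project
        # feature vectors already scaled to comparable ranges.
        dist = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
        return 1.0 / (1.0 + dist)

    past = {"P1": [0.8, 0.3, 0.5], "P2": [0.4, 0.9, 0.2], "P3": [0.7, 0.4, 0.6]}
    current = [0.75, 0.35, 0.55]

    ranked = sorted(past, key=lambda p: similarity(past[p], current), reverse=True)
    print(ranked)   # most similar past project first -> reuse its estimate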
Metrics for Model-Based Systems Development (Bruce Douglass)
This presentation describes the value of metrics and key concepts for their effective use, and provides some common metrics for project management, model-based design, and quality assurance. Created by Dr. Bruce Powel Douglass, Ph.D.
1. Which of the following is INCORRECT regarding the process capab.docx (jackiewalcutt)
1. Which of the following is INCORRECT regarding the process capability index Cpk?
2. Productivity can be improved by
Increasing inputs while holding outputs steady
Decreasing outputs while holding inputs steady
Increasing inputs and outputs in the same proportion
Increasing outputs while holding inputs steady
3. Which of the following statements is INCORRECT regarding critical paths?
The path that takes the longest time to complete in a project is the critical path.
Activities on the critical path must have zero slack time.
Some non-critical activities may have zero slack time.
For any project, the (expected) project completion time is equal to the (expected) time duration of the project’s critical path.
4. Suppose a project team has arrived at the following time estimates for an activity: a = 4 days, m = 6 days, and b = 8 days. What is the variance involved in this activity?
0.111
0.250
0.444
0.694
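For reference, the standard PERT (beta) approximation derives an activity's variance from its optimistic (a) and pessimistic (b) estimates:

$$\sigma^2 = \left(\frac{b-a}{6}\right)^2 = \left(\frac{8-4}{6}\right)^2 \approx 0.444$$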
5. Suppose you are asked to determine the Lower Control Limit for a p-chart for quality control purposes. Samples are taken from the production line. The fraction defective is 0.008 and the standard deviation is 0.002 based on the samples. Set z = 3. Which of the following is the LCL of the p-chart?
0.001
0.002
0.006
0.013
6. The least squares method is to find out the intercept and the slope of a regression line that minimizes the sum of the squared differences between
observed values of the independent variable and predicted values of the independent variable
observed values of the independent variable and predicted values of the dependent variable
observed values of the dependent variable and predicted values of the dependent variable
None of the three is correct.
7. Which of the following statements is INCORRECT regarding corporate missions?
They reflect a company's purpose.
They indicate what a company intends to contribute to society.
They are formulated after strategies are known.
They define a company's reason for existence.
8. Given forecast errors of -2, 5, 10, and -3, what is the mean absolute deviation (MAD)?
2.5
3
4
5
9. Which of the following best describes the process focus strategy?
Appropriate for high-volume, low-variety production
Equipment or processes are arranged based on the progressive steps by which a product is made.
Also known as flow shop
Appropriate for low-volume, high-variety production
10. According to the definition of design quality,
Quality is the degree of excellence at an acceptable price
Quality depends on how well the product fits consumer preferences
Even though quality cannot be defined, you know what it is
Quality is the degree to which a specific product conforms to design specifications
11. Which of the f ...
Overview of Software Development Life Cycle Models. Why traditional parametric estimating tools do not help estimate a software project developed using the Agile model. Explain and demonstrate the “nearest neighbor” analogy technique to estimate Agile software projects. Data and actions needed to implement the nearest neighbor technique
This document discusses using use case points (UCP) to estimate software development effort. UCP involves classifying use cases and actors based on complexity, then calculating unadjusted use case and actor weights. Technical and environmental factors are also assessed. These variables are used in an equation to determine the adjusted use case points and estimated effort in hours or weeks. The document presents this method and tools to automate it. It also compares UCP to function points and shares results from applying UCP in three industry projects, finding the estimates were close to expert assessments.
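A minimal sketch of the UCP arithmetic described above, using Karner's standard weights and factor formulas; the counts, factor scores and the commonly cited 20 hours/UCP conversion are illustrative:

    ACTOR_W    = {"simple": 1, "average": 2, "complex": 3}
    USE_CASE_W = {"simple": 5, "average": 10, "complex": 15}

    def ucp_estimate(actors, use_cases, tf_score, ef_score, hours_per_ucp=20):
        uaw  = sum(ACTOR_W[c] * n for c, n in actors.items())        # actor weight
        uucw = sum(USE_CASE_W[c] * n for c, n in use_cases.items())  # use case weight
        tcf = 0.6 + 0.01 * tf_score      # technical complexity factor
        ecf = 1.4 - 0.03 * ef_score      # environmental complexity factor
        points = (uaw + uucw) * tcf * ecf
        return points, points * hours_per_ucp

    points, hours = ucp_estimate({"simple": 2, "complex": 1},
                                 {"average": 8, "complex": 4},
                                 tf_score=30, ef_score=15)
    print(round(points, 1), "UCP ->", round(hours), "hours")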
Codeless Generative AI Pipelines (GenAI with Milvus)
Discover the potential of real-time streaming in the context of GenAI as we delve into the intricacies of Apache NiFi and its capabilities. Learn how this tool can significantly simplify the data engineering workflow for GenAI applications, allowing you to focus on the creative aspects rather than the technical complexities. I will guide you through practical examples and use cases, showing the impact of automation on prompt building. From data ingestion to transformation and delivery, witness how Apache NiFi streamlines the entire pipeline, ensuring a smooth and hassle-free experience.
Timothy Spann
https://www.youtube.com/@FLaNK-Stack
https://medium.com/@tspann
https://www.datainmotion.dev/
4th Modern Marketing Reckoner by MMA Global India & Group M: 60+ experts on W... (Social Samosa)
The Modern Marketing Reckoner (MMR) is a comprehensive resource packed with POVs from 60+ industry leaders on how AI is transforming the 4 key pillars of marketing – product, place, price and promotions.
Global Situational Awareness of A.I. and where it's headed (vikram sood)
You can see the future first in San Francisco.
Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans. Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might. By the end of the decade, American electricity production will have grown tens of percent; from the shale fields of Pennsylvania to the solar farms of Nevada, hundreds of millions of GPUs will hum.
The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be unleashed, and before long, The Project will be on. If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war.
Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the wilful blindness of “it’s just predicting the next word”. They see only hype and business-as-usual; at most they entertain another internet-scale technological change.
Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them. A few years ago, these people were derided as crazy—but they trusted the trendlines, which allowed them to correctly predict the AI advances of the past few years. Whether these people are also right about the next few years remains to be seen. But these are very smart people—the smartest people I have ever met—and they are the ones building this technology. Perhaps they will be an odd footnote in history, or perhaps they will go down in history like Szilard and Oppenheimer and Teller. If they are seeing the future even close to correctly, we are in for a wild ride.
Let me tell you what we see.
The Ipsos - AI - Monitor 2024 Report.pdf (Social Samosa)
According to the Ipsos AI Monitor's 2024 report, 65% of Indians said that products and services using AI have profoundly changed their daily life in the past 3-5 years.
The Building Blocks of QuestDB, a Time Series Database (javier ramirez)
Talk delivered at Valencia Codes Meetup, June 2024.
Traditionally, databases have treated timestamps as just another data type. However, when performing real-time analytics, timestamps should be first-class citizens, and we need rich time semantics to get the most out of our data. We also need to deal with ever-growing datasets while staying performant, which is as fun as it sounds.
It is no wonder time-series databases are now more popular than ever before. Join me in this session to learn about the internal architecture and building blocks of QuestDB, an open source time-series database designed for speed. We will also review a history of some of the changes we have gone over the past two years to deal with late and unordered data, non-blocking writes, read-replicas, or faster batch ingestion.
Open Source Contributions to Postgres: The Basics POSETTE 2024 (ElizabethGarrettChri)
Postgres is the most advanced open-source database in the world and it's supported by a community, not a single company. So how does this work? How does code actually get into Postgres? I recently had a patch submitted and committed and I want to share what I learned in that process. I’ll give you an overview of Postgres versions and how the underlying project codebase functions. I’ll also show you the process for submitting a patch and getting that tested and committed.
Beyond the Basics of A/B Tests: Highly Innovative Experimentation Tactics You... (Aggregage)
This webinar will explore cutting-edge, less familiar but powerful experimentation methodologies which address well-known limitations of standard A/B Testing. Designed for data and product leaders, this session aims to inspire the embrace of innovative approaches and provide insights into the frontiers of experimentation!
End-to-end pipeline agility - Berlin Buzzwords 2024 (Lars Albertsson)
We describe how we achieve high change agility in data engineering by eliminating the fear of breaking downstream data pipelines through end-to-end pipeline testing, and by using schema metaprogramming to safely eliminate boilerplate involved in changes that affect whole pipelines.
A quick poll on agility in changing pipelines from end to end indicated a huge span in capabilities. For the question "How long time does it take for all downstream pipelines to be adapted to an upstream change," the median response was 6 months, but some respondents could do it in less than a day. When quantitative data engineering differences between the best and worst are measured, the span is often 100x-1000x, sometimes even more.
A long time ago, we suffered at Spotify from fear of changing pipelines due to not knowing what the impact might be downstream. We made plans for a technical solution to test pipelines end-to-end to mitigate that fear, but the effort failed for cultural reasons. We eventually solved this challenge, but in a different context. In this presentation we will describe how we test full pipelines effectively by manipulating workflow orchestration, which enables us to make changes in pipelines without fear of breaking downstream.
Making schema changes that affect many jobs also involves a lot of toil and boilerplate. Using schema-on-read mitigates some of it, but has drawbacks since it makes it more difficult to detect errors early. We will describe how we have rejected this tradeoff by applying schema metaprogramming, eliminating boilerplate but keeping the protection of static typing, thereby further improving agility to quickly modify data pipelines without fear.
3. The COSMIC method is now frozen at v4.0.2
The method is now mature and stable.
Measurement Practices Committee priorities:
- align Guidelines, Case Studies, Certification exams, etc., with v4.0.2
- improve ‘accessibility’ of the method documentation
Other activities:
- develop training modules that can e.g. be shown on YouTube
- improve the ‘cosmic-sizing’ website for accessibility and proactive use
4. There is great interest in COSMIC size measurement automation
100+ research papers now published.
Automated CFP sizing from:
- Text, using natural language processing, AI, etc. (emerging)
- Designs (in production): in UML (Poland, government contracts); in Matlab Statemate (Renault et al)
- Code (demonstrated to good accuracy): static and dynamic analysis of Java code
Motivations: measurement speed & accuracy; requirements quality control; cost estimation
5. There is great interest in COSMIC in China
Much COSMIC documentation has now been translated into Chinese:
- IWSM 2018 in Beijing devoted to COSMIC
- SIG established by Tencent QQ, with 200 members
- 6+ workshops on COSMIC per year
- Data on 5 projects submitted to ISBSG
- Work to start on COSMIC size automation
6. The big challenge: gaining acceptance in the Agile community
- Agilists don’t like having their performance measured and compared
- ‘Velocity’ measures do not measure velocity
- The ‘No Estimate’ movement
- Yet growing interest in ‘Agile-at-scale’ is a sign of accepting real-world constraints
7. COSMIC, IFPUG and Nesma have collaborated to raise awareness of FSM amongst Agilists
‘We prefer Facts to Stories’ (Managing Agile activities using standardised measures) - IFPUG, May 2018
Contents:
• Benefits of Agile processes
• So what’s wrong with Agile measurement?
• The solution: use standard software functional sizing methods to manage ‘Agile-at-scale’
• Outline description of FSM methods
• When to use FSM methods in Agile
• Estimating in Agile environments
• Software sizing in outsourcing contracts
• Introducing a standard software size measurement method
• Summary: Functional size measures versus Story Points
9. Background
- Organizations experienced in Agile methods are starting to realise the limitations of Story Points
- Very difficult to compare performance and track progress across Teams
- No help for early effort estimation, or for organizational learning
- Reports are now coming in on the use of COSMIC Function Points instead of Story Points
10. Case: Canadian supplier of security and surveillance software systems
- A customer request for new or changed function is called a ‘task’
- Uses Scrum method; iterations last 3 - 6 weeks
- Teams estimate tasks within each iteration in Story Points, and convert directly to effort in work-hours (this is not considered good Agile practice)
- Study* involved measurements on 24 tasks in nine iterations
- Each task estimated in SP, converted to effort
- Actual effort recorded
- Each task also measured in CFP
* ‘Effort Estimation with Story Points and COSMIC Function Points - An Industry Case Study’, Christophe Commeyne, Alain Abran, Rachida Djouab. Software Measurement News, Vol. 21, No. 1, 2016. Obtainable from www.cosmic-sizing.org
11. A best-fit straight line would be a poor predictor of effort from SP sizes
[Scatter plot: Estimated Effort (hours) vs Actual Effort (hours), 24 tasks]
Effort = 0.47 x Story Points + 17.6 hours, R² = 0.33
Notice the wide spread and the 17.6 hours ‘overhead’ for zero Story Points
12. The Effort vs CFP size graph (24 tasks) shows a good fit, but there are two outliers
[Scatter plot: Functional Size in CFP vs Actual Effort (hours)]
Effort = 1.84 x CFP + 6.11 hours, R² = 0.782
The two projects with low effort/CFP were found to involve significant software re-use, so were rejected as outliers
13. Now we have a good effort vs CFP correlation (22 tasks), usable for predicting effort
[Scatter plot: Functional Size in CFP vs Actual Effort (hours)]
Effort = 2.35 x CFP - 0.08 hours, R² = 0.977
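For concreteness, a sketch of how such a calibration line can be fitted and reused for prediction; the CFP/effort pairs below are illustrative, not the study's data:

    import numpy as np

    cfp    = np.array([ 5, 12, 20, 33, 41, 55, 68], dtype=float)
    effort = np.array([11, 28, 47, 78, 96, 130, 160], dtype=float)

    slope, intercept = np.polyfit(cfp, effort, 1)    # least-squares line
    predicted = slope * cfp + intercept
    ss_res = np.sum((effort - predicted) ** 2)
    ss_tot = np.sum((effort - effort.mean()) ** 2)
    r_squared = 1 - ss_res / ss_tot                  # coefficient of determination

    print(f"Effort = {slope:.2f} x CFP + {intercept:.2f}, R^2 = {r_squared:.3f}")
    print("Estimate for a 30 CFP task:", round(slope * 30 + intercept), "hours")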
14. Large Turkish supplier of security software*
- Agile/Scrum method
- Web portal project for one Team (6 developers, 2 testers)
- Ten 3-week Sprints analysed
- Planning meeting for each Sprint estimates Story Points and allocates Stories to Sprints
- CFP sizes measured retrospectively from ‘mature’ documentation in JIRA
- Measurement effort averaged 4.1 hours/Sprint (= 25 CFP/hour)
* ‘Effort estimation for Agile software development: comparative case studies using COSMIC Function Points and Story Points’, Murat Salmanoglu, Tuna Hacaloglu, Onur Demirors. Ankara, Turkey. IWSM/Mensura Conference, Gothenburg, 2017
15. Completed CFP correlate much better with Actual Effort than do Story Points
[Scatter plots: Story Points vs Effort (work-hours); COSMIC Function Points vs Effort (work-hours)]
Case 1: SP vs Actual Effort: y = 3.9322x + 345.31, R² = 0.6648
Case 1: CFP vs Actual Effort: y = 9.0669x - 37.783, R² = 0.8576
CFP vs Actual Effort has a much better R² and a much lower intercept for CFP = 0
16. Large Turkish software organization mainly supplying the telecoms industry*
- 500 developers using Agile approaches
- Study of 10 Change Request ‘Projects’ for one specific development team
- Story Points estimated by experts and converted directly to ‘Predicted effort’
- CFP sizes measured retrospectively from the same ‘not mature’ CR documents plus other information
- Measurement effort averaged 1 day/project (~9 CFP/work-hour)
17. Completed CFP correlate better with Actual Effort than does Predicted Effort (≡ SP)
[Scatter plots: Predicted Effort (WH) vs Actual Effort (WH); COSMIC Function Points vs Actual Effort (WH)]
Case 2: Predicted vs Actual Effort: y = 1.0414x + 50.031, R² = 0.9093
Case 2: CFP vs Actual Effort: y = 5.968x + 3.8385, R² = 0.9528
CFP vs Actual Effort has a better R² and a much lower intercept for CFP = 0
18. Large Turkish software developer, supplying mainly to the finance and banking industry*
- Scrum methodology
- Requirements documentation ‘lacking’
- Story Points are directly converted to estimated effort, but no predicted-effort data was available
- CFP measured retrospectively
- Results shown here are for 6 projects that used the same C# technology
19. Completed CFP correlate much better with Actual Effort than do Story Points
[Scatter plots: Story Points vs Actual Effort (WH); COSMIC Function Points vs Actual Effort (WH)]
Case 3.1: SP vs Actual Effort: y = 5.6693x + 100.75, R² = 0.5647
Case 3.1: CFP vs Actual Effort: y = 2.3693x - 34.877, R² = 0.9264
CFP vs Actual Effort has a much better R² and a much better intercept for CFP = 0
20. A User view of ‘COSMIC for Agile’
“We have found that adopting this approach provides us with excellent predictability and comparability across projects, teams, time and technologies.”
“The reality of achieving predictable project performance has driven me to investigate many methods of prediction. COSMIC is the method that lets me sleep at night.”
Denis Krizanovic, Aon Australia, August 2014
21. Conclusion: CFP sizes correlate very well with effort – much better than Story Points
Correlations of post-calculated CFP sizes with actual effort, versus SP/effort correlations, show:
- higher R-squared (better)
- intercepts for zero CFP much closer to zero effort (more realistic)
See the original papers for other interesting results
22. The productivity figures of the four datasets vary significantly; they should not be compared
‘Product Delivery Rate’ figures of the four datasets vary from 2.35 to 9.1 work-hours/CFP.
The following factors almost certainly influence performance:
- Different levels of decomposition of the software
- Different activities included in effort figures
- Different work mixes (new requirements, change requests)
- Varying requirements documentation quality (and no measures of product quality)
- Varying amounts of functional or code re-use
- Different application domains, technologies, work practices
- (Maybe) different skill levels, hence different real productivity
23. Conclusion: CFP can beneficially replace SP, with no other changes to Agile practices
Story Points:
- In practice, a subjective measure of relative effort
- Meaningful only within a project team
- Poor for estimating total project effort
- No guidance on how to deal with Non-Functional Requirements
COSMIC Function Points:
- An objective, ISO-standard measure of functional size
- Sizes meaningful across projects and teams
- Good for estimating at all levels (US, Sprint, Release, System)
- The method advises how to deal with NFR
Getting Agile teams to accept measurement is the biggest challenge