With increasing pressure to improve quality while cutting costs, process improvement is a top priority for many organizations. But once a process improvement initiative has been implemented, how do we accurately measure its benefits? Benchmarking is critical to determining the success of any serious process improvement program. Like any measurement program, it requires an initial reference point against which to measure progress. To set that point of comparison, we first need to benchmark a contemporary sample of projects representative of our typical work. In this webinar, industry expert Larry Putnam, Jr. takes you through the steps of a successful benchmark - from collecting the quantitative and qualitative data that establish the initial baseline all the way through performing follow-up benchmarks on new projects and analyzing process improvements.
Benchmarking is an ongoing process involving industries from all walks of life and all categories of production.
The principle is that no company is 100% perfect; if you continuously search for better solutions, you will improve your efficiency and may become an exceptional company that later serves as a benchmark for similar companies.
Benchmarking is a legal activity, because it has long been applied in a formal fashion, following strict rules, to all manner of technical and administrative procedures. There are legal authorities on information exchanges, the major area of antitrust concern applicable to benchmarking.
Benchmarking is the process of improving performance by continuously identifying, understanding, and adapting outstanding practices found inside and outside the organization.
Topics: benchmarking, benchmarking concept, reasons to benchmark, 6 general steps to benchmarking, deciding what to benchmark, studying others, learning from the data, using the findings, pitfalls & criticisms, questions for discussion
[Note: This is a partial preview. To download this presentation, visit:
https://www.oeconsulting.com.sg/training-presentations]
Benchmarking is the process of continually searching for the best methods, practices and processes, and either adopting or adapting their good features and implementing them to become the “best of the best.” To become the best-in-class, organizations need to implement the right process to get there.
Based on the world-renowned Xerox Benchmarking Process model pioneered by Robert C. Camp, this presentation deck covers the benefits of benchmarking, the various types of benchmarking, and identifying what to benchmark, and provides detailed step-by-step guidance on how to systematically carry out a benchmarking project. It also includes useful tips on benchmarking, benchmarking etiquette and the critical success factors.
LEARNING OBJECTIVES
1. Gain a broad understanding of the key concepts of benchmarking.
2. Learn how to identify, assess and implement various types of benchmarking projects to meet your organization’s goals based on the Xerox Benchmarking model.
3. Gain awareness of the code of conduct for benchmarking and make preparations to get the most out of a site visit.
4. Define the critical success factors in benchmarking implementation.
5. Kick-start benchmarking projects that are aligned to your company’s strategic goals.
CONTENTS
Introduction to Benchmarking
The Xerox Benchmarking Process
Step 1: What to benchmark?
Step 2: Whom to benchmark?
Step 3: Data collection
Step 4: Determine current performance “gap”
Step 5: Project future performance levels
Step 6: Communicate findings and gain acceptance
Step 7: Establish goals
Step 8: Develop action plans
Step 9: Implement actions and monitor progress
Step 10: Re-calibrate benchmarks
Benchmarking Roles and Responsibilities
Benchmarking Inspection Checklist (Toll-gate Review)
Benchmarking Etiquette
Benchmarking Site Visit
Benchmarking Pitfalls & Success
To download this complete presentation, please visit: http://www.oeconsulting.com.sg
2. (#2)
• Benchmarking Overview
• Data Collection
• What to Measure
• Dataset Demographics
• Organizational Baselining
• Internal Benchmark
• Outlier Analysis
• Opportunities for Process Improvements
• Business Case for Productivity & Quality Improvement
• External Benchmarking
• Selecting Appropriate External Datasets
• Benchmark Comparisons
Agenda
3. (#3)
Benchmarking is the process of comparing one's business processes and performance metrics to industry bests and/or best practices from other industries. Dimensions typically measured are quality, time, and cost. Improvements from learning mean doing things better, faster, and cheaper.
en.wikipedia.org/wiki/Benchmarking
What is Benchmarking?
4. (#4)
Typical Motivations:
• Assess internal and external competitive position
• Support a process assessment (CMMI, ISO, etc.)
• Assess the capability of, and negotiate with, suppliers
• Provide a baseline against which to measure improvements
• Quantify the benefits of improvement
• Support realistic estimation processes
Solutions:
• QSM Benchmarking Services
High Performance Benchmark Service
SLIM-Metrics Product
Performance Benchmark Tables - http://www.qsm.com/resources/performance-benchmark-tables
Why Do Organizations Benchmark?
5. (#5)
• Size and Scope
Abstraction of measurements changes depending on where you are in the lifecycle
Tells you the overall size of the system
Tells how much of the product is completed at different points in time
• Schedule
Measured in calendar time: days, weeks, months or years
Overall duration of the project
Tells how much of the overall schedule has been consumed and identifies key task/milestone completion dates
• Effort
Measured in labor units: person days, person weeks, person months or person years
Proportional to the overall cost
Tells how labor/money is being spent
• Defects
Deviations from specification, usually measured by severity level
Tells you how many defects you are likely to find and when
Tells how stable the product is at different points in time and helps project when the product will be reliable enough to deploy
Best Practices Core Quantitative Metrics
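As an illustration of how these four core metrics might be captured per project, here is a minimal Python sketch; the record layout, field names and units are assumptions for demonstration, not a QSM schema.

```python
# Hypothetical sketch: one record for the four core metrics the slide lists.
# Field names and units are illustrative assumptions, not a QSM schema.
from dataclasses import dataclass

@dataclass
class ProjectMetrics:
    name: str
    size_esloc: int          # size/scope: effective source lines of code
    schedule_months: float   # overall duration in calendar months
    effort_pm: float         # effort in person-months (proportional to cost)
    defects: int             # defects found, ideally broken out by severity

    @property
    def avg_staff(self) -> float:
        """Average full-time-equivalent staff implied by effort over schedule."""
        return self.effort_pm / self.schedule_months

sample = ProjectMetrics("Project 48", size_esloc=52_000,
                        schedule_months=14.0, effort_pm=210.0, defects=340)
print(f"{sample.name}: ~{sample.avg_staff:.1f} FTE average staff")
```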
6. (#6)
• Design projects
Typically have a “what, how, do” lifecycle
Involve human problem solving and iterative rework cycles
• Design project common characteristics:
They take time to complete (calendar months)
They consume labor over the elapsed time schedule (labor hours)
They produce defects of different severity
They deliver some size/scope
– This is the metric that needs to be tailored to the design process
Characteristics Common to Design Projects
7. (#7)
What is “Size”?
• A proxy for the value and knowledge content of the delivered system: what the system is worth
• Size can be tailored to the process or development technology being used:
Front end: Unit of Need, based on characteristics of the statement of needs
• Requirements
• Function Points/Object Points
• IO Counts
• States/Events/Actions
• Use Cases
• Stories/Story Points
• Objects/Classes
• Components
• Design Pages
• Web Pages
Back end: Unit of Work, based on the characteristics of the system when built
• Lines of Code
• Statements
• Actions
• Modules
• Subsystems
• GUI Components
• Logic Components
• Logic Gates
• Tables
• Business Process Configurations
9. (#9)
Typical Productivity Metrics:
• Ratio-based measures
Size/Effort
– LOC/PM, FP/PM
• They are not sufficient to explain software development’s non-linear behavior
As size increases
As staffing levels change
• Solution
QSM’s Productivity Index
– Takes schedule, effort & size into account
– Also ties in quality/reliability
We Need Comprehensive Productivity Metrics
[Chart: QSM Mixed Application Database, Person Months vs. Effective Source Lines of Code on log-log scales]
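To see why a bare size/effort ratio misleads, here is a short Python sketch; the project numbers are invented for illustration only.

```python
# Minimal sketch of ratio-based productivity (size/effort), illustrating the
# non-linear behavior the slide warns about: bigger systems typically show a
# lower LOC/PM ratio, so the raw ratio alone misleads.
# The numbers below are made up for illustration.
projects = [
    ("small", 10_000, 25.0),     # (label, ESLOC, person-months)
    ("large", 200_000, 1_100.0),
]
for label, esloc, pm in projects:
    print(f"{label}: {esloc / pm:,.0f} ESLOC per person-month")
# The large project looks "less productive" by the ratio even if the team is
# equally capable, which is why QSM's Productivity Index also folds in
# schedule and size rather than relying on size/effort alone.
```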
10. (#10)
Production Equation
Delivered System Size is proportional to Effort expended over Time at some Productivity, where:
- Size is a measure of the value delivered
- Effort is a measure of the resources expended
- Time is a measure of the duration required
- Productivity is a measure of capability and the difficulty of the task
11. (#11)
Determining Development Team Efficiency
Developer Efficiency (PI) = Functionality Developed / (Effort × Time)
A higher value means less time and effort.
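As a rough illustration of the slide's ratio (this is not QSM's calibrated Productivity Index), here is a hedged Python sketch with invented numbers:

```python
# Conceptual sketch of the slide's efficiency ratio. This is NOT QSM's
# calibrated Productivity Index; it just captures the relationship named on
# the slide: efficiency rises when the same functionality ships with less
# effort and less time. All values are invented for illustration.
def efficiency(size_esloc: float, effort_pm: float, time_months: float) -> float:
    return size_esloc / (effort_pm * time_months)

before = efficiency(50_000, effort_pm=300.0, time_months=15.0)  # baseline project
after = efficiency(50_000, effort_pm=220.0, time_months=12.0)   # post-improvement
print(f"relative efficiency gain: {after / before:.2f}x")
```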
12. (#12)
Productivity Index (PI) - industry values by application type
[Chart: Productivity Index (PI) with ±1 standard deviation, on a 0-24 scale, for application types: Avionics, Business/Infrastructure, Command and Control, Firmware (ASIC/FPGA), Process Control, Real Time, Scientific, System, Telecommunications. Legend: Information Engineering, Real Time]
14. (#14)
Objectives:
• Identify individuals who will participate
Coordinate benchmark analysis
Identify data sources
• Explain data requirements & definitions
• Establish schedule and deliverables
• Brief out benchmark process and objectives
Overview of QSM benchmarking process
Outline benchmark objectives
– Provides a “stake in the ground” for future comparison
– Focused on positive organizational improvement
– Provides a quantitative basis for realistic future estimates
Good Start Kick-off Meeting
15. (#15)
Data Collection and Validation
[Flow diagram spanning roughly two weeks: Data Collection Form → Interviews Scheduled → Data Validation Interview (2 hours) → Action Items & Updates → Data Sign-off Request → OK → Analysis Begins; roles involved: Benchmark Consulting Team and Coordinator]
17. (#17)
• Step 1: Reconstruct a time-series resource profile of all projects being assessed. Assess phase allocations of time, effort and overlap percentages.
• Step 2: Curve-fit trend lines from the data to establish the average behavior for schedule, effort, staffing, reliability and statistical variation within the project set.
• Step 3: Identify outliers on the high and low ends of the performance spectrum.
• Step 4: Evaluate high and low performers in detail to understand the significant factors influencing productivity and quality performance.
• Step 5: Formulate recommendations for improvement.
• Step 6: Develop the business case for improvement.
Method of Analysis
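A minimal sketch of how Steps 2 and 3 could be approximated, assuming invented project data and a single log-log regression (QSM's actual trend-fitting methodology is richer than this):

```python
# Sketch of steps 2-3 under stated assumptions: fit a log-log trend of effort
# vs. size across the project set, then flag projects whose residual is more
# than one standard deviation from the trend as high/low performers.
# Data values are invented for illustration.
import numpy as np

esloc = np.array([5_000, 12_000, 30_000, 60_000, 150_000, 300_000])
effort_pm = np.array([18, 35, 140, 190, 900, 1_500])

x, y = np.log10(esloc), np.log10(effort_pm)
slope, intercept = np.polyfit(x, y, 1)     # average behavior (trend line)
residuals = y - (slope * x + intercept)
sd = residuals.std(ddof=1)                 # statistical variation in the set

for i, r in enumerate(residuals):
    if r > sd:
        print(f"project {i}: low performer (effort well above trend)")
    elif r < -sd:
        print(f"project {i}: high performer (effort well below trend)")
```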
20. (#20)
Complete Project Curve Fit Reference Trend Lines
Schedule, effort and staffing all increase as the size of the delivered product increases. Reliability at delivery decreases as the size gets bigger.
Take away: keep the new and modified functionality as small as possible on new projects to contain cost and schedule and deliver high quality.
22. (#22)
Identification of High and Low Performing Projects
[Chart annotations: Low Performing Projects, High Performing Projects]
The Internal Comparison Report identifies each project’s performance relative to the baseline trend used in the comparison - in this case the internal baseline. This technique makes it easy to identify the best and the worst performing projects. The projects identified in yellow shading are the top performers in this group. A more in-depth analysis of the high and low end performers should identify the key drivers of productivity and quality.
Internal Comparison Report
24. (#24)
• Further analysis shows that the following factors are the most significant drivers of productivity and quality for XYZ Corporation:
Schedule Parallelism/Task Concurrency
Extreme Staffing Levels
Volume of Requirements Change
• Each factor is evaluated independently, and a prescription for how to improve is provided.
Most Significant Factors Affecting Productivity & Quality
26. (#26)
In reconstructing the schedule and staffing profiles we noticed that some projects had extreme amounts of schedule overlap between the design, construction and test phases.
Observed Behavior
Example Project 48, staffing and schedule reconstruction: notice the 100% overlap between design & construction. Also notice the long time required to stabilize the product in the perfective maintenance phase.
27. (#27)
Why is this Process Problematic?
Project 48 (Low Productivity & Quality): Total parallelism in design and construction results in more defects being introduced into the system, because the team is building from an unspecified design that is likely to change significantly. This causes more rework cycles, resulting in low productivity and quality.
Project 47 (High Productivity & Quality): A normal degree of overlap reduces the rework because the team is building from a complete and well-thought-out design. This leads to optimal productivity, schedule compression, and reliability.
28. (#28)
Root Cause
• The US center is constantly battling unrealistic schedule expectations. It has no effective way to negotiate with its customers or escalate the issue to a higher-level management forum for mediation and resolution.
• The Indian center enjoys a significant labor cost advantage. Because their labor rates are so inexpensive, they are using large teams and phase parallelism to try to gain a competitive advantage in schedule performance.
Recommendation
• Use baseline trends from this assessment to graphically demonstrate schedule feasibility. Consider setting up a board to reconcile customer expectations with realistic development capabilities.
• Use the one-standard-deviation trend line for average staffing as a hard constraint for all new projects. Staffing levels above this provide no schedule benefit and are detrimental to cost and reliability.
• Constrain projects to no more than 50% phase overlap as a practical limit to schedule compression that will not impact productivity and quality.
Schedule Parallelism
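To illustrate the 50% phase-overlap guardrail in the recommendation above, here is a hypothetical Python sketch; the date-based overlap calculation and the example dates are assumptions for demonstration, not a QSM-defined formula.

```python
from datetime import date

# Illustrative sketch: how much of the design phase is still running when
# construction begins, expressed as a percentage of the design schedule.
# Dates and the way the threshold is applied are assumptions.
def overlap_pct(design_start: date, design_end: date, constr_start: date) -> float:
    """Share of the design phase that overlaps with construction."""
    design_days = (design_end - design_start).days
    overlap_days = max(0, (design_end - constr_start).days)
    return 100.0 * overlap_days / design_days

pct = overlap_pct(date(2024, 1, 1), date(2024, 5, 1), date(2024, 2, 15))
print(f"design/construction overlap: {pct:.0f}%")
if pct > 50:
    print("exceeds the 50% practical limit; expect productivity/quality impact")
```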
30. (#30)
• After reconstructing all of the projects, we noticed some extreme staffing levels which dramatically impacted schedule, productivity and quality.
On the high end, reaching over 100 FTE staff
On the low end, using less than ½ of an FTE
Observed Behavior
[Chart: staffing profile for each of the projects analyzed in this study]
31. (#31)
• For projects with extremely large teams, it has been well documented that:
At some point, large teams fail to produce any additional schedule reductions.
Large teams have more human communication problems, which result in a large volume of defects, all of which must be eliminated before release.
Large teams have large cash flows, which cause massive overruns if they do not meet their schedule dates.
• For projects with part-time staffing:
You lose productivity and continuity as a person juggles two or more projects at one time.
Schedules are stretched out, which often results in a higher probability of requirements changes.
Why is Extreme Staffing Problematic?
32. (#32)
Data Stratified by Staffing Styles
The black circle is a project that used greater than 100 people. Notice that it took longer to develop, was more expensive, had lower quality at delivery and had lower productivity.
The red circles are the projects that used part-time effort. Notice that they took longer to develop and had lower productivity, but had better quality and were lower than average cost.
33. (#33)
Root cause:
• The US development center has some of its staff working several projects at any given time. This practice is abnormal and is a direct result of cost cutting in a recession. The only real penalty for this approach is on schedule performance.
• The Indian development center had a single project that used greater than 100 people. The decision to staff at this level was primarily driven by the sales operation. Market intelligence discovered that the customer’s budget would support this funding level on a time-and-materials contract, so it was staffed accordingly.
Recommendation:
• Re-evaluate resource constraints when economic conditions permit. 1-2 FTE staff will provide better schedule performance and still be cost effective and produce products with high reliability.
• Use a more comprehensive definition of a successful project outcome. Even though the initial engagement may have been profitable, the extreme staffing approach may not be the best strategy for the long-term relationship if the schedule and/or reliability performance are poor and the customer is unhappy.
Extreme Staffing
35. (#35)
• Requirements change during a project varied from 4% on the low end up to a maximum of 64%.
• The data showed that as the amount of requirements change increases, the following behaviors are observed:
The schedule increases
The effort/cost increases
The reliability at delivery decreases
The productivity of the project decreases
Observed Behavior
36. (#36)
Observed Behavior
As requirements change increases, the schedule gets longer, the project effort/cost increases, the reliability at delivery decreases and the overall productivity decreases.
37. (#37)
Root Cause: Both development centers have problems with requirements change. There is a documented requirements change process in place; some of the teams simply don’t use the process.
Recommendation: Enforce the existing requirements management process. Establish consequences if the process is not followed.
Requirement Change Analysis
42. (#42)
Benefit of Implementing Recommended Changes: $7.2 Million
There are significant benefits to be realized by implementing the changes identified in this report. This portfolio of projects could have been completed 3.5 months sooner with a cost saving of $7.2 million.
44. (#44)
• XYZ had trend lines established in the last analysis, with an average completion date of 2006.
• In the year-over-year analysis we compare the 2006 baseline trends with the 2007 baseline trends for schedule, effort and quality (Mean Time to Defect) to see what improvement, if any, has been realized.
Year over Year Assessment
47. (#47)
Year over Year Assessment: MTTD
Comparison of XYZ 2007 Projects to XYZ 2006: Effective SLOC vs. MTTD 1st Month (Days)
Benchmark reference group: XYZ 2006. Comparison data set: XYZ 2007 Projects.
MTTD 1st Month (Days) values and difference from benchmark:
                     Effective SLOC   XYZ 2006   XYZ 2007   Difference
at Min                        4,836      24.29      16.11        -8.18
at 25% Quartile               9,134      14.46      10.32        -4.14
at Median                    21,999       7.06       5.58        -1.49
at 75% Quartile             119,851       1.77       1.70        -0.07
at Max                      342,258       0.75       0.82        +0.06
Comparison breakpoints are based on the min, max, median and quartile values for the data set XYZ 2007 Projects.
[Chart: MTTD 1st Month (Days) vs. Effective SLOC (thousands), log-log scale, showing XYZ 2007 Projects against the XYZ 2006 average line]
When the 2007 baseline is compared to 2006, there is an across-the-board reduction in reliability, most notable on the smaller sized projects.
Year over Year Performance: Schedule
The black line is the baseline established from last year’s consortium report; the time frame for that assessment is 2006. The blue line and the squares are the projects from this year’s assessment, baselined at an average completion date of 2007.
The 2007 baseline is parallel to, and shifted lower than, the 2006 baseline, indicating a reduction in reliability at delivery across the entire size regime of projects in the portfolio.
The quality drop-off is most significant on the very small projects, losing anywhere from 2 to 8 days. On the large systems the difference is very small.
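The table's difference column can be reproduced with simple arithmetic; here is a short Python sketch using the values above (only the numbers come from the slide, the code structure is an assumption):

```python
# Sketch reproducing the table's arithmetic: evaluate each year's MTTD at the
# comparison data set's size breakpoints and report the difference from the
# benchmark. MTTD values (days) are copied from the table above.
breakpoints = {          # Effective SLOC -> (XYZ 2006 benchmark, XYZ 2007)
    4_836: (24.29, 16.11),
    9_134: (14.46, 10.32),
    21_999: (7.06, 5.58),
    119_851: (1.77, 1.70),
    342_258: (0.75, 0.82),
}
for esloc, (benchmark, current) in breakpoints.items():
    diff = current - benchmark
    trend = "worse" if diff < 0 else "better"   # lower MTTD = less reliable
    print(f"{esloc:>8,} ESLOC: {diff:+.2f} days ({trend})")
```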
48. (#48)
• XYZ Corporation has realized cost and schedule improvements across the board for all sizes of projects.
The most significant cost savings have been on the large projects.
• XYZ Corporation has not realized any gains in quality at delivery.
Small projects have the worst performance compared to the 2006 baseline.
– One extreme outlier with poor quality is impacting the results.
Large projects have essentially the same reliability as 2006.
Year over Year Analysis: Observations
50. (#50)
• Step 1: Select an appropriate benchmark comparison sample to be used in the comparison:
Completion dates from the same time frame
Same industry sector
Same size regime (SLOC, staffing, schedules)
• Step 2: Compare the positioning of XYZ Corporation’s projects relative to benchmark data & trend lines.
• Step 3: Produce Average Difference Comparison Reports.
• Step 4: Produce the 5 Star Rating Report.
• Step 5: Provide observations & recommendations.
Method of Analysis
51. (#51)
• The following criteria were used in selecting the projects to be included in this benchmark comparison:
The project completion date is between 2006 and 2008.
The project must be an IT project developed for the financial services industry.
New and modified SLOC must be greater than 3,000 and less than 500,000.
Average staffing must be greater than ½ a person and less than 150.
• This resulted in a comparison sample of 55 projects.
Financial Sector Benchmark Criteria
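Expressed as code, the selection criteria amount to a simple filter. This hypothetical Python sketch assumes an illustrative record layout; the thresholds come from the slide, everything else is an assumption:

```python
# Sketch of the benchmark sample selection criteria as a filter over a
# hypothetical list of candidate project records; field names are illustrative.
candidates = [
    {"id": 1, "completed": 2007, "sector": "financial services IT",
     "new_mod_sloc": 42_000, "avg_staff": 8.5},
    {"id": 2, "completed": 2005, "sector": "financial services IT",
     "new_mod_sloc": 12_000, "avg_staff": 3.0},   # excluded: completed too early
]

def qualifies(p: dict) -> bool:
    return (2006 <= p["completed"] <= 2008
            and p["sector"] == "financial services IT"
            and 3_000 < p["new_mod_sloc"] < 500_000
            and 0.5 < p["avg_staff"] < 150)

sample = [p for p in candidates if qualifies(p)]
print(f"{len(sample)} project(s) qualify for the comparison sample")
```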
53. (#53)
Requirements & Design Phase: Average Schedule Comparison
XYZ Corporation requires less time on average to complete the Requirements and Design phase. On the small projects the time difference is 2 weeks to 1 month; on the large projects the time difference is 5 to 8.5 months.
54. (#54)
Requirements & Design Phase: Average Effort Comparison
XYZ Corporation requires less effort on average to complete the Requirements and Design phase, except for the very large projects. On the small projects the effort savings is 1-3 person months; on the large projects the effort is higher by 25 person months.
55. (#55)
Construction & Test Phase: Average Schedule Comparison
XYZ’s schedule performance is from 2 weeks to 6 months better than the benchmark during this phase.
56. (#56)
Construction & Test Phase: Average Effort Comparison
XYZ’s effort performance varies from 17 person months better than the benchmark during this phase to 420 person months worse than the benchmark on the largest projects.
57. (#57)
Mean Time To Defect at Initial Delivery: Average Comparison
MTTD at delivery is essentially the same for XYZ and the benchmark.
58. (#58)
5 Star Assessment Comparison Report
Rating key: 1 star = bottom 20%; 2 stars = bottom 20% to 45%; 3 stars = bottom 45% to top 30%; 4 stars = top 30% to top 10%; 5 stars = top 10%.
The composite rating for XYZ’s portfolio is 3 stars.
- 8 of the 14 projects achieved a 4 or 5 star rating.
- To bring the overall portfolio up to a 4 star rating, the 2 star projects need to improve to a 3 star level.
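The star bands in the key above map directly to percentile ranges. Here is a small Python sketch of that mapping; only the band edges come from the slide, the function itself is an assumption:

```python
# Sketch of the slide's star bands as a percentile mapping, where `pct` is a
# project's performance percentile against the benchmark (100 = best).
def star_rating(pct: float) -> int:
    if pct < 20:  return 1   # bottom 20%
    if pct < 45:  return 2   # bottom 20% to 45%
    if pct < 70:  return 3   # bottom 45% to top 30%
    if pct < 90:  return 4   # top 30% to top 10%
    return 5                 # top 10%

for p in (10, 40, 60, 85, 95):
    print(f"{p}th percentile -> {star_rating(p)} star(s)")
```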
59. (#59)
• In general, XYZ Corporation spends less time and effort on the small and medium-sized projects. However, the large projects use more people and are more expensive.
• 10 of the 14 projects perform at or above the industry benchmark performance level.
• 8 of 14 projects are exceptional performers, having achieved a 4 or 5 star composite rating.
• Four projects have a 2 star composite rating and are the recommended targets for the improvements identified in the previous section of this report.
• The 3 largest XYZ projects expended more effort, had larger staffs and found significantly more defects prior to release.
External Benchmark Observations
60. (#60)
• QSM Resources
- Performance Benchmark Tables
- Past Webinar Recordings
- Articles and White Papers
• www.qsm.com/blog
• Follow us on Twitter at @QSM_SLIM
Larry Putnam, Jr.
info@qsm.com
800-424-6755
Additional Resources