CMMI High Maturity Best Practices HMBP 2010: Demystifying High Maturity Implementation Using Statistical Tools & Techniques
by Sreenivasa M. Gangadhara, Ajay Simha, and Archana V. Kumar
(Honeywell Technology Solutions Lab)

Presented at the 1st International Colloquium on CMMI High Maturity Best Practices, held on May 21, 2010, organized by QAI.
CMMI High Maturity Best Practices HMBP 2010: Deploying High Maturity Practice... (QAI)
The document discusses deploying high maturity practices globally. It addresses the challenges of global deployment as well as the essential elements needed for robust deployment based on experience. The presentation covers challenges to high maturity, the intent of high maturity practices, foundational elements for success, and deploying high maturity practices globally. It emphasizes training, roles, and developing high maturity capability across locations.
CMMI - High Maturity Misconceptions and Pitfalls (Rajesh Naik)
This document discusses high maturity process implementation and common pitfalls. It begins by outlining the agenda, which includes process performance models, sub-process control, managing process improvements, and typical misconceptions and pitfalls. It then discusses how process performance models are complex because reality is complex, and outlines simplifications commonly made. It also notes that outcomes of complex processes are difficult to intuitively predict. The document concludes by identifying common issues seen in implementing high maturity practices and what should be seen in future high maturity implementations to address these issues.
This document provides an overview of an approach for right sizing design review plans for projects and programs. It discusses establishing a multi-tiered review approach including technical and peer reviews of lower-level design products, component design reviews, subsystem design reviews, and system-level reviews. It emphasizes the importance of planning the review approach, defining objectives and participation for each review level, and using lessons learned to improve efficiency while maintaining thoroughness.
This document summarizes a presentation about systems engineering processes for principal investigator (PI) mode missions. It discusses how PI missions face special challenges due to cost caps and lower technology readiness levels. It then outlines various systems engineering techniques used for PI missions, including safety compliance, organizational communication, design tools, requirements management, and lessons learned from past missions. Specific case studies from NASA's Explorers Program Office are provided as examples.
This document provides information about creating a cause and effect (XY) matrix for process improvement. It discusses the steps to create an XY matrix, including identifying key customer requirements and process inputs, rating their importance and relationship, and calculating scores to determine which inputs have the largest impact on outputs. An example is provided about using an XY matrix to identify which factors most affect customer satisfaction with coffee at an all ranks club.
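The scoring arithmetic behind an XY matrix can be sketched in a few lines of Python. The coffee-shop outputs, inputs, and ratings below are invented for illustration (the source's example supplies only the scenario, not the numbers):

```python
# Hypothetical customer requirements (outputs) with importance weights 1-10.
outputs = {"taste": 9, "temperature": 7, "speed of service": 5}

# Relationship ratings (0, 1, 3, 9) between each process input and each output.
inputs = {
    "bean quality":   {"taste": 9, "temperature": 0, "speed of service": 0},
    "brew time":      {"taste": 3, "temperature": 1, "speed of service": 3},
    "water temp":     {"taste": 3, "temperature": 9, "speed of service": 0},
    "staffing level": {"taste": 0, "temperature": 1, "speed of service": 9},
}

# Score each input: sum of (importance weight x relationship rating).
scores = {
    name: sum(outputs[o] * rating for o, rating in rels.items())
    for name, rels in inputs.items()
}

# The highest-scoring inputs are the ones to focus improvement effort on.
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score}")
```

With these invented ratings, water temperature and bean quality dominate the scores, so they would be the first inputs to control.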
This document provides an overview of NASA's software engineering benchmarking effort from February 2012. It discusses the background and motivation for the benchmarking, the organizations that were benchmarked, and the benchmarking team. It then summarizes some of the key learnings around training, testing, acquisition, small projects/processes, and the Capability Maturity Model Integration (CMMI). The document finds that mentoring is important for training, testing practices vary, acquisition of software requirements can be improved, tailoring is needed for small projects, and that CMMI adoption provides benefits like improved cost estimation and manageability.
This document summarizes efforts by NASA's Dryden Flight Research Center to change its project execution culture through implementing Critical Chain Project Management tools and philosophies. The Dryden Center has around 550 civil service employees and 600 contractors working on 40+ active projects of varying sizes from multiple customers. Previously, the culture was characterized by unclear priorities, budget-driven staffing, poorly defined work, projects often in delay, and severe multi-tasking leading to delays, overruns, and workforce burnout. The desired new culture focuses on reducing multi-tasking and stress, improving on-time performance, and allowing more time for training through implementing techniques like staggered milestones, identifying and resolving issues quickly, and using buffers to set priorities.
The document provides information about selecting solutions for process improvement projects. It discusses an 8-step problem solving process and lists tools that can be used, including brainstorming, process mapping, and selection matrices. The objectives are to understand idea generation principles, apply brainstorming tools, and use methods to select improvement ideas. Sources of solutions are identified, such as root causes, best practices, and past projects. Guidelines are given for generating many ideas through techniques like brainstorming and building on others' suggestions. Rules for effective brainstorming include allowing ideas without criticism and focusing on quantity over quality initially.
Estimation and planning processes are critical for project success. Poor estimates can lead to cost overruns, schedule delays, and project failures. There are various estimation methods, each with advantages and limitations. Initial estimates are often too optimistic due to cognitive biases, pressure to win contracts, and lack of understanding of complexity and risk. Accurate and realistic estimates require a repeatable process using historical data and parametric modeling to avoid common challenges like underestimating requirements and resources.
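A minimal sketch of the parametric-modeling idea: fit a power-law effort model, effort = a * size^b, to historical project data by log-log least squares, then estimate new projects from it. The sample projects and the model form are illustrative assumptions, not figures from the document:

```python
import math

# Hypothetical historical data: (size in KLOC, effort in person-months).
history = [(10, 26), (25, 70), (50, 145), (100, 320)]

# Fit log(effort) = log(a) + b*log(size) by ordinary least squares.
xs = [math.log(s) for s, _ in history]
ys = [math.log(e) for _, e in history]
n = len(history)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
     / sum((x - xbar) ** 2 for x in xs))
a = math.exp(ybar - b * xbar)

def estimate(size_kloc):
    """Predicted effort in person-months for a new project."""
    return a * size_kloc ** b

print(f"a={a:.2f}, b={b:.2f}, estimate(40 KLOC)={estimate(40):.0f} PM")
```

Because the coefficients come from the organization's own history rather than from judgment under bid pressure, estimates produced this way are repeatable and auditable.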
This document provides templates and requirements for a Define Tollgate briefing for a project using the Black Belt methodology. It includes templates for listing the project charter and timeline, cross functional team, replication check, strategic alignment, business impact, high-level process map, and voices of the customer and business. The templates require inputs such as the problem statement, goal statement, project scope, timeline, sponsor, team members, potential similar past projects, related organizational metrics, operational and financial benefits estimates, key suppliers, inputs, processes, and outputs.
This document outlines the 8-step process and tollgate requirements for the Control phase of a National Guard Black Belt training module on continuous process improvement. The 8-step process includes validating problems, identifying performance gaps, setting improvement targets, determining root causes, developing countermeasures, seeing results through key performance indicators, confirming results, and standardizing successful processes. Tollgate requirements for the Control phase mandate updating benefits, standardizing processes, establishing process owner accountability, achieving results, implementing control plans, and creating a storyboard summary.
The document discusses roles and responsibilities in continuous process improvement (CPI). It describes the CPI deployment director as owning the deployment plan and communication plan. Project sponsors are responsible for the project charter and removing barriers. Process owners implement process changes. Black belts and green belts lead CPI projects under a master black belt. A DACI chart defines roles as drivers, approvers, contributors, and informers. CPI uses tollgates to approve project definitions, measures, analyses, improvements and controls.
Project control and monitoring: tools (ProColombia)
The document discusses metrics for project management. It recommends creating a small set of key metrics focused on cost, schedule, quality and user satisfaction that provide high-level information for management. Additionally, it suggests defining operational metrics to identify specific issues and risks. The document provides guidance on establishing a metrics program, including keeping the metrics and collection cycle simple and cost-effective to support better decision making.
The document discusses process measurement and improvement techniques. It introduces an 8-step process for measuring performance, identifying issues, and improving processes. Key tools for measurement include process mapping, data collection plans, statistical analysis methods like measures of central tendency, control charts and process capability analysis. Learning objectives focus on understanding the importance of measurement in process improvement and applying statistical process control methods to understand common and special cause variation.
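The control-chart idea mentioned above can be illustrated with an individuals chart: estimate sigma from the average moving range and flag points beyond mean ± 3σ as special-cause signals. The measurements below are made up for the sketch:

```python
import statistics

# Hypothetical process measurements (e.g. cycle time in days).
data = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3, 10.0, 9.7, 10.4, 10.1]

mean = statistics.mean(data)
# Average moving range between consecutive points; d2 = 1.128 for n = 2.
mr_bar = statistics.mean(abs(b - a) for a, b in zip(data, data[1:]))
sigma_hat = mr_bar / 1.128

ucl = mean + 3 * sigma_hat  # upper control limit
lcl = mean - 3 * sigma_hat  # lower control limit

# Points outside the limits signal special-cause variation; points inside
# reflect common-cause variation and should not trigger process tampering.
out_of_control = [x for x in data if x > ucl or x < lcl]
print(f"mean={mean:.2f}, UCL={ucl:.2f}, LCL={lcl:.2f}, signals={out_of_control}")
```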
1. Define clear program needs, objectives, and constraints up front, including safety, to guide subsequent work.
2. Organize the program with a safety focus, clear management structure, and responsibilities.
3. Specify safety and reliability through fault tolerance, failure probability bounds, and proven practices/standards.
NG BB 53 Process Control [Compatibility Mode] (Leanleaders.org)
This document provides an overview of process control concepts and tools. It discusses an 8-step process for process improvement that includes control. Control plans are important to ensure improved processes remain stable. Measurement systems should be analyzed and process capability recalculated during control. Cultural issues can impact control and force field analysis can identify drivers and restraints. Standard operating procedures, control charts, and mistake proofing are discussed as control mechanisms.
This document provides templates and requirements for a Black Belt measure tollgate briefing for a process improvement project. It includes templates for:
- Project charter and timeline
- Detailed "as-is" process map
- Value stream map
- Key input, process, and output metrics
- Operational definitions
The templates require deliverables such as a current state process map, key metrics, data collection plan, baseline statistics, process capability analysis, and estimated benefits. The tollgate ensures projects are properly defined and measured before beginning improvement activities.
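The baseline statistics and process capability analysis required at this tollgate reduce to a small calculation, commonly expressed as the Cp and Cpk indices. The specification limits and sample data here are illustrative assumptions:

```python
import statistics

# Hypothetical specification limits and baseline measurements.
LSL, USL = 9.0, 11.0  # lower / upper specification limits
sample = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3, 10.0, 9.7, 10.4, 10.1]

mu = statistics.mean(sample)
sigma = statistics.stdev(sample)  # sample standard deviation

cp = (USL - LSL) / (6 * sigma)               # potential capability
cpk = min(USL - mu, mu - LSL) / (3 * sigma)  # penalizes off-center processes

print(f"Cp={cp:.2f}, Cpk={cpk:.2f}")  # Cpk >= 1.33 is a common target
```

Cp measures how wide the spec window is relative to process spread; Cpk additionally accounts for how well the process is centered between the limits.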
This document discusses tools and methods for assessing risk in projects. It introduces risk assessment and explains that risk management proactively identifies, assesses, and mitigates risks throughout a project. Several tools are described for assessing risk, including a risk standards matrix, risk identification matrix, and controls assessment matrix. The risk standards matrix prompts consideration of how a project may impact various areas. The risk identification matrix involves brainstorming risks, prioritizing their potential impact and likelihood, and focusing on high impact/likelihood risks. The controls assessment matrix identifies controls to mitigate high priority risks and ensures controls are sufficient.
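The impact × likelihood prioritization behind the risk identification matrix can be sketched directly; the risks and their 1-5 ratings below are invented examples:

```python
# Hypothetical brainstormed risks: (description, impact 1-5, likelihood 1-5).
risks = [
    ("key supplier slips delivery", 4, 3),
    ("requirements change late",    5, 4),
    ("test rig unavailable",        3, 2),
    ("staff turnover",              2, 4),
]

# Score = impact x likelihood; sort descending so high-priority risks lead.
prioritized = sorted(
    ((name, impact * likelihood) for name, impact, likelihood in risks),
    key=lambda kv: -kv[1],
)

# A common convention: treat scores in the top band as requiring controls.
for name, score in prioritized:
    flag = "HIGH" if score >= 12 else "watch"
    print(f"{score:>2}  {flag:<5} {name}")
```

The high-scoring risks are then carried into the controls assessment matrix to check that mitigations exist and are sufficient.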
This document discusses sustaining process improvements through project closeout and transitioning to process owners. It outlines the timeline for project closeout, including transitioning to the final process owner at a commissioning meeting and subsequent review meetings. Maintaining improvements requires executing process management, with elements like process maps, monitoring, and response plans. Process owners must institutionalize changes through cultural shifts and updated systems to drive permanent behavior changes.
Project execution: problem management (ProColombia)
The document discusses change management, software configuration management, and software defects. It provides information on:
1. Elements of a change management process including submitting change requests, reviewing requests, identifying feasibility, approving changes, and implementing changes.
2. The purpose of configuration management to establish and maintain software integrity throughout a project. It lists elements like configuration identification, change control, and status accounting.
3. Software defects are often caused by poor configuration management when the wrong code versions are used. Timely defect discovery and correction can reduce costs.
This document provides an overview of project chartering for continuous process improvement (CPI) projects. It discusses selecting CPI projects, developing a project charter, and who is responsible for chartering a project. The project charter defines the team's mission and includes the opportunity/problem statement, business case, goal statement, project scope, timeline, and team selection. It is a living document that may change over time. Developing an effective charter involves scoping the project based on the identified problem and determining proportional benefits, measurements, and boundaries.
This document discusses the use of S-curves in cost estimating. S-curves graphically represent the probability distribution of total project costs based on statistical modeling of individual work breakdown structure (WBS) element costs. The document explains that individual WBS element costs are modeled as probability distributions rather than single point estimates due to the presence of risk and uncertainty. It also discusses how the central limit theorem can be applied to statistically sum the costs of multiple WBS elements to determine the overall probability distribution of total project costs, which typically takes the form of a normal or lognormal distribution. The resulting S-curve shows the probability of avoiding cost overruns for different potential budget levels.
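The statistical summation the document describes can also be approximated by Monte Carlo simulation rather than an analytic central-limit-theorem sum: draw each WBS element's cost from a distribution, add the draws, and read confidence levels off the resulting empirical S-curve (the CDF of total cost). The triangular distributions and dollar figures below are hypothetical:

```python
import random

random.seed(1)  # reproducible draws for the sketch

wbs = {  # element: (low, most likely, high) cost in $M
    "structure": (4.0, 5.0, 7.5),
    "avionics":  (6.0, 8.0, 12.0),
    "software":  (3.0, 4.5, 8.0),
    "testing":   (2.0, 3.0, 5.0),
}

# Sum one draw per WBS element, many times, to build the total-cost sample.
totals = sorted(
    sum(random.triangular(lo, hi, mode) for lo, mode, hi in wbs.values())
    for _ in range(10_000)
)

def confidence_level(budget):
    """S-curve value: probability that total cost stays within the budget."""
    return sum(1 for t in totals if t <= budget) / len(totals)

for budget in (20, 22, 24, 26):
    print(f"${budget}M budget -> {confidence_level(budget):.0%} confidence")
```

Reading the S-curve this way lets a manager pick a budget at, say, the 70% confidence level instead of budgeting to a single point estimate.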
The document discusses a project management approach to source evaluation boards (SEBs) being implemented at NASA's Johnson Space Center. It aims to align SEB processes with project management principles by treating each SEB like a project, focusing on requirements, scheduling, teamwork, and control. Feedback from industry and assessments identified issues like unclear processes and schedules. The new approach establishes common vocabulary, templates, and training to bring more consistency to SEBs handled as projects.
This document discusses gathering the voice of the customer (VOC) in process improvement projects. It defines VOC as the expression of customer needs and desires. There are four key steps to gathering VOC: 1) identify all customers, 2) determine customer requirements, 3) validate requirements, and 4) prioritize requirements. VOC is important because customers define what quality means for a process. Both VOC and voice of the business inputs are important to understand in process improvement.
The document discusses NASA's software engineering processes and requirements. It provides an overview of 12 key software engineering processes, including requirements management, planning and monitoring, measurement and analysis, software assurance, verification, configuration management, product integration, and their benefits. It also indicates which roles are typically involved with each process.
This document provides an overview of multi-generation project planning (MGPP). MGPPs allow organizations to plan related improvement projects over multiple generations or releases. They help manage scope, capture additional ideas, identify replication opportunities, and communicate how individual projects fit into the overall strategy. The benefits, elements, and an example of an MGPP to reduce Army medical mobilization lead times are described.
An agency-wide team studied alternative designs for the CEV avionics configuration to identify reliability, mass drivers, and the effect on vehicle mass. The team used an iterative risk-driven design approach starting with the simplest possible design and building up fault tolerance based on risk assessments. Safety and reliability analyses informed design trades to improve failure tolerance. The goal was to first make the design work, then make it safe by adding diverse backup systems, make it reliable by adding more redundancy, and ensure it was affordable. This approach provided rationale for design decisions and optimized the configuration based on risk within power and mass constraints.
Overview of CMMI and Software Process Improvement (Nelson Piedra)
This document summarizes a presentation on software process improvement using CMMI and the IDEAL model. It discusses the key aspects of CMMI including maturity levels and process areas. It also outlines considerations for transitioning to CMMI level 2, including changes required from managers and practitioners. Finally, it shares experiences from initiating corporate-wide process improvements using the IDEAL framework.
This document discusses application lifecycle management using Microsoft tools and processes. It covers planning and tracking projects, modeling applications, developing collaboratively, automating builds, and managing the application lifecycle from design through deployment. Resources for branching strategies, build customization, and more are also referenced.
1. The document discusses integration and testing, including software quality assurance, integration approaches, and types of testing.
2. It provides an overview of roles in quality assurance and when quality assurance activities occur in the software development lifecycle.
3. Integration can be done using top-down or bottom-up approaches, progressively aggregating functionality while testing occurs in parallel with development.
Similar to CMMI High Maturity Best Practices HMBP 2010: Demystifying High Maturity Implementation Using Statistical Tools & Techniques by Sreenivasa M. Gangadhara,Ajay Simha and Archana V. Kumar
This document discusses application lifecycle management using Microsoft tools and processes. It covers planning and tracking projects, modeling applications, developing collaboratively, automating builds, and managing the application lifecycle from design through deployment. Resources for branching strategies, build customization, and more are also referenced.
1. The document discusses integration and testing, including software quality assurance, integration approaches, and types of testing.
2. It provides an overview of roles in quality assurance and when quality assurance activities occur in the software development lifecycle.
3. Integration can be done using top-down or bottom-up approaches, progressively aggregating functionality while testing occurs in parallel with development.
This document provides an agenda and overview for a webinar on quality coding features in Visual Studio 2012. The webinar will cover new tools for unit testing, code reviews, code analysis, and code clones. It will also review features for quality in requirements, development, and testing such as storyboarding, test environments, and exploratory testing. Attendees are encouraged to join the free webinar to learn about and see demonstrations of these Visual Studio 2012 features for improving code quality.
Releasing fast code - The DevOps approachMichael Kopp
Agile makes you Develop faster, DevOps also makes you Deploy faster but how do you make your Application faster?
Many currently used Performance Management practices don’t work anymore as they are too time consuming. It takes a new approach to track performance in Continuous Integration, get more value out of Load Testing and leverage production data for performance optimization.
We will show you real world examples on how the new DevOps approach can work.
Quality Coding: What’s New with Visual Studio 2012Imaginet
This document provides an agenda for a webinar on quality coding features in Visual Studio 2012. The webinar will review new unit testing, code review, code analysis, and code clone detection tools. It will also cover quality improvements for requirements, manual testing, exploratory testing, and automated testing. Attendees will see demonstrations of features like the unit test runner, code reviews, and exploratory testing in Microsoft Test Manager.
Quality Coding: What's New with Visual Studio 2012Imaginet
The newest release of Visual Studio 2012 is rich with new tools that enhance standard developer activities. In this session, we’ll review and demonstrate some of these new features, such as Unit Testing, Code Reviews, Code Clones, and other developer tools. Come join us for this free Webinar!
The document discusses software testing practices in agile development. It covers the technical and organizational challenges of testing in an agile environment where requirements are changing frequently. It emphasizes the need to test early and often through automation, and describes strategies like test-driven development and maintaining different levels of testing at the iteration and release levels to effectively test in short iterations with changing requirements.
This slide deck Introduces Chef and its role in DevOps. The agenda of the deck is as follows:
- A Review of DevOps
- IBM's Continuous Delivery solution
- Introduction to Chef
- Chef and Continuous Delivery
Read more on DevOps: http://sdarchitect.wordpress.com/understanding-devops/
The document discusses software development processes and methodologies. It provides definitions of key concepts like software process and project management methodology. It then summarizes various software development models and processes like the Rational Unified Process, spiral development, incremental development, and the unified software development process. The unified process classifies iterations into inception, elaboration, construction and transition iterations. It also discusses the six models or views used in the unified process - use case model, analysis model, design model, implementation model, test model and deployment model.
The document describes Unosquare's delivery centers located across the United States and Mexico, which provide services such as software development, QA testing, and project management using agile methodologies and tools. It highlights benefits like lower costs, ease of collaboration due to proximity, and cultural similarities that make working with the Mexico delivery center attractive. Sample metrics are also provided showing the company's testing capabilities.
Software can impact many aspects of society and is found almost everywhere. Common problems in software development include projects not fulfilling customer needs, being difficult to extend and improve, lacking documentation, and having poor quality. Software engineering aims to produce software on time, reliably, and completely by applying a systematic and disciplined approach.
If you had an opportunity to build an application from the ground up, with testability a key design goal, what would you do?
In this presentation, we will look at just such a situation - a major, two year rewrite of a suite of core business systems. We will discuss how a system looks when testability is as important as functionality - and what it looks like when quality concerns are part of the initial design. We will look at the role of test automation and manual test in a modern project, and look at the tools and processes. The session will conclude with a demo of the latest visual test automation tool from MIT and a Q&A.
The document provides an introduction and overview of software testing concepts. It discusses software testing methodology, techniques and processes like the software development life cycle (SDLC), waterfall model, V-model and agile model. It also covers different testing types like unit testing, integration testing, system testing and acceptance testing. Key aspects covered include verification vs validation, test planning, defect management, and the software testing life cycle.
Evidence-based software process recovery uses data from software repositories to understand the actual development process used by a team. This allows comparison of the proposed process with the recovered process. Topic modeling of commits can identify developer topics like reliability, maintainability, and portability over time. Release patterns showing activity in source code, tests, builds and documentation near releases can also be recovered. Process recovery provides an objective view of the actual development process.
The document presents a tool called PSP PAIR that automatically analyzes performance data from the Personal Software Process (PSP) to identify problems and recommend improvements. PSP generates large amounts of data but analyzing it manually is time-consuming. PSP PAIR addresses this by developing a performance model and using it to analyze time estimation accuracy and other metrics from PSP data. It identifies potential problems and suggests actions like stabilizing productivity. An evaluation found PSP PAIR could help engineers using PSP by speeding up analysis and proposing targeted improvements. Future work includes validating the model with more data and expanding PSP PAIR to support the Team Software Process.
The document discusses software quality testing services provided by Independent Testing Service including software testing, localization and maintenance support. It outlines their technical expertise in areas like programming languages, databases, web servers and testing tools. The document also provides examples of their software testing process and a case study of projects they have worked on.
The document discusses model based design for embedded control systems. It introduces model based design, explaining that models represent the system, control, environment and stimuli. It discusses why model based design is used, including that it allows for cheaper, faster development with higher reliability. A case study is presented on using model based design for an excavator system, with models created at various levels of abstraction from continuous time physical models to discrete event software models. The document concludes by demonstrating the models in a co-simulation environment.
Return on Investment for a Design for Reliability ProgramAccendo Reliability
Last year we presented a paper on Design for Reliability (DFR), reviewing the benefits of a good DFR program and some of its essential building blocks, along with pointing out some erroneous practices that people are using today.
We discussed a good DFR Program having the following attributes:
1. Setting Goals at the beginning of the program and then developing a plan to meet the goals.
2. Having the reliability goals being driven by the design team with the reliability team acting as mentors.
3. Providing metrics so that you have checkpoints on where you are against your goals.
4. Writing a Reliability Plan (not only a test plan) to drive your program.
A Good DFR Program must choose the best tools from each area of the product life cycle
• Identify
• Design
• Analyze
• Verify
• Validate
• Monitor and Control
The DFR Program must then integrate the tools together effectively.
Since then, we have developed a method to calculate the Return on Investment (ROI) from a Design for Reliability (DFR) program, also known as the DFR ROI, which we discuss in this paper.
There are a number of factors involved in calculating the ROI for your DFR program, including:
1) Improved Warranty Rate (derived from your Reliability Maturity Level)
2) Current Warranty Rate
3) Cost per Repair
4) Cost of New Reliability Program
5) Savings from Losing a Customer
6) Volume
In this paper, we will show you how to calculate each of these to derive your DFR ROI.
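The paper's own formulas are not reproduced in this summary; as a loose, hypothetical sketch of how factors like these might combine into an ROI figure (the function name and all numbers are invented):

```python
# Hypothetical illustration of combining DFR ROI factors; not the paper's actual formula.
def dfr_roi(current_warranty_rate, improved_warranty_rate, cost_per_repair,
            volume, program_cost, lost_customer_savings=0.0):
    """ROI = net savings from the reliability program divided by its cost."""
    repairs_avoided = (current_warranty_rate - improved_warranty_rate) * volume
    savings = repairs_avoided * cost_per_repair + lost_customer_savings
    return (savings - program_cost) / program_cost

# Example: warranty rate drops from 4% to 2% on 100k units,
# $150 per repair, against a $200k reliability program.
print(f"ROI: {dfr_roi(0.04, 0.02, 150.0, 100_000, 200_000.0):.2f}")
```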
Ravit Danino HP - Roles and Collaboration in AgileAgileSparks
Roles and collaboration have changed in Agile. Entire teams now work together throughout a sprint rather than having separate roles confined to specific phases. The whole team, including developers, business analysts, testers, and documentation specialists, collaborates continuously. They plan iterations together, provide feedback to each other, and ensure code meets quality standards through code reviews and end-to-end testing. With Agile, customers also become key enablers by providing early feedback to help shape requirements and the product.
The document discusses the People Capability Maturity Model (PCMM), which is a framework for improving an organization's human resource practices. It describes PCMM as a conceptual model developed by the Software Engineering Institute to help organizations continuously improve how they attract, develop, motivate and retain employees. The document outlines the five levels of PCMM and lists some example process areas and benefits of adopting PCMM, such as improving ability to attract and retain talent and enhancing business performance. It also provides some case studies reporting positive results from companies that implemented PCMM.
This document provides an overview of the Capability Maturity Model Integration (CMMI) Version 1.2. It discusses the history and development of CMM models, the reasons for integrating them into CMMI, and the key components and concepts of CMMI including constellations, maturity levels, process areas, and continuous improvement. CMMI Version 1.2 focuses on systems engineering and software engineering and covers five levels of process maturity from initial/ad hoc processes to optimized, continuously improving processes.
This document provides an overview of the Capability Maturity Model Integration (CMMI) version 1.2. It discusses the history and development of CMM models. CMMI version 1.2 integrates different CMM models and focuses on systems engineering and software engineering. It describes the staged maturity levels from initial to optimizing, and the key process areas addressed at each level. Finally, it notes that organizations should not skip maturity levels, as each level provides foundations for continuous process improvement.
Software engineering is the application of engineering principles to software development. It includes systematic processes for developing, operating, and maintaining software. The document discusses the definition of software engineering, why it is important given historical issues with software projects, the software development life cycle including requirements, design, coding, testing, and maintenance phases, and core roles in software engineering projects.
The document provides an overview of ITIL (Information Technology Infrastructure Library). ITIL is a framework for IT service management that organizations implement to improve efficiency, reduce costs, and enhance customer satisfaction. The summary highlights key benefits of ITIL including increased productivity, reduced resolution times, improved quality of service, optimized agreements, and cost savings realized by companies that have adopted ITIL best practices. It also outlines the five core ITIL books that cover strategy, design, transition, operations, and continual improvement of IT services.
ITIL benefits include increased efficiency, customer satisfaction, agility, cost reduction, compliance, and realized savings. Companies implementing ITIL processes see benefits like increased productivity, reduced downtime, optimized service levels, and demonstrated IT value. Case studies found significant cost savings, such as Shell Oil saving $5M on software upgrades and Nationwide Insurance reducing outages 40%.
CMMI High Maturity Best Practices HMBP 2010: CMMI® FOR SERVICES: INSIGHTS AND...QAI
CMMI® FOR SERVICES: INSIGHTS AND BEYOND
-Rajesh Naik
QAI.
presented at the 1st International Colloquium on CMMI High Maturity Best Practices 2010 held on May 21, 2010, organized by QAI
CMMI High Maturity Best Practices HMBP 2010: Process Performance Models:Not N...QAI
Process Performance Models: Not Necessarily Complex - Himanshu Pandey and Nishu Lohia (Aricent Technologies), presented at
1st International Colloquium on CMMI High Maturity Best Practices held on May 21, 2010, organized by QAI
DevOps and Testing slides at DASA ConnectKari Kakkonen
Rik Marselis's and my slides from the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We also held a lovely workshop with the participants, exploring different ways to think about quality and testing in different parts of the DevOps infinity loop.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex ProofsAlex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
CMMI High Maturity Best Practices HMBP 2010: Demystifying High Maturity Implementation Using Statistical Tools & Techniques by Sreenivasa M. Gangadhara,Ajay Simha and Archana V. Kumar
1. Demystifying High Maturity Implementation Using Statistical Tools & Techniques
-Sreenivasa M. Gangadhara
Ajay Simha
Archana V. Kumar
(Honeywell Technology Solutions Lab)
1 File Number
3. Introduction
• Interpretation and implementation of High Maturity practices in projects is a challenge
• This paper attempts to "demystify" High Maturity implementation using simple statistical and simulation tools and techniques
• The analytical approach presented in this paper is one of the many best practices used in the organization
• A project's specific dynamics need to be factored in when these techniques are applied to projects
4. Key Takeaways…
At the end of this presentation, we will see one of the ways of…
• Assessing the project's confidence in meeting its multiple goals
• Identifying the critical sub-process with quantitative justification
• Setting a quantitative project improvement goal
• Defining a sub-process-level model and arriving at critical and controllable factors
• Arriving at a "Probabilistic" model from a "Deterministic" model
• Doing "what-if" analysis for a proposed process improvement
• Demonstrating whether the proposed solution will meet the project's objective (end process result) before deploying the solution
• Demonstrating the usage of models at different stages of the project lifecycle
• Demonstrating that the improved process is statistically significant
5. Multi Goal Simulation Model
(Getting the confidence at the beginning of the project)
6. Problem Statement
• We have a new product release, in a similar product line
• Estimated Size of project is 195 Requirements
• Estimated Effort of project is 140 Person Months
• Goal is to complete the project
- Within 5% effort variance even in the worst scenario
- With a Quality goal of NOT more than 0.1 defects / requirement after release
What is the confidence that the team has in meeting this project goal?
7. Prediction Model
Note: Model is designed by using Crystal Ball Simulation Tool
Input factor distributions are derived from the performance baseline
8. Certainty Levels
Prediction:
• 94.45% certain project will complete in 140 person months
• 98.71% certain project will complete with 5% more effort
• 82.83% certain project will complete with 5% less effort
• Project can deliver the product with a Quality Goal of 0.1 Defects / Req with a certainty of 78.51%
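Crystal Ball is a commercial simulation tool; certainty levels like those above can be reproduced in spirit with a plain Monte Carlo sketch. All distribution parameters below are hypothetical stand-ins for the organization's performance baseline, so the resulting percentages will not match the slide:

```python
import random

random.seed(42)

N_REQ = 195          # estimated size from the problem statement
EFFORT_GOAL = 140.0  # person months, from the problem statement
TRIALS = 50_000

# Hypothetical per-requirement effort distributions (person months per req),
# standing in for the performance-baseline measures.
phases = {
    "dev":    (0.40, 0.06),
    "review": (0.10, 0.02),
    "test":   (0.18, 0.04),
}

def simulate_total_effort():
    """One trial: sample an effort/req rate per phase and scale by project size."""
    return N_REQ * sum(random.gauss(mu, sigma) for mu, sigma in phases.values())

samples = [simulate_total_effort() for _ in range(TRIALS)]

def certainty(threshold):
    """Fraction of trials meeting the goal, i.e. the 'certainty' level."""
    return sum(s <= threshold for s in samples) / len(samples)

print(f"certainty of finishing within {EFFORT_GOAL:.0f} PM: {certainty(EFFORT_GOAL):.2%}")
print(f"certainty within +5% effort: {certainty(EFFORT_GOAL * 1.05):.2%}")
```

The same sampled distribution also yields the quality-goal certainty if a defect component is added to each trial.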
9. Model Representation
[Model diagram: two parallel components tracked per lifecycle phase.
Effort Component: Req Analysis & Dev + Req Review + Req Rework + Design + Design Review + Design Rework + …
Defect Component: each phase's Defect Injection Rate feeds its review step, whose Defect Removal Efficiency (DRE) gives the Defect Detection Rate; detected defects incur a Defect Fix Rate, and undetected defects carry forward as the Defect Leakage Rate into the next phase.]
Input Assumptions
Historical Performance Baseline Measures:
• Effort / Req for each of the Development, Review, Test execution phases
Calculations
• Defect Injection Rate for each of development phases
• Defect Removal Efficiency (DRE) Rate for each of Review & Test phases
Detected Detected
• Defect Fix Rate of defects for each of the phases DRE = -------------------- = -----------------------
Total Present (Injected + Leaked)
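The DRE definition above translates directly into code; a minimal sketch, with illustrative counts that are not from the paper:

```python
def dre(detected, injected, leaked_in):
    """Defect Removal Efficiency: detected / total present in the phase,
    where total present = defects injected in the phase + defects leaked
    into it from earlier phases."""
    total_present = injected + leaked_in
    return detected / total_present

# Illustrative numbers only: 50 defects injected during requirements,
# 10 leaked in from upstream, 18 caught by the requirement review
efficiency = dre(detected=18, injected=50, leaked_in=10)
print(f"Req review DRE = {efficiency:.2%}")  # 18 / 60 = 30.00%
```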
10. Control Factors…
• Control Injection Rate (Reduce Injection Rate)
- Adopt the best development process from the existing process composition that takes less effort and injects fewer defects
• Control Detection Rate (Increase Detection Rate)
- Adopt the best review process from the existing process composition that takes less effort and uncovers more defects
Next step is to find the control factors at the sub-process level
16. Investigating Defect Removal Activities
Control Chart: Defect Detection Density
[I Chart of Defect Detection Density by phase (Req, Design, Code, DIT, SIT, Post Release): centre line X̄ = 0.064, UCL = 0.177, LCL = -0.049; out-of-control points flagged in the SIT region]
Is SIT a Critical Sub-Process…!!!???
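The control limits on an individuals (I) chart like the one above are conventionally derived from the average moving range (centre ± 2.66 × MR̄); a minimal sketch, with hypothetical observations:

```python
import numpy as np

def i_chart_limits(x):
    """Individuals (I) chart limits from the average moving range:
    centre = mean(x), limits = centre +/- 2.66 * mean(|x[i] - x[i-1]|)."""
    x = np.asarray(x, dtype=float)
    mr_bar = np.abs(np.diff(x)).mean()
    centre = x.mean()
    return centre - 2.66 * mr_bar, centre, centre + 2.66 * mr_bar

# Hypothetical defect-detection-density observations (not the paper's data)
dd = [0.05, 0.07, 0.04, 0.09, 0.06, 0.08, 0.05, 0.07]
lcl, centre, ucl = i_chart_limits(dd)
out_of_control = [v for v in dd if not lcl <= v <= ucl]
print(f"LCL={lcl:.3f}  centre={centre:.3f}  UCL={ucl:.3f}")
print("Out-of-control points:", out_of_control)
```

Points falling outside the limits, like the flagged SIT observations above, signal a sub-process that is not under statistical control.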
17. Investigating Defect Removal Activities
Trend Chart: Defect Detection Density
[Trend chart of Defect Detection Density across phases: Req DD, Design DD, Code DD, DIT DD, SIT DD, Post Release DD; y-axis 0.000–3.000]
18. Investigating Defect Removal Activities
Trend Chart: Defect Detection Density
[Trend chart of Defect Detection Density across phases (Req, Design, Code, DIT, SIT, Post Release) showing Min, Max and Mean values]
22. Investigating Defect Removal Activities
Trend Chart: Defect Detection Density
[Trend chart of Defect Detection Density across phases (Req, Design, Code, DIT, SIT, Post Release) showing Min, Max and Mean values]
23. Comparing Detection with Injection
Trend Chart: Comparing Defect Density of Detection with Injection
[Trend chart comparing the Min/Max/Mean Defect Density of Detection with that of Injection across phases (Req, Design, Code, DIT, SIT, Post Release); the gap between injection and detection is marked as the Improvement Opportunity]
24. Sub-Process Identification
Comparing Detection with Injection Defect Density:

                            Requirement Phase   Design Phase   Coding Phase
Defect Detection   Min           0.154             0.053          0.783
Density            Max           1.250             0.667          1.154
                   Mean          0.559             0.280          0.925
Defect Injection   Min           1.111             0.577          1.000
Density            Max           2.833             1.464          1.923
                   Mean          1.806             0.966          1.533
Mean Difference                  1.247             0.686          0.608

The Requirement phase "Mean" difference between injection and detection defect density is relatively larger than that of the other phases, so the Requirement phase needs attention.
The Requirement Phase is the Critical Sub-Process.
25. Sub-Process Identification
Statistical Justification: Test of Hypothesis
H0: μ1 = μ2    H1: μ1 ≠ μ2
If P ≤ 0.05, reject H0; if P > 0.05, fail to reject H0.

Variance in DD between Injection and Detection, per module:

Module / Feature               Req     Design   Code
Exception Service              1.591   0.955    0.318
External Interface             1.619   0.905    0.429
DL Scheduler                   1.679   1.071    0.464
Alert registry module          1.292   0.708    0.667
Rendering                      0.947   0.737    0.474
GGF                            1.586   0.690    0.586
Launchpad                      1.059   0.647    0.471
CCD                            0.778   0.630    0.667
Semaphore Service              1.609   0.826    0.652
FSS                            1.211   0.947    0.684
File System Service            1.462   0.769    0.769
ECLF                           1.733   0.467    0.467
Socket Library                 1.792   0.625    0.958
Installation                   1.500   0.833    0.778
GPC                            1.238   0.714    0.333
MTL                            1.250   0.375    1.000
Alert response module          1.250   0.625    0.938
Notification Service           1.136   1.045    0.864
Blackberry Thick Client        1.308   0.462    0.769
Power Backup service           0.931   0.793    0.828
Share Point Client             1.583   0.583    0.167
Process Service                1.087   0.304    0.652
Platform Resource Service      1.077   0.385    0.385
Power on/off                   0.950   0.500    0.900
Thread Service                 0.889   0.556    0.370
License Management             0.815   0.593    0.667
Periodic IPC Service           0.800   0.867    0.800
PDD                            1.308   0.769    0.154
CALF                           0.913   0.696    0.609
Alert System                   1.038   0.500    0.423

Req phase DD is different from Design & Code.
Statistically proven that the Req phase needs attention…!!!
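The deck does not state which test was run; one reasonable choice is a paired t-test across modules on the Req vs Design columns of the table above:

```python
from scipy.stats import ttest_rel

# Injection-to-detection DD values per module, from the table above
req = [1.591, 1.619, 1.679, 1.292, 0.947, 1.586, 1.059, 0.778, 1.609, 1.211,
       1.462, 1.733, 1.792, 1.500, 1.238, 1.250, 1.250, 1.136, 1.308, 0.931,
       1.583, 1.087, 1.077, 0.950, 0.889, 0.815, 0.800, 1.308, 0.913, 1.038]
design = [0.955, 0.905, 1.071, 0.708, 0.737, 0.690, 0.647, 0.630, 0.826, 0.947,
          0.769, 0.467, 0.625, 0.833, 0.714, 0.375, 0.625, 1.045, 0.462, 0.793,
          0.583, 0.304, 0.385, 0.500, 0.556, 0.593, 0.867, 0.769, 0.696, 0.500]

# Paired test: each module contributes one (Req, Design) observation pair
t_stat, p_value = ttest_rel(req, design)
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")
```

Here p is far below 0.05, so H0 is rejected: the Req-phase values differ significantly from the Design-phase values, consistent with the slide's conclusion.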
27. Improvement Alternatives
1. By reducing the Defect Injection Rate by strengthening the
development process
2. By increasing the Defect Detection Rate by strengthening the
defect removal process
The second alternative is considered for this discussion
28. Req Defect Density Mean Shift
[Histograms with normal fits of Req Detection DD (Mean 0.5586, StDev 0.2732, N = 30) and Req Injection DD (Mean 1.806, StDev 0.4581, N = 30)]
Req Defect Detection mean needs a shift from 0.5586 to 1.806
29. Project Goal
Assume project sets a goal of 40% improvement in
Requirement Defect Detection Density mean
[Histograms with normal fits of Req Detection DD (Mean 0.5586, StDev 0.2732, N = 30), Req Injection DD (Mean 1.806, StDev 0.4581, N = 30) and 40% Improved Detection Req DD (Mean 0.7820, StDev 0.3825, N = 30)]
Note: The project team has to document the rationale for selecting 40% improvement
A 40% improvement is a mean shift from 0.56 to 0.78 Defects / Req
31. Sub-Process Analysis
SW Development Process: Requirement Phase → Design Phase → Code Phase, each with Develop, Review and Rework steps, feeding the next process steps.
Requirement Phase Elaboration: Req Planning → Req Capture → Req Analyze → Document → Review → Rework → Baseline, supported by Change Management (Planning, Development, Review and Change Management processes).

Probable Process, Product & People Attributes:
x1 - Author's Domain Expertise
x2 - Req Complexity
x3 - Development Effort / Req
x4 - Risk of Completeness of Req
x5 - Risk of Ambiguity of Req
x6 - Risk of Non-Testable Req
x7 - Risk of Late Arrival of Req
x8 - Reviewer's Domain Expertise
x9 - Review Effort / Req
x10 - Req Volatility

Which are the Critical Sub-Process Parameters?
Consider factors related to Process, Product & People
32. Sub-Process Analysis
SW Development Process: Requirement Phase → Design Phase → Code Phase, each with Develop, Review and Rework steps, feeding the next process steps.
Sub-Process Identification: Req Planning → Req Capture → Req Analyze → Document → Review → Rework → Baseline, supported by Change Management.

Available Process, Product & People Attributes: x1 Author's Domain Expertise, x2 Req Complexity, x3 Development Effort / Req, x4 Risk of Completeness of Req, x5 Risk of Ambiguity of Req, x6 Risk of Non-Testable Req, x7 Risk of Late Arrival of Req, x8 Reviewer's Domain Expertise, x9 Review Effort / Req, x10 Req Volatility

Sub-Process Output Measure:
Y1 = f (x1, x3, x8, x9, x10)
Req Defect Density = f (Author's Domain Expertise, Dev Effort / Req, Reviewer's Domain Expertise, Review Effort / Req, Req Volatility)
33. Metrics Definition of selected input factors
x     Parameter Name                Metrics Type  Data Type    Unit        Definition / Guidelines
x1    Author's Domain Expertise     Objective     Continuous   Years       Years of experience of the author in the same or a similar domain
x3    Development Effort / Req      Objective     Continuous   Hrs / Req   Time spent by the author on developing the requirements of the feature or module
x8    Reviewer's Domain Expertise   Objective     Continuous   Years       Average years of experience of the reviewers in the same or a similar domain
x9    Review Effort / Req           Objective     Continuous   Hrs / Req   Time spent by the entire team in reviewing the requirement document
x10   Req Volatility                Objective     Continuous   Ratio       (# of Req [# of times] changed) / (Total # of Req in the feature or module)
35. Sub-Process Analysis
Output Measure – Req Defect Density (Y1)
[I Chart of Req DD over 30 observations: centre line X̄ = 0.559, UCL = 1.235, LCL = -0.118; one out-of-control point flagged]
36. Sub-Process Analysis
Output Measure (Y1) Comparison with Input Measures (x’s)
Effect is seen in Output measure, for change in Input measures
37. Sub-Process Analysis
Analyze the Correlation
[Scatterplots of Req DD vs Author's Domain Experience, Dev Effort / Req, Reviewer's Domain Experience, Review Effort / Req and Req Volatility]
Inference:
• Reviewer's Domain Experience, Review Effort / Req and Req Volatility have a positive correlation
• Dev Effort / Req has a negative correlation
• Author's Domain Experience has no correlation
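The correlation screening behind such scatterplots can be sketched with a Pearson coefficient; the observations below are hypothetical, for illustration only:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical module-level observations (not the paper's data)
review_effort = np.array([0.5, 0.7, 0.9, 1.0, 1.2, 1.4, 1.5])   # Hrs / Req
req_dd        = np.array([0.35, 0.48, 0.55, 0.62, 0.78, 0.85, 0.95])

# Pearson correlation between an input factor (x) and the output (Y1)
r, p = pearsonr(review_effort, req_dd)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")
```

A strongly positive r, as here, matches the inference that more review effort per requirement goes with higher detected defect density.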
38. Model Building
Regression Analysis
Thumb rules: P ≤ 0.05 and R-Sq (adj) > 70%
39. Model Building
Regression Analysis – Reduced Model
Note:
Though Dev Effort / Req & Req Volatility are not statistically significant, they are retained in the reduced model
Req Defect Density = 0.153 - 0.0618 Dev Effort / Req + 0.0608 Reviewers Domain
Experience + 0.48 Review Effort / Req + 0.23 Req Volatility
40. Statistical V/s Practical
Project objective is to “Uncover” more defects in the Requirement phase
Req Defect Density = 0.153
- 0.0618 Dev Effort / Req
+ 0.0608 Reviewers Domain Experience
+ 0.48 Review Effort / Req
+ 0.23 Req Volatility
To have a higher defect detection density in the Requirement phase, the Dev Effort / Req should be low, the Reviewers Domain Experience should be high, the Review Effort should be high, and the Req Volatility should be high (either a few or all of these).
It practically does not make sense that, to raise Req DD, the Req Volatility should be high or less time should be spent on development activities. If we did so, we would intentionally be introducing more defects rather than taking proactive / systemic measures to uncover more defects in the Req phase.
Reviewers Domain Experience and Review Effort / Req are the factors that can help uncover more defects.
It means that, though "Dev Effort / Req, Reviewers Domain Experience, Review Effort / Req & Req Volatility" are Critical Parameters, only "Reviewers Domain Experience and Review Effort / Req" are Control Parameters.
41. How to use the model…?
At the beginning of the project:
Use the planned or anticipated values of the x's to predict the defect density; if the predicted defect density is not within the acceptable range, take appropriate action by changing the values of the control factors.
During execution of the project:
Use the actual values of the x's to predict the defect density, and validate the model against the actual defect density values.
Calibrate the model with the new data set and enhance the model.
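The usage above can be sketched by wrapping the reduced regression equation from slide 39 as a predictor; the planned input values below are hypothetical:

```python
def predict_req_dd(dev_effort_per_req, reviewers_domain_exp,
                   review_effort_per_req, req_volatility):
    """Reduced regression model from the paper (slide 39)."""
    return (0.153
            - 0.0618 * dev_effort_per_req
            + 0.0608 * reviewers_domain_exp
            + 0.48 * review_effort_per_req
            + 0.23 * req_volatility)

# Hypothetical planned values of the x's for one component
planned = predict_req_dd(dev_effort_per_req=1.2,
                         reviewers_domain_exp=1.7,
                         review_effort_per_req=0.7,
                         req_volatility=0.2)
print(f"Predicted Req DD = {planned:.3f} defects / req")
```

At project start the planned x's feed this function; during execution the actual x's replace them, and the predictions are compared with the actual defect density to validate and recalibrate the model.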
42. Probabilistic Model from Deterministic Model
(Study the process behavior by knowing the input distribution)
(“What-If” Analysis)
43. Probabilistic Model by Simulation
Use the Crystal Ball tool to arrive at the simulation model.
Define the simulation model in Crystal Ball for the "regression equation" by fitting distributions to the input parameters and defining the forecast for the predictor.
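A minimal numpy stand-in for this Crystal Ball step: sample the inputs from fitted distributions and forecast the predictor. The review-effort distribution comes from the current baseline on slide 49 and the reviewer-experience mean from slide 46; the remaining distributions and the experience standard deviation are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 50_000  # simulation trials

# Input distributions for the regression equation's x's
dev_effort = rng.normal(1.2, 0.3, N)     # assumed, for illustration
rev_exp    = rng.normal(1.72, 0.5, N)    # mean from slide 46; std assumed
rev_effort = rng.normal(0.697, 0.359, N) # current baseline, slide 49
volatility = rng.normal(0.2, 0.05, N)    # assumed, for illustration

# Forecast: push every sampled input vector through the regression equation
req_dd = (0.153 - 0.0618 * dev_effort + 0.0608 * rev_exp
          + 0.48 * rev_effort + 0.23 * volatility)

print(f"Forecast Req DD: mean={req_dd.mean():.3f}, std={req_dd.std():.3f}")
print(f"P(Req DD >= 0.78 target) = {(req_dd >= 0.78).mean():.2%}")
```

The forecast distribution, rather than a single point estimate, is what turns the deterministic regression equation into a probabilistic model.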
45. Process Improvement Steps
1. Do Root Cause Analysis (RCA) and identify the causes for defect leakage in Req
phase
2. Prioritize the causes (using Pareto)
3. Identify improvement alternatives in Req phase
4. Study the process behavior by simulating the process for the proposed
improvements (What-If analysis)
5. Study the process improvement having an impact on process output measure
(Goal)
6. Pilot the process in a few projects
7. Analyze results
8. Institutionalize and deploy the process improvement in other projects
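Step 2 above (Pareto prioritization of the RCA causes) can be sketched as follows; the cause names and counts are hypothetical:

```python
# Hypothetical RCA causes for defect leakage in the Req phase, with counts
causes = {"Ambiguous requirement": 34, "Missed stakeholder input": 21,
          "Incomplete use case": 13, "Late requirement change": 8,
          "Template not followed": 4}

total = sum(causes.values())
cumulative = 0
vital_few = []
# Walk causes in descending order until the cumulative share reaches 80%
for cause, count in sorted(causes.items(), key=lambda kv: -kv[1]):
    cumulative += count
    vital_few.append(cause)
    if cumulative / total >= 0.8:   # classic Pareto cut-off
        break

print("Vital few causes:", vital_few)
```

The "vital few" causes that cover roughly 80% of the leakage are the ones worth attacking with improvement alternatives in step 3.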
46. “What If” Analysis…???!!!
Assume the new proposed process improvement suggests a balanced composition of reviewers with experienced people (minimum of 1.5 years and average of 2.4 years, compared to the earlier minimum of 0.5 years and average of 1.72 years), and an improvement in the review process that results in additional review effort with a mean of 10 hrs and a standard deviation of 1.5 per inspection. The new input parameter distributions then look like:
[Distribution overlays, Old vs New: Reviewers Domain Experience and Review Effort / Req]
47. “What If” Analysis…???!!!
Does the new proposed process meet the project objective of 40% improvement in Requirement Defect Detection Density mean?
[Forecast charts: Old vs New Req DD distributions]
Req DD of the old process = 0.556
Req DD of the new proposed process = 0.847
% improvement over the earlier process = (0.847 – 0.556) / 0.556 = 52.34%
The "New" proposed process will improve the Req DD mean by 52.34%
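The what-if comparison can be sketched by running the regression-based simulation under the old and new input distributions (review-effort parameters from slide 49, reviewer-experience means from slide 46; the other distributions and the experience standard deviation are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50_000

def simulate_mean_req_dd(rev_exp_mean, rev_effort_mean, rev_effort_std):
    """Mean forecast Req DD from the slide-39 regression equation."""
    # Non-control factors held at fixed hypothetical distributions
    dev_effort = rng.normal(1.2, 0.3, N)
    volatility = rng.normal(0.2, 0.05, N)
    rev_exp    = rng.normal(rev_exp_mean, 0.5, N)   # std assumed
    rev_effort = rng.normal(rev_effort_mean, rev_effort_std, N)
    return (0.153 - 0.0618 * dev_effort + 0.0608 * rev_exp
            + 0.48 * rev_effort + 0.23 * volatility).mean()

old = simulate_mean_req_dd(1.72, rev_effort_mean=0.697, rev_effort_std=0.359)
new = simulate_mean_req_dd(2.40, rev_effort_mean=1.210, rev_effort_std=0.464)
print(f"Old mean Req DD = {old:.3f}, New = {new:.3f}, "
      f"improvement = {(new - old) / old:.1%}")
```

Under these assumptions the simulated improvement lands near 50%, comfortably above the 40% objective, which is the shape of the deck's 52.34% conclusion.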
48. Probable Improvements in End Result
(Probable change in Post Release Defects and Effort Estimation)
49. What is possible changes in “End Measures”?
                                   Req Review Process    Req Defect Removal Efficiency (DRE)
                                   Mean      Std Dev     Mean      Std Dev
Current Performance Measure        0.697     0.359       0.302     0.099
New Proposed Performance Measure   1.210     0.464       0.474     0.108

Note: Change the input distributions for Req Review Effort / Req & Req phase DRE
50. Possible changes in “Effort”
Current Process New Proposed Process
51. Possible changes in “Quality”
Current Process New Proposed Process
Observation:
• Though there is an increase in Req review effort, there is NOT much change in Total Effort, because it is compensated by the reduction in effort to fix defects in later phases
• However, there is an improvement in the post-release defect leakage measure
• The certainty of meeting the quality goal of 0.1 defects / Req has increased from 78.5% to 83.0%
The "New" proposed process can be piloted
52. Pilot Improvements in new Project
(Validating the predicted improvements)
53. At the beginning of Project
Predict Req Detection DD from Planned or anticipated values of x’s
Regression Equation:
Req Defect Density = 0.153 - 0.0618 Dev Effort / Req + 0.0608 Reviewers Domain
Experience + 0.48 Review Effort / Req + 0.23 Req Volatility
[Bar chart: Predicted Req DD from planned x's for components 1–11, values 1.22, 1.10, 1.01, 1.00, 0.98, 0.98, 0.95, 0.90, 0.83, 0.70, 0.68 defects / req]
54. During the Execution of Project
Monitor & Control the Input Parameters & Monitor Output Predictor
Output Measure (Y1) and Input Measures (x's):
[I Chart of Predicted Req DD from actual x's over 11 observations: centre line X̄ = 0.931, UCL = 1.466, LCL = 0.396]
55. During the Execution of Project
Predict Req Detection DD from actual values of x’s
Regression Equation:
Req Defect Density = 0.153 - 0.0618 Dev Effort / Req + 0.0608 Reviewers Domain
Experience + 0.48 Review Effort / Req + 0.23 Req Volatility
[Bar chart comparing, per component (1–11), Predicted Req DD from planned x's with Predicted Req DD from actual x's; both series range from about 0.68 to 1.22 defects / req]
56. During the Execution of Project
Compare the actual Defect Density with the predictions from the planned values of the x's and from the actual values of the x's
[Bar chart comparing, per component (1–11), Predicted Req DD from planned x's, Predicted Req DD from actual x's and Actual Req DD; values range from about 0.68 to 1.33 defects / req]
Note: The existing regression equation may no longer be valid because of the change in process (process improvement)
Calibrate the prediction equation with the new data set
57. Is Improvement Statistically Significant?
Staged Comparison:
[Staged I Chart of actual Req DD (Before vs After): centre line X̄ = 1.038, UCL = 1.674, LCL = 0.402]
A mean shift is observed…!!!
58. Is Improvement Statistically Significant?
Statistical Justification: Test of Hypothesis
H0: μ1 = μ2 — means are the same; there is NO significant difference in DD between the data samples
H1: μ1 ≠ μ2 — means are different; there is a significant difference in DD between the data samples
If P ≤ 0.05, reject H0; if P > 0.05, fail to reject H0.
The means of the two data sets are different.
The improvement is Statistically Significant.
Measure and compare the end results after the completion of the project…
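The staged comparison can be backed by a two-sample t-test; the deck does not name the exact test used, so this sketch applies scipy's independent-samples t-test to hypothetical samples centred near the observed before/after means:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
# Hypothetical before/after Req DD samples, centred near the deck's
# observed means (about 0.56 before the improvement, 0.93 after)
before = rng.normal(0.56, 0.27, 30)
after  = rng.normal(0.93, 0.27, 30)

t_stat, p_value = ttest_ind(before, after)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value <= 0.05:
    print("Reject H0: the mean shift is statistically significant")
```

A p-value at or below 0.05 rejects H0 and confirms that the observed mean shift is statistically significant rather than noise.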
59. Looking back…
We have seen one of the ways of…
• Assessing the confidence of project in meeting the project’s multiple goals
• Identifying the Critical Sub-Process with Quantitative justification
• Setting Quantitative project improvement goal
• Defining Sub-Process level Model and arriving at Critical & Controllable factors
• Arriving at a “Probabilistic” Model from a “Deterministic” Model
• Doing “What-if” analysis for a proposed process improvement
• Demonstrating whether the proposed solution will meet the project’s objective
(end process result), before deploying the solution
• Demonstrating the usage of models at different stages of the project lifecycle
• Demonstrating that the improved process is statistically significant
61. Acknowledgement
The authors wish to thank the Management of Honeywell Technology Solutions Pvt. Ltd., Bangalore for the opportunity to present this paper.
Thanks to Venkatachalam V. & Dakshina Murthy for their guidance & support.
62. Contact Details
Office Address:
Honeywell Technology Solutions Ltd.,
151/1, Doraisanipalya, Bannerghatta Road
Bangalore – 560 226, Karnataka State, India.
Phone: +91-80-2658 8360, +91-80-4119 7222
Fax: +91-80-2658 4750

Sreenivasa M Gangadhara
Six Sigma Black Belt, Functional Specialist-Process
Sreenivasa.gangadhara@honeywell.com | Mobile: +91-98804 24780

Ajay Simha
Six Sigma Green Belt, Principal Engineer
Ajay.simha@honeywell.com | Mobile: +91-98864 99404

Amit Bhattacharjee
Six Sigma Black Belt, Principal Engineer
Amit.bhattacharjee@honeywell.com | Mobile: +91-99860 22908

Archana Kumar
Principal Engineer
Archana.kumar@honeywell.com | Mobile: +91-97407 77667