2. OUTLINE
Process Variation
Sampling Methods
Verification and Validation
Risk management tools
Six Levels of Cognition based on Bloom's Taxonomy
4. Process Variation
Process variation is the main cause of quality problems, whether in
business (transactional) or production processes.
It is the inevitable change in the output or result of a system (process),
because all systems vary over time. The two major types of variation
are (1) common cause, which is inherent in a system, and (2) special
cause, which arises from changes in the circumstances or environment.
5. Process Variation
Causes of process variation
Process variation is a result that differs from the range expected for a process. It may be
caused by a wide variety of factors, including:
resource variation
human error (e.g., setup employees did not set the fill rate correctly)
wear and tear (equipment is slightly worn)
information system (e.g., did not translate the targeted fill rate correctly)
line speed
temperature
new process
new equipment
new workers
new materials
6. Process Variation
1. Common And Special Causes
Common and special causes are the two distinct origins of variation in a process, as defined in
the statistical thinking and methods of Walter A. Shewhart and W. Edwards Deming. Briefly,
"common causes", also called natural patterns, are the usual, historical, quantifiable variation
in a system, while "special causes" are unusual, not previously observed, non-quantifiable
variation.
7. Process Variation
Common-cause variation
Common-cause variation is fluctuation caused by unknown factors, resulting in a steady
but random distribution of output around the average of the data. It is a measure of the
process's potential, or how well the process can perform when special-cause variation is
removed; therefore, it is a measure of the process technology. Common-cause variation is
also called random variation, noise, noncontrollable variation, within-group variation, or
inherent variation.
Common-cause variation is characterised by:
Phenomena constantly active within the system;
Variation predictable probabilistically;
Irregular variation within an historical experience base; and
Lack of significance in individual high or low values.
8. Process Variation
Walter A. Shewhart originally used the term chance cause. The term common cause was
coined by Harry Alpert in 1947. The Western Electric Company used the term natural
pattern. Shewhart called a process that features only common-cause variation as being in
statistical control. This term is deprecated by some modern statisticians who prefer the
phrase stable and predictable.
Common-cause variation is the variation remaining after the special (non-normal) causes
have been removed; these causes fall under one or more of the 5Ms and an "E" (Manpower,
Material, Method, Measurement, Machine, and Environment), also known as the 6Ms
(Manpower, Mother Nature, Materials, Method, Measurement, and Machine).
9. Process Variation
Examples of Common causes
Inappropriate procedures
Poor design
Poor maintenance of machines
Lack of clearly defined standard operating
procedures
Poor working conditions, e.g. lighting, noise,
dirt, temperature, ventilation
Substandard raw materials
Measurement error
Quality control error
Vibration in industrial processes
Ambient temperature and humidity
Normal wear and tear
Variability in settings
Computer response time
10. Process Variation
Special Cause Variation
Special-cause variation is the result of unpredictable errors. For example, a new admitter
without proper training is put on the midnight shift of a busy inner-city emergency room.
Clearly, the number of admitting errors will be very high until she obtains more training,
coaching, and experience; how many errors she will actually make is highly unpredictable.
In this situation, the root problem is not the process but one of the admitters. A control
chart helps to clearly distinguish between special-cause and common-cause variation.
Special-cause variation always arrives as a surprise. It is the signal within a system.
Walter A. Shewhart originally used the term assignable cause. The term special-cause was
coined by W. Edwards Deming. The Western Electric Company used the term unnatural
pattern.
11. Process Variation
Examples Of Special causes
Poor adjustment of equipment
Operator falls asleep
Faulty controllers
Machine malfunction
Fall of ground
Computer crash
Poor batch of raw material
Power surges
High healthcare demand from elderly
people
Broken part
Abnormal traffic (click fraud) on web ads
Extremely long lab testing turnover time
due to switching to a new computer system
Operator absent
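The distinction above can be sketched with a simple Shewhart-style calculation (an illustration, not from the slides): control limits are set from a stable baseline period, and points outside those limits are treated as special-cause signals.

```python
import statistics

def control_limits(baseline):
    """Mean +/- 3 sample standard deviations, computed from a stable baseline."""
    mean = statistics.fmean(baseline)
    s = statistics.stdev(baseline)
    return mean - 3 * s, mean + 3 * s

def special_cause_signals(baseline, observations):
    """Points outside the 3-sigma limits are flagged as special-cause signals;
    variation inside the limits is attributed to common causes."""
    lcl, ucl = control_limits(baseline)
    return [x for x in observations if x < lcl or x > ucl]
```

A real individuals chart would normally derive limits from the average moving range; the sample standard deviation is used here only to keep the sketch short.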
12. Process Variation
2. Process performance metrics
A performance metric is a measure of an organization's activities and performance, and it
helps determine the organization's behaviour. Performance metrics should support a range
of stakeholder needs, from customers and shareholders to employees.
In project management, performance metrics are used to assess the health of the project
and consist of the measuring of seven criteria: safety, time, cost, resources, scope, quality,
and actions.
There are a variety of ways in which organizations may react to results. This may be to
trigger specific activity relating to performance (i.e., an improvement plan) or to use the
data merely for statistical information. Often closely tied in with outputs, performance
metrics should usually encourage improvement, effectiveness and appropriate levels of
control.
Performance metrics are often linked in with corporate strategy and are often derived in
order to measure performance against a critical success factor.
13. Process Variation
Performance Metric and Description
1. Percentage Defective: What percentage of parts contain one or more defects?
2. Parts per Million (PPM): What is the average number of defective parts per million? This is the defective proportion from metric 1 multiplied by 1,000,000.
3. Defects per Unit (DPU): What is the average number of defects per unit?
4. Defects per Opportunity (DPO): What is the average number of defects per opportunity? (where opportunity = the number of different ways a defect can occur in a single part)
Here is a list of the performance metrics, spelled out and then given an acronym if one is commonly used, with a description of what each metric means.
14. Process Variation
5. Defects per Million Opportunities (DPMO): The defects per opportunity from metric 4 above multiplied by 1,000,000.
6. Rolled Throughput Yield (RTY): The yield stated as the percentage of parts that go through a multi-stage process without a defect.
7. Process Sigma: The sigma level associated with either the DPMO or PPM level found in metric 5 or 2 above.
8. Cost of Poor Quality: The cost of defects, either internal (rework/scrap) or external (warranty/product liability).
15. Process Variation
Performance metrics–Discussion and examples
1. Percentage Defective
This is defined as the:
(Total number of defective parts)/(Total number of parts) X 100
So if there are 1,000 parts and 10 of those are defective, the percentage of defective parts is
(10/1000) X 100 = 1%
2. PPM
This is the same ratio as in metric 1, but multiplied by 1,000,000. For the example given
above, 1 out of 100 parts being defective means that 10,000 out of 1,000,000 will be defective,
so the PPM = 10,000.
NOTE: Percentage defective and PPM only tell you whether a unit has one or more defects. To
get a clear picture of how many defects there are (since each unit can have multiple defects),
you need to go to metrics 3, 4, and 5.
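These two metrics can be computed directly (a minimal sketch; the function names are ours, not from the text):

```python
def percentage_defective(defective, total):
    """Metric 1: percentage of parts with one or more defects."""
    return defective / total * 100

def ppm(defective, total):
    """Metric 2: defective parts per million (defective fraction x 1,000,000)."""
    return defective / total * 1_000_000
```

For the example above, `percentage_defective(10, 1000)` gives 1% and `ppm(10, 1000)` gives 10,000.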
16. Process Variation
3. Defects per Unit
Here the AVERAGE number of defects per unit is calculated, which means you have to
categorize the units by how many defects they have, from 0, 1, 2, up to the maximum number.
Take the following chart, which shows how many units out of 100 total have 0, 1, 2, etc., defects,
all the way to the maximum of 5.
Defects:    0   1   2   3   4   5
# of Units: 70  20  5   4   0   1
The average number of defects is DPU = [Sum of all (D * U)]/100 =
[(0 * 70) + (1 * 20) + (2 * 5) + (3 * 4) + (4 * 0) + (5 * 1)]/100 = 47/100 = 0.47
4. Defects per Opportunity
How many ways are there for a defect to occur in a unit? This is called a defect “opportunity”,
which is akin to a “failure mode”. Let’s take the previous example in metric 3. Assume that
each unit can have a defect occur in one of 6 possible ways. Then the number of
opportunities for a defect in each unit is 6.
Then DPO = DPU/O = 0.47/6 = 0.078333
17. Process Variation
5. Defects per Million Opportunities
This is EXACTLY analogous to the difference between Percentage Defective and PPM,
metrics 1 and 2: you get it by taking metric 4, the Defects per Opportunity, and
multiplying by 1,000,000. Continuing the example from metric 4:
DPMO = DPO * 1,000,000 = 0.078333 * 1,000,000 = 78,333
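The chain from DPU to DPO to DPMO can be reproduced in a few lines, using the unit counts from the chart in metric 3 (a sketch; variable names are illustrative):

```python
# Units grouped by how many defects each contains (chart from metric 3)
units_by_defects = {0: 70, 1: 20, 2: 5, 3: 4, 4: 0, 5: 1}

total_units = sum(units_by_defects.values())                     # 100
total_defects = sum(d * u for d, u in units_by_defects.items())  # 47

dpu = total_defects / total_units   # 0.47 defects per unit
opportunities = 6                   # ways a defect can occur in one unit
dpo = dpu / opportunities           # 0.078333...
dpmo = dpo * 1_000_000              # about 78,333
```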
6. Rolled Throughput Yield
This takes the percentage of units that pass through several subprocesses of an entire
process without a defect.
The number of units without a defect equals the number of units that enter a process
minus the number of defective units. Let the number of units that enter a subprocess be P
and the number of defective units be D. Then the first-pass yield (FPY) for that subprocess
equals (P – D)/P. Once you have the FPY for each subprocess, you multiply them all together.
If the yields of 4 subprocesses are 0.994, 0.987, 0.951 and 0.990, then the
RTY = (0.994)(0.987)(0.951)(0.990) = 0.924 or 92.4%.
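The RTY calculation is just the product of the first-pass yields; a one-line sketch:

```python
from math import prod

# First-pass yields of the four subprocesses from the example above
fpys = [0.994, 0.987, 0.951, 0.990]
rty = prod(fpys)  # about 0.924, i.e., 92.4%
```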
18. Process Variation
7. Process Sigma
What is a Six Sigma process? A process sigma level relates the process spread to the
specification limits. A Six Sigma process is one whose mean lies six standard deviations
from the nearer specification limit; for a centred process with a mean of 0 and a standard
deviation of 1, the upper specification limit (USL) and lower specification limit (LSL) sit at
+6 and -6, respectively. However, there is also the matter of the 1.5-sigma shift, which is
assumed to occur over the long term; with that shift, a Six Sigma process produces about
3.4 defects per million opportunities.
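Under the conventional 1.5-sigma shift, the sigma level for a given long-term DPMO can be recovered from the normal distribution (a sketch; `process_sigma` is our name, not a standard function):

```python
from statistics import NormalDist

def process_sigma(dpmo, shift=1.5):
    """Short-term sigma level implied by a long-term DPMO,
    assuming the conventional 1.5-sigma long-term shift."""
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + shift
```

For example, 3.4 DPMO corresponds to a sigma level of about 6, and 308,537 DPMO to about 2.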
8. Cost of poor quality
Also known as the cost of nonconformance, this is the cost incurred to deal with defects,
either
a) internally, i.e., before they leave the company, through scrapping, repairing, or
reworking the parts, or
b) externally, i.e., after they leave the company, through costs of warranty, returned
merchandise, or product liability claims and lawsuits.
This is obviously more difficult to calculate because the external costs can be delayed by
months or even years after the products are sold. It’s best, therefore, to measure those costs
which are relatively easy to calculate and quickly available, i.e., the internal costs of poor
quality.
19. Process Variation
Cp and Cpk
Cp and Cpk are statistical measures of process quality capability. Some segments of
manufacturing have specified minimal requirements for these parameters, even in some
of their key documents, such as advanced product quality planning and ISO/TS 16949.
Cp and Cpk may be calculated even when the process is not stable, when one wishes to
estimate how good the process might be if no special causes existed.
Cpk uses a "best estimate" of the true process standard deviation (sigma-hat). Special
causes are excluded from the data when appropriate, to estimate the "potential" natural
process variation. A theoretical process sigma-hat is calculated and Cp/Cpk estimated.
Cp = Process Capability: a simple and straightforward indicator of process capability.
Cpk = Process Capability Index: an adjustment of Cp for the effect of a non-centred distribution.
20. Process Variation
Cp
This is a process capability index that indicates the process’ potential performance by
relating the natural process spread to the specification (tolerance) spread. It is often used
during the product design phase and pilot production phase.
Cp = Specification Range / 6s = (USL − LSL) / 6s
where USL is the Upper Specification Limit and LSL is the Lower Specification Limit.
When calculating Cp, the evaluation considers only the quantity of process variation relative to the
specification limit range. This method, besides being applicable only to processes with both upper and
lower specification limits, does not provide information about process centring.
21. Process Variation
Cpk (2-Sided Specification Limits)
This is a process capability index that indicates the process actual performance by
accounting for a shift in the mean of the process toward either the upper or lower
specification limit. It is often used during the pilot production phase and during routine
production phase.
Cpku = (USL − x̄) / 3s (capability relative to the Upper Specification Limit)
Cpkl = (x̄ − LSL) / 3s (capability relative to the Lower Specification Limit)
Cpk = min(Cpku, Cpkl)
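A minimal computation of both indices, assuming a sample of measurements and two-sided specification limits (the helper name is ours):

```python
import statistics

def cp_cpk(data, usl, lsl):
    """Cp from the spec range over 6 sigma-hat; Cpk adjusts for an
    off-centre mean by taking the worse of the two one-sided indices."""
    mean = statistics.fmean(data)
    s = statistics.stdev(data)  # sigma-hat: sample standard deviation
    cp = (usl - lsl) / (6 * s)
    cpku = (usl - mean) / (3 * s)
    cpkl = (mean - lsl) / (3 * s)
    return cp, min(cpku, cpkl)
```

For a perfectly centred process, Cpk equals Cp; as the mean drifts toward a limit, Cpk falls below Cp.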
22. Process Variation
Outlier
An outlier is an observation point that is distant from other observations. An outlier may be
due to variability in the measurement, or it may indicate experimental error; the latter are
sometimes excluded from the data set.
Although definitions vary, an outlier is generally considered to be a data point that is far
outside the norm for a variable or population (e.g., Jarrell, 1994; Rasmussen, 1988; Stevens,
1984). Hawkins described an outlier as an observation that “deviates so much from other
observations as to arouse suspicions that it was generated by a different mechanism”
(Hawkins, 1980, p.1). Outliers have also been defined as values that are “dubious in the eyes
of the researcher”(Dixon, 1950, p. 488) and contaminants (Wainer, 1976).
Outliers can arise from several different mechanisms or causes. Anscombe (1960) sorts
outliers into two major categories: those arising from errors in the data, and those arising
from the inherent variability of the data. Not all outliers are illegitimate contaminants,
and not all illegitimate scores show up as outliers (Barnett & Lewis, 1994). It is therefore
important to consider the range of causes that may be responsible for outliers in a given
data set. What should be done about an outlying data point is at least partly a function of
the inferred cause.
23. Process Variation
Outliers from data errors. Outliers are often caused by human error, such as errors in data
collection, recording, or entry. Data from an interview can be recorded incorrectly, or
miskeyed upon data entry.
Outliers from intentional or motivated mis-reporting. There are times when participants
purposefully report incorrect data to experimenters or surveyors.
Outliers from sampling error. Another cause of outliers or fringeliers is sampling. It is
possible that a few members of a sample were inadvertently drawn from a different
population than the rest of the sample.
Outliers from standardization failure. Outliers can be caused by research methodology,
particularly if something anomalous happened during a particular subject’s experience.
Outliers from faulty distributional assumptions. Incorrect assumptions about the
distribution of the data can also lead to the presence of suspected outliers (e.g., Iglewicz &
Hoaglin, 1993).
24. Process Variation
Outliers as legitimate cases sampled from the correct population. Finally, it is possible that
an outlier can come from the population being sampled legitimately through random
chance. It is important to note that sample size plays a role in the probability of outlying
values. Within a normally distributed population, it is more probable that a given data point
will be drawn from the most densely concentrated area of the distribution, rather than one
of the tails (Evans, 1999; Sachs, 1982). As a researcher casts a wider net and the data set
becomes larger, the sample increasingly resembles the population from which it was drawn,
and thus the likelihood of outlying values becomes greater.
Outliers as potential focus of inquiry. We all know that interesting research is often as much
a matter of serendipity as planning and inspiration. Outliers can represent a nuisance, error,
or legitimate data.
25. Process Variation
Impact of Outliers on Distributions
Outliers are isolated extreme high or low values. If they exist, the distribution is skewed in the
direction of the outlier(s).
A. How to identify outliers:
a. Outside 2 standard deviations
b. Outside 3 standard deviations
c. Outside the 99th percentile
d. Depends on the study and the variable
B. Outlier effect on central tendency:
1. Little impact on mode and median
2. Big impact on the mean:
extremely high values pull the mean up;
extremely low values pull the mean down.
E.g., for age data, an age of 99 can pull the mean up to 60, while an age of 10 can pull it down to 19.
3. In a normally distributed variable, there are no extreme outliers.
C. Outlier effect on dispersion:
1. Big impact on range, variance, and standard deviation.
2. Remove/transform them before calculating the standard deviation.
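Rules (a) and (b) above can be sketched as a z-score filter, and the mean/median contrast checked directly (illustrative code, not from the slides):

```python
import statistics

def zscore_outliers(data, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean
    (rule (a) above; use threshold=3.0 for rule (b))."""
    mean = statistics.fmean(data)
    s = statistics.stdev(data)
    return [x for x in data if abs(x - mean) > threshold * s]

ages = [18, 19, 20, 21, 22]
with_outlier = ages + [99]
# The mean jumps from 20 to about 33, while the median only moves
# from 20 to 20.5: outliers hit the mean and dispersion much harder.
```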
27. Sampling Methods
1. Acceptance sampling plans
Acceptance sampling uses statistical sampling to determine whether to accept or reject a
production lot of material. It has been a common quality control technique used in industry.
It is usually done as products leave the factory, or in some cases even within the factory.
Most often a producer supplies a consumer a number of items, and a decision to accept or
reject the lot is made by determining the number of defective items in a sample from the
lot. The lot is accepted if the number of defectives does not exceed the acceptance number;
otherwise the lot is rejected.
Sampling plans are used to protect against the quality of submitted lots degrading below
the level considered permissible by the consumer. They also protect the producer, in the
sense that lots produced at permissible levels of quality will have a good chance of being
accepted by the plan.
28. Sampling Methods
Types of acceptance sampling plans
Sampling plans can be categorized across several dimensions:
Sampling by attributes vs. sampling by variables: When the item inspection leads to a
binary result (either the item is conforming or nonconforming) or the number of
nonconformities in an item are counted, then we are dealing with sampling by
attributes. If the item inspection leads to a continuous measurement, then we are
sampling by variables.
Incoming vs. outgoing inspection: If batches are inspected before the product is
shipped to the consumer, it is called outgoing inspection. If the inspection is done by
the consumer, after the batches were received from the supplier, it is called incoming
inspection.
Rectifying vs. non-rectifying sampling plans: Determines what is done
with nonconforming items that were found during the inspection. When the cost of
replacing faulty items with new ones, or reworking them is accounted for, the sampling
plan is rectifying.
29. Sampling Methods
Single, double, and multiple sampling plans: The sampling procedure may consist of drawing
a single sample, or it may be done in two or more steps. A double sampling procedure means
that if the sample taken from the batch is not informative enough, another sample is taken.
In multiple sampling, additional samples can be drawn after the second sample.
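For a single sampling plan, the probability of accepting a lot follows the binomial distribution; this sketch (names ours) gives the acceptance probability for a lot with fraction defective p:

```python
from math import comb

def accept_prob(n, c, p):
    """Single sampling plan: draw n items from a large lot with fraction
    defective p; accept the lot if at most c defectives are found."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))
```

Plotting `accept_prob` against p for fixed n and c gives the plan's operating characteristic (OC) curve.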
36. Sampling Methods
2. Types of sampling
A sample is “a smaller (but hopefully representative) collection of units from a
population used to determine truths about that population” (Field, 2005)
Three factors influence how representative a sample is:
Sampling procedure
Sample size
Participation (response)
When might you sample the entire population?
When your population is very small
When you have extensive resources
When you don’t expect a very high response rate
38. Sampling Methods
Random sampling
is the purest form of probability sampling. Each member of the population has an equal
and known chance of being selected. When there are very large populations, it is often
difficult or impossible to identify every member of the population, so the pool of available
subjects becomes biased.
Disadvantages
If the sampling frame is large, this method is impracticable.
Minority subgroups of interest in the population may not be present in the sample in
sufficient numbers for study.
39. Sampling Methods
Systematic sampling
is often used instead of random sampling. It is also called an Nth name selection
technique. After the required sample size has been calculated, every Nth record is
selected from a list of population members. As long as the list does not contain any
hidden order, this sampling method is as good as the random sampling method. Its only
advantage over the random sampling technique is simplicity. Systematic sampling is
frequently used to select a specified number of records from a computer file.
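The Nth-name technique can be sketched in a few lines (assuming the list fits in memory; names are illustrative):

```python
import random

def systematic_sample(population, n):
    """Every k-th record after a random start (the 'Nth name' technique)."""
    k = len(population) // n
    start = random.randrange(k)
    return population[start::k][:n]
```

If the list has a hidden periodicity that matches the sampling interval k, the resulting sample will be biased.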
40. Sampling Methods
ADVANTAGES:
Sample easy to select
Suitable sampling frame can be identified easily
Sample evenly spread over entire reference population
DISADVANTAGES:
Sample may be biased if a hidden periodicity in the population coincides with
that of the selection.
Difficult to assess the precision of the estimate from one survey.
41. Sampling Methods
Stratified sampling
is a commonly used probability method that is superior to random sampling because it
reduces sampling error. A stratum is a subset of the population that shares at least one
common characteristic. Examples of strata might be males and females, or managers
and non-managers. The researcher first identifies the relevant strata and their actual
representation in the population. Random sampling is then used to select
a sufficient number of subjects from each stratum. "Sufficient" refers to a sample size large
enough for us to be reasonably confident that the stratum represents the population.
Stratified sampling is often used when one or more of the strata in the population
have a low incidence relative to the others.
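Proportionate stratified selection can be sketched as follows (function and parameter names are ours):

```python
import random
from collections import defaultdict

def stratified_sample(population, key, fraction):
    """Draw the same fraction from each stratum (proportionate allocation)."""
    strata = defaultdict(list)
    for item in population:
        strata[key(item)].append(item)
    sample = []
    for members in strata.values():
        k = max(1, round(len(members) * fraction))  # at least one per stratum
        sample.extend(random.sample(members, k))
    return sample
```

With 80 men and 20 women and a 10% fraction, the sample contains 8 men and 2 women, preserving each stratum's representation.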
42. Sampling Methods
Disadvantages:
First, a sampling frame of the entire population has to be prepared separately for each
stratum.
Second, when examining multiple criteria, stratifying variables may be related to some,
but not to others, further complicating the design, and potentially reducing the utility of
the strata.
Finally, in some cases (such as designs with a large number of strata, or those with a
specified minimum sample size per group), stratified sampling can potentially require a
larger sample than would other methods
43. Sampling Methods
Cluster Sampling
Cluster sampling is an example of 'two-stage sampling' .
In the first stage, a sample of areas is chosen;
in the second stage, a sample of respondents within those areas is selected.
Population divided into clusters of homogeneous units, usually based on geographical
contiguity.
Sampling units are groups rather than individuals.
A sample of such clusters is then selected.
All units from the selected clusters are studied.
Advantages :
Cuts down on the cost of preparing a sampling frame.
This can reduce travel and other administrative costs.
Disadvantages:
Sampling error is higher than for a simple random sample of the same size.
44. Sampling Methods
Difference Between Strata and Clusters
Although strata and clusters are both non-overlapping subsets of the population, they
differ in several ways.
All strata are represented in the sample; but only a subset of clusters are in the sample.
With stratified sampling, the best survey results occur when elements within strata are
internally homogeneous. However, with cluster sampling, the best results occur when
elements within clusters are internally heterogeneous.
46. 3. Sampling Terms:
1. Consumer risk is the probability that a product will be manufactured that is defective and
shipped to the customer. A person with a customer-only focus will typically want to have a
very small consumer risk. A person with a producer-only focus typically is not very
concerned with consumer risk. Low consumer risk can sometimes be accomplished by
rigorous testing and quality control, which, when carried to an extreme in order to reach
zero consumer risk, can lead to very expensive products.
2. Producer risk is the probability that a product will be manufactured that is good, but is
rejected by the manufacturer's internal quality control processes before it is shipped to the
customer. A person with a producer-only focus will typically want to have a very small
producer risk. A person with a consumer-only focus typically is not very concerned with
producer risk. Low producer risk can be accomplished by lax testing and quality control,
which, when carried to an extreme in order to reach zero producer risk, can lead to very
poorly-performing or non-yielding products.
The key to high yielding and reliable products is in achieving a balance between these two
sometimes-competing goals.
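Both risks can be read off the binomial acceptance probability of a concrete single sampling plan; the plan and quality levels below are hypothetical, chosen only for illustration:

```python
from math import comb

def accept_prob(n, c, p):
    """P(accept) for a single sampling plan: n sampled, accept on <= c defectives."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

# Hypothetical plan: sample 50 items, accept the lot on 2 or fewer defectives.
producer_risk = 1 - accept_prob(50, 2, 0.01)  # good lot (1% defective) rejected
consumer_risk = accept_prob(50, 2, 0.10)      # bad lot (10% defective) accepted
```

Tightening the plan (larger n, smaller c) lowers consumer risk but raises producer risk, which is exactly the balance the text describes.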
47. Sampling Methods
3. Target Population is the entire group a researcher is interested in; the group about which
the researcher wishes to draw conclusions.
4. Independent samples are those selected from the same population, or from different
populations, which have no effect on one another; that is, no correlation exists between the
samples.
6. Bias is a term which refers to how far the average statistic lies from the parameter it is
estimating, that is, the error which arises when estimating a quantity. Errors from chance will
cancel each other out in the long run; those from bias will not.
7. Confidence level refers to the percentage of all possible samples that can be expected to
include the true population parameter. For example, suppose all possible samples were
selected from the same population, and a confidence interval were computed for each sample.
A 95% confidence level implies that 95% of the confidence intervals would include the true
population parameter.
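This coverage interpretation can be checked by simulation (a sketch using a known-sigma z-interval; all names are illustrative):

```python
import random
import statistics

def ci_coverage(trials=2000, n=30, mu=0.0, sigma=1.0, z=1.96):
    """Fraction of 95% z-intervals (sigma known) that contain the true mean."""
    hits = 0
    for _ in range(trials):
        sample = [random.gauss(mu, sigma) for _ in range(n)]
        centre = statistics.fmean(sample)
        half = z * sigma / n ** 0.5
        if centre - half <= mu <= centre + half:
            hits += 1
    return hits / trials
```

Running it yields a coverage fraction close to 0.95, matching the 95% confidence level.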
49. Change Control and Configuration Management
CHANGE CONTROL
Change control within quality management systems (QMS) and information technology
(IT) systems is a formal process used to ensure that changes to a product or system are
introduced in a controlled and coordinated manner. It reduces the possibility that
unnecessary changes will be introduced to a system without forethought, introducing
faults into the system or undoing changes made by other users of software. The goals of
a change control procedure usually include minimal disruption to services, reduction in
back-out activities, and cost-effective utilization of resources involved in implementing
change.
Therefore, an effective change control system is a key component of any quality
assurance system.
52. CONFIGURATION MANAGEMENT SYSTEM
A configuration management system includes the set of policies, practices, and tools
that help an organization maintain software configurations. The primary purpose of a
configuration management system is to maintain the integrity of the software artifacts
of an organization. Consequently, configuration management systems identify the
history of software artifacts and their larger aggregate configurations, systematically
control how these artifacts change over time, and maintain interrelationships among
them.
Principles
Principle 1: Protect critical data and other resources.
The process of developing software produces many artifacts. Some of these artifacts
include the definition of requirements, design specifications, work breakdown structures,
test plans, and code. All of these artifacts generally undergo numerous revisions as they
are created. The loss of such artifacts and their revisions can cause great harm (e.g.,
financial loss, schedule slip) to an organization. Thus, it is vital that these artifacts and
their interrelationships be reliably maintained. This implies that these artifacts are
always accessible to consumers or quickly recoverable when failure does occur.
53. Change Control and Configuration Management
Principle 2: Monitor and control software development procedures and processes.
An organization should define the processes and procedures that it uses to produce
artifacts. Such definition will provide a basis for measuring the quality of the processes and
procedures. However, to produce meaningful measures of the processes and procedures,
the organization must follow them. Consequently, the organization must monitor its
practitioners to ensure that they follow the software development processes and
procedures.
Principle 3: Automate processes and procedures when cost effective.
The automation of processes and procedures has two primary benefits. First, it guarantees
that an organization consistently applies them, which means that it is more likely to
produce quality products. Second, automation improves the productivity of the people
that must execute the processes and procedures because such automation reduces the
tasks that they must perform, which permits them to perform more work.
54. Change Control and Configuration Management
Principle 4: Provide value to customers.
Three issues ultimately affect the success of a product. The first one is that a product
must reliably meet the needs of its customers. That is, it must provide the desired
functionality and do it in a consistent and reliable manner. Second, a product should be
easy to use. Third, an organization must address user concerns and issues in a timely
manner. All three of these issues affect customer value, and a configuration
management tool should automate those practices that provide the greatest value to its
user community.
Principle 5: Software artifacts should have high quality.
There are many measures of product quality. Such measures attempt to identify several
qualities of a product, such as its adaptability, efficiency, generality, maintainability,
reliability, reusability, simplicity, and understandability.
Principle 6: Software systems should be reliable.
Software systems should work as their users expect them to function. They also should
have no significant defects, which means that software systems should never cause
significant loss of data or otherwise cause significant harm. Thus, these systems should
be highly accessible and require little maintenance.
55. Change Control and Configuration Management
Principle 7: Assure that products provide only necessary features, or those having high
value.
Products should only provide the required features and capabilities desired by their
users. The addition of nonessential features and capabilities that provide little, if any,
value to the users tends to lower product quality. Besides, an organization can better
use the expended funds in another manner.
Principle 8: Software systems should be maintainable.
Maintainable software systems are generally simple, highly modular, and well designed
and documented. They also tend to exhibit low coupling. Since most software is used for
many years, maintenance costs for large software systems generally exceed original
development costs.
56. Change Control and Configuration Management
Principle 9: Use critical resources efficiently.
Numerous resources are used or consumed to develop software, as well as by the
software products themselves. Such resources are generally scarce and an organization
should use them as efficiently as possible.
Principle 10: Minimize development effort.
Human effort is a critical resource, but one that is useful to distinguish from those that
do not involve personnel. The primary motivation to efficiently use human resources is
to minimize development costs. In addition, the benefits of minimizing the number of
personnel used to develop software increase at a greater than linear rate.
57. Change Control and Configuration Management
CM IN HARDWARE AND PRODUCT
Configuration Management (CM) is the application of appropriate resources, processes,
and tools to establish and maintain consistency between the product requirements, the
product, and associated product configuration information.
58. Change Control and Configuration Management
CM facilitates orderly identification of product attributes, and:
Provides control of product information.
Manages product changes that improve capabilities, correct deficiencies, improve
performance, enhance reliability and maintainability, or extend product life.
Manages departures from product requirements.
BOM Management
Reuse assemblies/parts
Baseline Management
As-Built, As-Designed, As-Maintained tracking
Action Item tracking
Configuration Management best practices built-in
Embedded Rules base
Item Definition
Multi- and Single-level Used-On Queries
Change Tracking – more than just a form
Configuration Item Identification
Multiple Product Line Management 45
60. DEFINITION
Verification and Validation
Verification and validation is the generic name given to the checking processes that
ensure the product and process conform to their specification and meet the
needs of the customer. 1
It starts with requirements reviews and continues through design and
code reviews to product testing. 1
Verification and validation are independent procedures that are used together to check
that a product, service, or system meets requirements
and specifications and that it fulfills its intended purpose. 2
61. Verification and Validation
Verification and validation are key among quality tools and
techniques.
The results of verification and validation form an important
component of the safety case, a document used to support
certification.
Even thorough verification and validation cannot prove that the system
is safe or dependable, and there is always a question of how much testing
is enough.
62. Verification and Validation: DIFFERENCE
Verification is the confirmation, through objective
evidence, that the specified requirements have been
fulfilled. Verification tasks all point back to the
requirements. Does the design correctly and completely
embody the requirements? Is the implementation a
correct representation of the requirements? Is the
system being built right? 3
Validation is the confirmation, through objective
evidence, that the system will perform its intended
functions. The intended functions, and how well the
system performs those functions, are determined by the
customer. Did you create the system the customer really
wanted? Will the system fulfill the customer's needs? Is
this the right system for the customer? 3
63. VERIFICATION TECHNIQUES
There are many different verification techniques, but they all fall into two major categories - dynamic testing and static testing.
Dynamic testing - Testing that involves the execution of a system or component. A
number of test cases are chosen, where each test case consists of test data. These input
test cases are used to determine output test results. Dynamic testing can be further divided
into three categories - functional testing, structural testing, and random testing.
Functional testing - Testing that involves identifying and testing all the functions of the
system as defined within the requirements. This form of testing is an example of black-box
testing, since it involves no knowledge of the implementation of the system.
Structural testing - Testing that has full knowledge of the implementation of the system
and is an example of white-box testing. It uses information from the internal structure
of a system to devise tests that check the operation of individual components. Functional and
structural testing both choose test cases that investigate a particular characteristic of the
system. 4
Verification and Validation
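The black-box idea above can be made concrete with a small sketch: a functional test checks outputs against the requirement alone, with no knowledge of the implementation. The unit under test and its requirement here are invented for illustration.

```python
# Black-box functional test: each case pairs an input with the output the
# requirement demands; the implementation is treated as opaque.

def spec_abs(x):
    """Unit under test; stands in for any implementation of the
    requirement 'return the magnitude of x'."""
    return x if x >= 0 else -x

def functional_tests():
    cases = [(5, 5), (-5, 5), (0, 0), (-1.5, 1.5)]
    for given, expected in cases:
        assert spec_abs(given) == expected, f"requirement violated for {given}"
    return "all functional tests passed"

print(functional_tests())
```

A structural (white-box) test of the same function would instead use knowledge of its branches, for example forcing both the `x >= 0` and `x < 0` paths.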
64. Random testing - Testing that freely chooses test cases from the set of all possible test
cases. The use of randomly determined inputs can detect faults that go undetected by
other, systematic testing techniques. Exhaustive testing, where the input test cases
consist of every possible set of input values, is a form of random testing. Although
exhaustive testing performed at every stage of the life cycle would result in a complete
verification of the system, it is realistically impossible to accomplish. [Andriole86]
Consistency techniques - Techniques used to ensure program properties such as
correct syntax, correct parameter matching between procedures, correct typing, and
correct translation of requirements and specifications.
Measurement techniques - Techniques that measure properties such as error proneness,
understandability, and well-structuredness.
4
Verification and Validation
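The random-testing idea above can be sketched as drawing inputs freely from the input space and checking a property that must hold for every input. The unit under test and the property are illustrative.

```python
import random

def unit_under_test(xs):
    # Stand-in implementation whose output must always be sorted.
    return sorted(xs)

def is_sorted(xs):
    return all(a <= b for a, b in zip(xs, xs[1:]))

def random_test(trials=1000, seed=42):
    rng = random.Random(seed)  # seeded so any failure is reproducible
    for _ in range(trials):
        # Freely chosen test case from the space of short integer lists.
        case = [rng.randint(-100, 100) for _ in range(rng.randint(0, 10))]
        out = unit_under_test(case)
        assert is_sorted(out) and out == sorted(case)
    return trials

print(random_test(), "random test cases passed")
```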
65. VALIDATION TECHNIQUES
There are also numerous validation techniques, including formal methods, fault injection, and
dependability analysis. Validation usually takes place at the end of the development cycle, and
looks at the complete system as opposed to verification, which focuses on smaller sub-systems.
Formal methods - Formal methods serve not only as a verification technique but also as a validation
technique. The term refers to the use of mathematical and logical techniques to
express, investigate, and analyze the specification, design, documentation, and behavior
of both hardware and software.
Fault injection - Fault injection is the intentional activation of faults by either hardware or
software means to observe the system operation under fault conditions.
Hardware fault injection - Can also be called physical fault injection because we are
actually injecting faults into the physical hardware.
5
Verification and Validation
66. Software fault injection - Errors are injected into the memory of the computer by
software techniques. Software fault injection is basically a simulation of hardware fault
injection.
Dependability analysis - Dependability analysis involves identifying hazards and then
proposing methods that reduce the risk of the hazard occurring.
Hazard analysis - Involves using guidelines to identify hazards, their root causes, and
possible countermeasures.
Risk analysis - Takes hazard analysis further by identifying the possible consequences of
each hazard and their probability of occurring.
5
Verification and Validation
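Software fault injection as described above can be sketched in a few lines, assuming a byte buffer stands in for memory and a simple modular checksum stands in for the fault-detection mechanism being observed; both are illustrative.

```python
import random

def inject_bit_flip(memory: bytearray, rng: random.Random):
    """Flip one randomly chosen bit, simulating a hardware memory fault."""
    byte_i = rng.randrange(len(memory))
    memory[byte_i] ^= 1 << rng.randrange(8)

rng = random.Random(0)
payload = bytearray(b"SENSOR READING: 42")
checksum_before = sum(payload) % 256  # detection mechanism under observation

inject_bit_flip(payload, rng)
checksum_after = sum(payload) % 256

# A single bit flip changes the byte sum by a power of two below 256,
# so this checksum always catches it.
print("fault detected:", checksum_before != checksum_after)
```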
68. Risk Management Tools
“Risk is all about uncertainty or, more importantly, the effect of
uncertainty on the achievement of objectives. The really successful
organizations work on understanding the uncertainty involved in
achieving their objectives and ensuring they manage their risks so as
to ensure a successful outcome.”
-Kevin Knight, International Organization for Standardization (ISO)
69. Risk Management Tools
What is Risk Management?
The objective of risk management is to increase the probability and impact of
positive events and decrease the impact and probability of negative events.
Good risk management helps a project’s stakeholders define the strengths and
weaknesses of a project, promoting awareness.
Risk management is the process of identifying, analyzing, and either accepting or mitigating
uncertainty in decision-making. Essentially, risk management occurs
any time an investor or fund manager analyzes and attempts to quantify the
potential for losses in an investment and then takes the appropriate action (or
inaction) given their investment objectives and risk tolerance. Inadequate risk
management can result in severe consequences for companies as well as
individuals.
6
7
70. Risk Management Tools
Methods For Managing Risk
There are four main ways to manage risk: risk avoidance, risk transfer, risk reduction and risk
acceptance. Each is applicable under different circumstances. Some ways of managing risk fall
into multiple categories. Multiple ways of managing risk are often utilized simultaneously.
71. Risk Management Tools
Risk Avoidance (elimination of risk)
Completely avoiding an activity that poses a potential risk. While attractive, this is not always
practical; by avoiding risk we forfeit potential gains, be it in life, in business, or in
investments.
The Business Dictionary defines risk avoidance as a technique of risk management that
involves:
Taking steps to remove a hazard
Engaging in an alternative activity
Ending a specific exposure
Example: a utility may opt to invest in nuclear
generation in lieu of coal generation to avoid
the foreseen risk of onerous greenhouse
gas regulation.
8
6
72. Risk Management Tools
Risk Transfer (insuring against risk)
Most commonly, this means buying an insurance policy. The risk is transferred to a third-party
entity (in most cases an insurance company). More precisely, the financial risk is
transferred to the third party. For example, a homeowner's insurance policy does not
transfer the risk of a house fire to the insurance company; it only transfers the financial
risk. A house fire is still just as likely as before. Risk sharing is also a type of risk transfer: for
example, members assume a smaller amount of risk by transferring and sharing the
remainder of the risk with the group.
Risk can be transferred away from the organization
managing the project.
Examples:
Warranty
Insurance
Contracting to third parties
8
73. Risk Management Tools
Risk Reduction (mitigating risk)
This is the idea of reducing the extent or possibility of a loss, which can be done by increasing
precautions or limiting the amount of risky activity.
It is the process of identifying, assessing, and controlling risks arising from operational factors,
and making decisions that balance risk cost with mission benefits.
Example:
installing a security alarm
smoke detectors
wearing a seat belt or wearing a helmet
8
9
74. Risk Management Tools
Risk Retention (accepting risk)
Risk retention simply involves accepting the risk. Even if the risk is mitigated, if it is not
avoided or transferred, it is retained. Retention is effective for small risks that do not pose
any significant financial threat.
All businesses accept risk during their operations. Without risk, commerce would cease to
exist.
For good risk management it is important to determine a quantified level of risk the project
is willing to take.
Example: your project may be sensitive to future price adjustments in the market in which
you compete. If the volatility of prices is expected to be under a threshold defined by
management, management may accept the risk and proceed with the project.
8
9
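The acceptance decision described above is often made by comparing expected exposure (probability times impact) against a management-defined threshold. The register entries, probabilities, and threshold below are invented for the sketch.

```python
def risk_exposure(probability: float, impact: float) -> float:
    """Expected loss from a risk: likelihood times financial impact."""
    return probability * impact

ACCEPTANCE_THRESHOLD = 50_000  # illustrative: maximum exposure management will retain

risks = [
    ("price volatility", 0.30, 120_000),
    ("supplier delay",   0.20, 400_000),
    ("minor rework",     0.80,  20_000),
]

for name, p, impact in risks:
    exposure = risk_exposure(p, impact)
    # Below the threshold the risk is retained; above it, another strategy applies.
    decision = "accept (retain)" if exposure <= ACCEPTANCE_THRESHOLD else "mitigate/transfer"
    print(f"{name}: expected exposure {exposure:,.0f} -> {decision}")
```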
75. Risk Management Tools
Tools and methods for estimating and controlling risk:
failure mode and effects analysis (FMEA)
hazard analysis and critical control points (HACCP)
critical to quality (CTQ) analysis
health hazard analysis (HHA)
76. Risk Management Tools
Failure Modes and Effects Analysis (FMEA) Tool
Failure Modes and Effects Analysis (FMEA) is a systematic, proactive method for
evaluating a process to identify where and how it might fail and to assess the relative
impact of different failures, in order to identify the parts of the process that are most in
need of change.
Failure modes and effects analysis (FMEA) is a step-by-step approach for identifying all
possible failures in a design, a manufacturing or assembly process, or a product or service.
FMEA includes review of the following:
Steps in the process
Failure modes (What could go wrong?)
Failure causes (Why would the failure happen?)
Failure effects (What would be the consequences of each failure?)
10
11
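FMEA reviews like the one above are commonly scored with a Risk Priority Number, RPN = severity x occurrence x detection, each rated 1 to 10; this scoring convention is standard FMEA practice, while the failure modes and ratings below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    step: str        # step in the process
    mode: str        # what could go wrong
    severity: int    # 1 (negligible) .. 10 (hazardous): consequence of the failure
    occurrence: int  # 1 (rare) .. 10 (frequent): likelihood of the cause
    detection: int   # 1 (certain detection) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        # Risk Priority Number: higher means a higher-priority candidate for change.
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("filling", "fill rate set incorrectly", 6, 4, 3),
    FailureMode("sealing", "seal incomplete", 8, 2, 5),
    FailureMode("labeling", "wrong label applied", 9, 1, 2),
]

# Rank the parts of the process most in need of change.
for fm in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{fm.step}: {fm.mode} (RPN {fm.rpn})")
```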
77. Risk Management Tools
When to Use FMEA?
When a process, product or service is being designed or redesigned, after quality
function deployment.
When an existing process, product or service is being applied in a new way.
Before developing control plans for a new or modified process.
When improvement goals are planned for an existing process, product or service.
When analyzing failures of an existing process, product or service.
Periodically throughout the life of the process, product or service
11
78. Risk Management Tools
Hazard analysis and critical control points or HACCP
A systematic, preventive approach to food safety that addresses biological, chemical, and physical
hazards in production processes that can cause the finished product to be unsafe, and
that designs measures to reduce these risks to a safe level. In this manner, HACCP is
referred to as the prevention of hazards, rather than finished-product inspection.
The HACCP approach focuses on preventing potential problems at the points critical to food
safety, known as critical control points (CCPs), by monitoring and controlling each step
of the process. HACCP applies science-based controls from raw materials to finished
product. It uses seven principles standardized by the Codex Alimentarius Commission. 12
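Monitoring a single critical control point can be sketched as checking each batch against a critical limit and triggering corrective action on deviation. The limit and readings here are illustrative, not Codex values.

```python
# One CCP: internal cooking temperature, checked per batch against a critical limit.
CRITICAL_LIMIT_C = 74.0              # illustrative minimum safe temperature
readings = [75.2, 74.8, 71.9, 76.0]  # monitored values for successive batches

for batch, temp in enumerate(readings, start=1):
    if temp < CRITICAL_LIMIT_C:
        # Deviation at the CCP: act now, rather than rely on end-product inspection.
        print(f"batch {batch}: {temp:.1f} C below limit -> hold and reprocess")
    else:
        print(f"batch {batch}: {temp:.1f} C within limit")
```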
80. Risk Management Tools
Benefits of HACCP
Although the main goal of HACCP is food protection, there are other benefits acquired
through HACCP implementation, such as:
Increase customer and consumer confidence
Maintain or increase market access
Improve control of production process
Reduce costs through reduction of product losses and rework
Increase focus and ownership of food safety
Business liability protection
Improve product quality and consistency
Simplify inspections primarily because of the recordkeeping and
documentation
Alignment with other management systems (ISO 22000)
12
81. Risk Management Tools
Critical to Quality
Critical to quality (CTQ) analysis is the process of identifying the quality features or
characteristics that matter to the customer and of locating the problems that affect them.
It analyzes the inputs and outputs of a process to find the paths that influence the
quality of the process outputs. CTQ characteristics may include physical measurements of height,
width, depth, and weight. Customer needs state what quality means but often lack the
specificity to be measurable; CTQ analysis translates them into measurable terms.
The CTQ flowchart helps identify the quality features of
the product from the customer's point of view, and with the outlook of categorizing the
problems.
CTQs (critical to quality) capture the characteristics of the service or product as
defined by both internal and external customers. They may include upper and
lower specification limits or any other factors related to the product or service. A
good CTQ analysis turns a valued customer's interpretation of quality into an actionable,
quantitative business specification. 13
82. Risk Management Tools
Steps to Implement and also create a CTQ Tree:
Determine the basic requirement of the customer: Initially, the sigma team identifies the
customers' basic requirement for the given product or service. Generally, this
basic requirement is stated in comprehensive terms in order to capture the
customer's need.
Identify the first level of customer requirements: Secondly, the sigma team identifies
two or three requirements that can satisfy the basic customer need identified in the initial
stage of the critical to quality tree. In a call-center example, one such requirement is that
phones are answered promptly by professionals.
Identify the customer's second tier of requirements: Thirdly, the sigma team again identifies
two or three requirements that can satisfy each requirement from the
second stage of the critical to quality tree. Continuing the example, this ensures that
professionals are available round-the-clock to respond to customers' queries.
13
83. Risk Management Tools
Stop when the requirements become quantifiable: The fourth step is reached when the
team arrives at requirements that can easily be measured.
Confirm final requirements with the customers: The last step applies when all the
needs on the critical to quality tree reach a standard level after due confirmation with the
customer.
Advantages of the CTQ tree:
It helps transform unspecific customer requirements into precise
requirements.
It aids sigma teams in detailing broader specifications.
It gives assurance that all characteristics of the requirements will be
fulfilled.
13
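The steps above can be sketched as a nested tree that drills from a broad customer need down to quantified requirements at the leaves. The need, measures, and targets below are invented for the sketch.

```python
# CTQ tree as nested dicts; leaves carry the measurable requirements.
ctq_tree = {
    "need": "responsive customer support",
    "drivers": [
        {"need": "calls answered quickly",
         "drivers": [{"measure": "answer within 30 seconds", "target": 0.95}]},
        {"need": "help available round-the-clock",
         "drivers": [{"measure": "24/7 phone-line coverage", "target": 1.0}]},
    ],
}

def measurable_leaves(node):
    """Walk the tree and collect the quantified requirements at the leaves."""
    if "measure" in node:
        return [node]
    return [leaf for child in node.get("drivers", []) for leaf in measurable_leaves(child)]

# The tree is complete when every branch ends in something measurable.
for leaf in measurable_leaves(ctq_tree):
    print(f"{leaf['measure']} (target {leaf['target']:.0%})")
```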
84. Risk Management Tools
Health Hazard Analysis (HHA)
The Health Hazard Analysis/Assessment is used to systematically identify and evaluate
health hazards, evaluate proposed hazardous materials, and propose measures to
eliminate or control these hazards through engineering design changes or protective
measures to reduce the risk to a level acceptable to the customer.
The HHA evaluation phase determines the quantities of potentially hazardous materials or
physical agents (e.g., noise, radiation, heat stress, cold stress) involved with the system,
analyzes how these materials or physical agents are used in the system, estimates where
and how personnel exposures may occur and if possible the degree or frequency of
exposure involved.
Materials are evaluated if, because of their physical, chemical, or biological characteristics;
quantity; or concentrations, they cause or contribute to adverse effects in organisms or
offspring, pose a substantial present or future danger to the environment, or result in damage
to or loss of equipment or property during the system's life cycle.
14
85. Risk Management Tools
The HHA Purpose:
Provide a design safety focus from the human health viewpoint.
Identify hazards directly affecting the human operator from a health standpoint.
86. Risk Management Tools
Steps on HHA Process
The first step of the HHA is to identify ergonomic hazards, quantities of potentially
hazardous materials, and exposure to physical agents (noise, radiation, heat stress, cold
stress) used with the system and its logistical support.
The next step is to analyze how these potential hazards are used in the system. Based on
this information, estimate occurrences of personnel exposures to include (if possible) the
degree or frequency of exposure.
The final step is to incorporate into the system design cost-effective controls to reduce
exposures to acceptable levels.
As the system design evolves, the HHA increases in fidelity and level of detail. Sources of data
for HHA include safety, test, and capabilities documentation, and lessons learned from legacy
systems. 15
88. Six Levels of Cognition Based on Bloom's Taxonomy
KNOWLEDGE - Student recalls or recognizes information, ideas, and principles in the approximate form in which they were learned. Sample verbs: write, list, label, name, state, define.
COMPREHENSION - Student translates, comprehends, or interprets information based on prior learning. Sample verbs: explain, summarize, paraphrase, describe, illustrate.
APPLICATION - Student selects, transfers, and uses data and principles to complete a problem or task with a minimum of direction. Sample verbs: use, compute, solve, demonstrate, apply, construct.
89. Six levels of Cognition Based on Bloom’s Taxonomy
ANALYSIS - Student distinguishes, classifies, and relates the assumptions, hypotheses, evidence, or structure of a statement or question. Sample verbs: analyze, categorize, compare, contrast, separate.
SYNTHESIS - Student originates, integrates, and combines ideas into a product, plan, or proposal that is new to him or her. Sample verbs: create, design, hypothesize, invent, develop.
EVALUATION - Student appraises, assesses, or critiques on the basis of specific standards and criteria. Sample verbs: judge, recommend, critique, justify.
47
91. References
15. http://216.54.19.111/~mountaintop/ssse/scopage_dir/ssse/ana.html
16. http://www.valuecreationgroup.com/process_variation.htm
17. http://en.wikipedia.org/wiki/Common_cause_and_special_cause_(statistics)
18. http://www.isixsigma.com/dictionary/common-cause-variation/
19. Shewhart, Walter A. (1931). Economic control of quality of manufactured product. New
York City: D. Van Nostrand Company, Inc. p. 7. OCLC 1045408.
20. Western Electric Company (1956). Introduction to Statistical Quality
Control handbook(1 ed.). Indianapolis, Indiana: Western Electric Co. pp. 23–
24. OCLC 33858387.
21. http://www.therevenuecyclenetwork.com/systemproblemsvsnonsystemproblems
22. Shewhart, Walter A. (1931). Economic control of quality of manufactured product. New
York City: D. Van Nostrand Company, Inc. p. 14. OCLC 1045408
23. Mark Graham Brown, Using the Right Metrics to Drive World-class Performance
24. Measuring Project Health Neville Turbit, 2008
25. Andy D. Neely, Business Performance Measurement: Theory and Practice
26. Mark Graham Brown, How to Interpret the Baldrige Criteria for Performance Excellence
27. http://4squareviews.com/2012/12/14/six-sigma-green-belt-process-performance-
metrics/
92. References (continued)
28. http://www.isixsigma.com/tools-templates/capability-indices-process-capability/cp-cpk-pp-
and-ppk-know-how-and-when-use-them/
29. http://www.isixsigma.com/tools-templates/capability-indices-process-capability/process-
capability-cp-cpk-and-process-performance-pp-ppk-what-difference/
30. http://elsmar.com/pdf_files/CPK.pdf
31. Grubbs, F. E. (February 1969), "Procedures for detecting outlying observations in
samples", Technometrics 11 (1): 1–21, doi:10.1080/00401706.1969.10490657: "An outlying
observation, or 'outlier,' is one that appears to deviate markedly from other members of
the sample in which it occurs."
32. Grubbs 1969, p. 1, stating "An outlying observation may be merely an extreme
manifestation of the random variability inherent in the data. ... On the other hand, an
outlying observation may be the result of gross deviation from prescribed experimental
procedure or an error in calculating or recording the numerical value."
33. http://pareonline.net/getvn.asp?v=9&n=6
34. http://people.uncw.edu/pricej/teaching/statistics/outliers.htm
35. Kreyszig, Erwin (2006). Advanced Engineering Mathematics, 9th Edition. p. 1248. ISBN 978-
0-471-48885-9.
36. http://www.sqconline.com/about-acceptance-sampling
94. THANKS!