Version 5.0 1
Integrated Calibration
Closing the gap between Production and Expectations
Project Presentation
February 2003
Chris Cushman
Brian McMaster
Marilyn Tom
Version 5.0 2
In any evaluation process the real question is:
How good is the data I am gathering? How certain can I be that
these results are representative of the population of customer
contacts, and how valid are the conclusions I can draw?
Let's remind ourselves of the QA data we gather – and where
these tests are targeted.
In Our Original Thoughts:
Improvement & Confidence in the new Data…
As well as the improved ability to act on it!
What We Learned:
Version 5.0 3
The Original Situation
•  The Quality Process was being realigned with Customer
CTQs (Critical to Quality requirements)
–  Need for more precise alignment to the Customer headset
•  Various 6 Sigma projects had identified the need for
calibration of the QA measurement systems:
–  Symptom Code Accuracy
–  Echo
–  Concern/Inquiry split, etc.
–  Case Base
Observation: there was a danger that too many calibration sessions would take
place, all with different agendas – duplicating effort and taking up valuable time.
Version 5.0 4
Opportunity
•  Integrate calibration sessions into one process
•  Leverage the true potential of the NICE system
–  Activate the calibration feature
•  Store selected calibration calls on-line
–  Have participants calibrate 10 calls at their desks, over 45 days
•  Listen to the call and review the screen selections of the CSR
–  Generate reports to summarize performance – and use them as
input for future calibration meetings and other performance
improvement initiatives
Shift the focus of calibration sessions from listening to and discussing a
single individual call → to listening to multiple calls and
diagnosing and learning via variation.
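To make the desk-level workflow concrete, here is a minimal sketch of how one evaluation of one stored calibration call could be represented; the field names and structure are illustrative assumptions, not the actual NICE data model.

```python
from dataclasses import dataclass, field

@dataclass
class CalibrationEvaluation:
    """One evaluator's scoring of one stored calibration call (hypothetical schema)."""
    call_id: str        # call selected from NICE and stored on-line
    evaluator_id: str   # participant performing the evaluation at their desk
    round_no: int       # 1 = first pass, 2 = repeat of the same calls within 45 days
    answers: dict = field(default_factory=dict)  # question id -> screen selection

# The same evaluator scores the same call in both rounds, which is
# exactly what makes repeatability measurable later.
first  = CalibrationEvaluation("call-01", "eval-A", 1, {"Q1": "Yes", "Q2": "No"})
second = CalibrationEvaluation("call-01", "eval-A", 2, {"Q1": "Yes", "Q2": "Yes"})
```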
Version 5.0 5
Hitting the target is not always good enough –
but what is better and how do we know?
[Target diagrams – the bull's-eye is the equivalent of the Standard:]
•  Hitting the target, little variation – is this good?
•  Hitting the target, a lot of variation – is this good?
•  You want to hit the target, with narrow variation, and be centered –
only then do you have a good process!
Version 5.0 6
The Old Process
•  Calls selected by Quality from NICE and recorded to tape
•  Single call played during each session – 2 sessions per month
•  Results discussed and coaching actions identified
•  Repeated with a different focus for various 6 Sigma projects
such as Symptom Code Accuracy, ECHO and Correct
Utilization of Case Base
Disadvantages:
•  No repeatability
•  Low number of calls (small sample size)
•  No measurable data output

The Improved Process
•  12 calls selected from NICE (2 Canadian)
–  Represent business needs and priorities
–  Utilize recorded screen inputs and sound
•  Calibration test is self-paced
•  Evaluators evaluate all calls, and results are recorded automatically
•  Test repeated within 45 days
–  New calls selected each period
•  Reports generated for discussion/action in group calibration
sessions, i.e. “the BIG meeting”
•  High level of confidence in QA process data
Advantages:
•  High repeatability and reproducibility
•  Quantifiable data – performance metrics
•  High degree of confidence in the data!
Version 5.0 7
From a High Level
[Flow diagram: the Population of Calls feeds Standard Setting, Integrated
Calibration Sessions/Calls, and Day-to-Day QA Evaluated Calls. Annotations:
determines measurement system integrity; determines process performance;
determines confidence level in the data collected. Reporting outputs: QA
‘Score’ reports, Process Control Reports and Control Reports (Gage R&R),
feeding Improvement Actions and Reaction Plans.]
Version 5.0 8
Why is this better?
•  Calibration sessions/calls would reflect current
priorities and emphasis
–  In sufficient quantity to be statistically significant
•  Data gathered would give a clear indicator of the required
group session focus
•  Better use of resources and time
•  Standards for each program or client can be
established using the same tools and process
Bottom line:
Increase the accuracy of all QA tools and measurements while
significantly decreasing the calibration resource commitment.
Version 5.0 9
Integrated Calibration Process Cycle
[Process cycle diagram. First Evaluation: each participating evaluator scores
calibration calls 1 through 10, one evaluation per call. Second Evaluation:
the same calls are repeated within 45 days, again one evaluation per call.
Reports (cumulative data over time, by Call Type, Group and Individual):
•  Match to Standard – % Match and % Repeatability
•  Match Within (kappa) – use only when data drastically trends outside
control limits]
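The “Match Within (kappa)” report measures agreement between two sets of ratings while correcting for agreement expected by chance. The deck does not spell out the computation, so the following is a minimal sketch of standard Cohen's kappa under that assumption.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two equal-length lists of ratings."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Agreement expected if both raters chose categories independently
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two evaluators scoring the same 10 calibration calls Pass/Fail
print(cohens_kappa(list("PPFPFPPFPP"), list("PFFPFPPPPP")))  # ≈ 0.47
```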
Version 5.0 10
Ø  Established process for creating a standard for a client/program
•  Defined the performance baseline = expectations
•  Outlined the process for getting there – SOP’s
•  Identified performance goals/guidelines
Ø  QA Calibration recommended (overall) standards
•  % Match to Standard (MTS)
–  Goal = 80%
•  % Repeatability
–  Goal = 90%
Ø  Results are broken down by Question by Section
and can be broken down by contact type
Implemented the Standards for Quality
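As a worked illustration of the two recommended metrics (the answer sets and question IDs below are invented for the example): % Match to Standard is the share of an evaluator's answers that agree with the Standard Setters' answers, and % Repeatability is the share that agree between the evaluator's first and second pass over the same calls.

```python
def pct_match(answers, reference):
    """Percentage of questions on which two answer sets agree."""
    questions = answers.keys() & reference.keys()
    return 100 * sum(answers[q] == reference[q] for q in questions) / len(questions)

standard = {"Q1": "Yes", "Q2": "No", "Q3": "Yes", "Q4": "Yes", "Q5": "No"}
round1   = {"Q1": "Yes", "Q2": "No", "Q3": "No",  "Q4": "Yes", "Q5": "No"}
round2   = {"Q1": "Yes", "Q2": "No", "Q3": "No",  "Q4": "No",  "Q5": "No"}

print(pct_match(round1, standard))  # 80.0 -> Match to Standard, meets the 80% goal
print(pct_match(round2, round1))    # 80.0 -> Repeatability, below the 90% goal
```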
Version 5.0 11
Some Initial Conclusions from the Dec/Jan Pilot
•  Establishing the Standard proved to be an important step:
even the managers needed to be aligned in terms of the
evaluation
–  There was a significant opportunity to involve clients in this process,
further differentiating the Percepta value proposition and managing
clients' expectations of CSS results
•  Results indicated that there needed to be a clear definition of
the evaluation standard for Evaluators (PDAs)
–  Needed to clearly state operational definitions and guidelines
–  Needed to clearly communicate this standard to the group
–  Data suggested the Evaluators' ‘standard’ was significantly
different from that of the ‘Standard Setters’
•  Due to the extent of variation and the number of ‘measurement
points’ (95), we recommended, and began developing, a database
to provide a more robust framework for analysis
Version 5.0 12
Post Pilot
Following the pilot, we completed two full rounds of
Integrated Calibration prior to the completion of the
database, with the following results:
Version 5.0 13
INTEGRATED CALIBRATION – GRAPHICAL SUMMARY
ALL RESULTS TO DATE
[Bar charts for Rounds 1 and 2: Match to Standard and Repeatability by
component (CSS, Compliance, Skill, Overall); the component values appear
in the table below.]
Results:
•  Meeting the minimum 70% Match to Standard in the Compliance component
•  The averages do not include Compliance Question #3 and Skill Question #11,
as these are also excluded from regular QA scores
ROUND 1 & 2 RESULTS
                MATCH TO STANDARD      REPEATABILITY
                Round 1    Round 2     Round 1    Round 2
CSS             59%        66%         73%        73%
Compliance      74%        70%         79%        78%
Skill           66%        69%         81%        78%
Evaluator A     70%        69%         81%        85%
Evaluator B     67%        72%         78%        81%
Evaluator C     60%        *           78%        *
Evaluator D     64%        62%         81%        75%
Evaluator E     69%        68%         78%        82%
Evaluator F     64%        70%         80%        77%
Evaluator G     62%        65%         74%        74%
Evaluator H     67%        70%         80%        78%
Evaluator I     64%        69%         77%        71%
Evaluator J     71%        69%         76%        72%
Evaluator K     64%        68%         78%        72%
Evaluator L     68%        69%         71%        74%
Evaluator M     65%        64%         74%        74%
Evaluator N     70%        70%         80%        82%
Version 5.0 14
STATISTICAL SIGNIFICANCE
Results
•  Within Match to Standard we observed a statistically significant
improvement in both the CSS and Skill portions of the QA Form
•  No statistically significant movement in Repeatability
MATCH TO STANDARD
Component     Round 1 Score   Round 2 Score   Change   Statistically Significant?
CSS           59%             66%             +7%      Y
Compliance    74%             70%             -4%      Y
Skill         66%             69%             +3%      Y

REPEATABILITY
Component     Round 1 Score   Round 2 Score   Change   Statistically Significant?
CSS           73%             73%             0%       N
Compliance    79%             78%             -1%      N
Skill         81%             78%             -3%      N
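The deck does not state which significance test was used; a common choice for comparing two match percentages is a two-proportion z-test, sketched below. The per-round sample size is an assumption (roughly 95 measurement points x 10 calls); the real n would come from the calibration database.

```python
from math import erf, sqrt

def two_proportion_z(p1, p2, n1, n2):
    """Two-sided p-value for the difference between two observed proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal CDF via erf
    return z, p_value

# CSS Match to Standard moved 59% -> 66% between rounds
z, p = two_proportion_z(0.59, 0.66, 950, 950)
print(f"z = {z:.2f}, p = {p:.4f}")  # z = 3.15, p ≈ 0.0016 -> significant, as tabled
```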
Version 5.0 15
Calibration Reporting Benefits
•  Ability to identify evaluation skill opportunities per group and per individual evaluator
–  By section
–  By question
–  By call type
–  By group (TL vs. QA)
–  By program (Concern, Inquiry, FOC)
•  Ability to validate the success of coaching and training responses in improving evaluation skill
•  Ability to identify inconsistencies in the evaluation and call handling processes
•  Consistent/accurate method of assessing evaluation skill
–  The QA evaluation skill and consistency KPI measurement tool currently mirrors the
integrated calibration process
•  Effective tool for developing and documenting a unified standard of call handling
–  Strengthens partnerships (Client, Training, Operations, QA)
–  Incorporates client perspective/requirements into the evaluation process
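To illustrate the kind of breakdowns this reporting enables (the record layout is an assumption for the example, not the actual database schema), a match rate can be grouped by any field once each scored answer is a record:

```python
from collections import defaultdict

# Hypothetical records: (evaluator, section, call type, matched the Standard?)
records = [
    ("eval-A", "CSS",        "Concern", True),
    ("eval-A", "Compliance", "Inquiry", False),
    ("eval-B", "CSS",        "Concern", True),
    ("eval-B", "Skill",      "FOC",     True),
]

def match_rate_by(records, column):
    """% Match to Standard grouped by one record column (section, call type, ...)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for rec in records:
        totals[rec[column]] += 1
        hits[rec[column]] += rec[3]  # True counts as 1
    return {key: 100 * hits[key] / totals[key] for key in totals}

print(match_rate_by(records, column=1))  # by section
print(match_rate_by(records, column=2))  # by call type
```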
Version 5.0 16
Q. What have we Gained?
A. Greater Efficiencies!
“Clearly, while the results show an increase in time and
dedication to our centre's calibration effort, we have
become more efficient with each evaluation calibrated. In
return we have gained confidence in quality results and
have been able to focus strategically on the key areas of
opportunity.”
“We started with the concept of Integrating several different
Calibration efforts and we ended up Integrating the entire
QA effort from CSR to Client”
The following are some of the results:
Version 5.0 17
Average Cost / Time Per Evaluation – Old Method
*All dollar figures are in Canadian; Non-billable represents Percepta's overhead

Old Method of Calibration
Participant   No.#   Hrs. per month   Total hrs./month   Rate     Billable?   Billable      Non-billable*   Combined
TL            33     4                132                $44.71   Y           $5,901.72     -               $5,901.72
PM            5      4                20                 $62.60   Y           $1,252.00     -               $1,252.00
QA            11     4                44                 $33.62   N           -             $1,479.28       $1,479.28
Total Costs (Monthly)                                                         $7,153.72     $1,479.28       $8,633.00
Total Costs (Yearly)                                                          $85,844.64    $17,751.36      $103,596.00

No.# calls per Mth.: 4     No.# calls per Yr.: 48     Total Evaluations: 2,352
Ave. Resource Cost per Eval. (Old), combined billable/overhead: $44.05
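A minimal sketch reproducing the old-method arithmetic in the table above; headcounts, hours and rates come straight from the slide, while the rounding convention is assumed.

```python
# role: (headcount, hours per month, hourly rate in CAD)
old_roles = {"TL": (33, 4, 44.71), "PM": (5, 4, 62.60), "QA": (11, 4, 33.62)}

monthly = sum(n * hrs * rate for n, hrs, rate in old_roles.values())
yearly = monthly * 12
evals = 48 * sum(n for n, _, _ in old_roles.values())  # 48 calls/yr x 49 participants

print(f"${monthly:,.2f}/month, ${yearly:,.2f}/year")         # $8,633.00, $103,596.00
print(f"{evals} evaluations -> ${yearly / evals:.2f} each")  # 2352 -> $44.05
```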
Version 5.0 18
Average Cost Per Evaluation – New Method
*All dollar figures are in Canadian; Non-billable represents Percepta's overhead
Figures have been adjusted for the 45-day cycle vs. the 30-day cycle of the old method.

New Method of Calibration
Participant   No.#   Hrs. per month   Total hrs./month   Rate     Billable?   Billable     Non-billable*   Combined
TL            33     1.33             43.89              $44.71   Y           $1,962.32    -               $1,962.32
PM            5      1.33             6.65               $62.60   Y           $416.29      -               $416.29
PDA           15     6.67             100.05             $33.62   N           -            $3,363.68       $3,363.68
Total Costs (Monthly)                                                         $2,378.61    $3,363.68       $5,742.29
Total Costs (Yearly)                                                          $28,543.34   $40,364.17      $68,907.51

              No.# calls per Mth.   No.# calls per Yr.   Total Evaluations
PDA           13.33                 159.96               2,399.4
TL            3.33                  39.96                1,318.68
PM            3.33                  39.96                199.8
All                                                      3,917.88

Ave. Resource Cost per Eval. (New), combined billable/overhead: $17.59
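The same sketch applied to the new method; the fractional hours and call counts are the slide's 45-day-to-monthly conversions, and small rounding differences against the slide's totals are expected.

```python
# role: (headcount, hours per month, hourly rate in CAD)
new_roles = {"TL": (33, 1.33, 44.71), "PM": (5, 1.33, 62.60), "PDA": (15, 6.67, 33.62)}

yearly = 12 * sum(n * hrs * rate for n, hrs, rate in new_roles.values())
evals = 159.96 * 15 + 39.96 * 33 + 39.96 * 5  # PDA + TL + PM evaluations per year

print(f"${yearly:,.2f}/year for {evals:,.2f} evaluations")  # ≈ $68,907 for 3,917.88
print(f"-> ${yearly / evals:.2f} per evaluation")           # ≈ $17.59
```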
Version 5.0 19
Efficiencies – New Method

Cost Avoidance per Evaluation
•  Cost per Evaluation using Old Method: $44.05
•  Cost Avoidance per Evaluation: $26.46
•  Reduced Cost per Evaluation: 60.07%

Percentage Increase in Sample Size
•  Number of Evaluations (old method): 2,352
•  Number of Evaluations (new method): 3,917.88
•  Percentage Increase: 66.58%

Cost Avoidance Efficiencies (Annualized)
•  Total Evaluations Using New Method: 3,917.88
•  Total cost (new sample size at the old per-evaluation rate): $172,566.62
•  Total cost of Calibration (old): $103,596.00
•  Annual Cost Avoidance: $68,970.62

Assumptions…
•  Based on the increased sample size using the New Method
•  Greater efficiency obtained per evaluation

NOTE
•  Not only has the sample size increased, but it is also more
representative of the types of work we perform at the Centre!
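The efficiency figures above follow from the two cost models; the only interpretive step (an assumption that reproduces the slide's numbers) is pricing the new sample size at the old per-evaluation rate to get the $172,566.62 "total cost" line.

```python
old_rate = 103_596.00 / 2_352      # old method: $44.05 per evaluation
new_rate = 68_907.51 / 3_917.88    # new method: $17.59 per evaluation

print(f"Cost avoidance per evaluation: ${old_rate - new_rate:.2f}")       # $26.46
print(f"Reduced cost per evaluation: {1 - new_rate / old_rate:.2%}")      # 60.07%
print(f"Increase in sample size: {3_917.88 / 2_352 - 1:.2%}")             # 66.58%
print(f"Annual cost avoidance: ${3_917.88 * old_rate - 103_596.00:,.2f}") # $68,970.62
```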
Version 5.0 20
What Has Been Accomplished
•  Call Selection Process: calls chosen as a cross-section
representation of actual calls to the CRC (multiple disciplines)
•  Establishment of Standards: input from quality assurance,
operations and subject matter experts, as well as clients
•  Self-Paced Calibration: a relaxed environment for the regular
participants (PDAs) that fits calibration into “their” busy
schedules, using their time efficiently
•  Measurement System: creation of a database and reporting
system to easily identify accomplishments and opportunities
•  Expandable: the process can easily accommodate additional contact
types/work in the future
•  Match to Standard (MTS): used for the internal QA Audit
Process
Version 5.0 21
What Has Been Gained
•  Alignment: total Center alignment, including Management,
Team Leaders, QA and the Client
•  Confidence: introduced the concepts of repeatability and
reproducibility into the data obtained
•  Efficiency: increased sample size using fewer resources and less
time per evaluated call
•  Automated: input from PDA calibrators is no longer a manual
process
•  Strategic: data gleaned from calibration is of substantial quality
and volume, resulting in more actionable and directional
opportunities
•  Increased Awareness: integrated the Client into the process
•  Customizable: can cater to standards based on client needs
Version 5.0 22
What Needs To Be Done
•  Transition: handing over the Control Plan to the Process Owners:
Quality Assurance (with this meeting)
•  Incorporation: can be incorporated into the Siebel Launch (to
be handled by the process owner)
•  Goals: long-term MSA goals – which goals remain relevant?
Long-term changes to SOPs by the process owner
•  Additional Reporting: incorporation of more specific
Symptom Code, Echo (Tool Usage) and Case Base Usage Reports, as
well as grouping by Contact Type (Concern/Inquiry) or Call Type
(Financial Assistance, Recall, etc.)
