Know Which Key X’s to Analyze (Initial Filtering from Process Mapping)
With multiple X’s that may lead to high variability, we want
to focus on the noise/uncontrolled type variables first
Process variation due to:
Similar type variables:
Differences in variables over time:
Week to Week
Month to Month
Quarter to Quarter
“Discrete” variables / “Continuous” variables
Another look at Noise Variables
For Discrete Input Variables
Test for Variability within a piece
Example: Four procedures per patient visit
Test for Variability within a batch
Example: Variability across procedures by physician
Test for Variability across batches
Example: Variability across procedures within a month
For Continuous Input Variables
Test for Variability within a time span
Example: Ten insurance coverage records per day
Test for Variability across short time spans
Example: Variability across weeks
Test for Variability across longer time spans
Example: Variability across months, quarters or longer
A Multi Vari Approach
Determine if the variables are continuous or discrete
Gather data and study key inputs impacting output (X vs. Y)
Look at the X’s and consider which are causing variability in the process output
Go back and look at X’s again – missing any key inputs to study?
Look for curves, groupings and patterns in continuous data sets
Can use the same approach whether output (Y) is continuous or discrete
Choose and apply appropriate analysis tool
A Multi Vari Approach (cont…)
Study controlled and uncontrolled (noise) inputs, but…
Focus on uncontrolled inputs first:
Variation in the Noise variables can produce dramatic mean shifts and changes in variability that lead to process instability
These sources of variation must be attacked first before leveraging the important controlled input variables in a systematic way
Identify similar processes and study variability differences:
Insurance Carrier to Insurance Carrier
Clinic Department to Clinic Department
Location to Location (Wisconsin, Minnesota, Iowa)
Coder to Coder; Insurance follow-up staffer to Insurance follow-up staffer
Differences in process variation over time
Week to Week
Month to Month
Complete Multi-Vari studies to identify potential key inputs
Review Data and Prioritize Key Input Variables
Main Effects Plots
T-tests comparing two groups
Multi-vari charts can be used to investigate relationships among variables. We will review how some of these tools were used for three projects within Gundersen Lutheran Health System involving discrete input variables.
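The “t-test comparing two groups” listed among these tools can be sketched numerically. This is a minimal, hypothetical example (the data and group names are made up, not Gundersen’s), computing Welch’s t statistic by hand with the Python standard library:

```python
# Welch's two-sample t statistic on hypothetical cycle-time samples.
import math
import statistics

group_a = [12.1, 11.8, 12.5, 13.0, 11.9, 12.4, 12.2, 12.8]  # hypothetical cycle times
group_b = [13.9, 14.2, 13.5, 14.8, 14.1, 13.7, 14.4, 14.0]  # hypothetical cycle times

mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
n_a, n_b = len(group_a), len(group_b)

# t = difference in means divided by the combined standard error
t_stat = (mean_a - mean_b) / math.sqrt(var_a / n_a + var_b / n_b)
print(round(t_stat, 2))
```

A |t| this far beyond typical critical values (roughly 2 at alpha = .05 for these sample sizes) would lead a package like Minitab to report a very small p-value, i.e. the group means differ.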
Analysis of Variance (ANOVA) and Hypothesis Testing
ANOVA studies are used to perform statistical tests for comparing groups
Performed to answer your hypothesis (Are insurance claim denials marked “patient responsibility” always valid?):
Assumption: All PR (patient responsibility) denial codes generated by the Insurance Carrier require no investigation and can be transferred directly to the patient
Null Hypothesis = PR denials are the same
Alternative Hypothesis = PR denials are not the same
The statistical test will generate a probability (p) value for your hypothesis which is based on the assumption there is no difference.
Guideline: If P value < .05 this indicates there is a difference
We will look at how a study was performed for the above scenario, but let’s review hypothesis testing further
Analysis of Variance Studies (ANOVA)
For Discrete inputs (x), Continuous outputs (y)
Hypothesis Testing Concepts Enable You To…
Properly handle uncertainty
Prevent the omission of important information
Manage the risk of decision errors
Key Terms
Ho: Null Hypothesis
Ha: Alternative Hypothesis
P Value = Probability Value
Hypothesis Testing – What is it for Statisticians?
Ho: Mean Group A = Mean Group B; Ha: Mean Group A ≠ Mean Group B
Ho: Slope of the line is 0; Ha: Slope of the line is not 0
Ho: Variance Group A = Variance Group B; Ha: Variance Group A ≠ Variance Group B
Ho: Variable X is independent of Variable Y; Ha: Variable X is not independent of Variable Y
Hypothesis Testing – What is it for the Average Person?
Ho: Coding doesn’t matter for insurance claim reimbursement; Ha: Coding does matter for insurance claim reimbursement
Ho: Procedure X Avg. Cycle Time = Procedure Y Avg. Cycle Time; Ha: Procedure X Avg. Cycle Time ≠ Procedure Y Avg. Cycle Time
For one of your projects: Ho = What is the Null Hypothesis? Ha = What is the Alternative Hypothesis?
Fundamentals of Hypothesis Testing
Based on what we know, we form a hypothesis to explain something that we don’t know
Generally, this hypothesis takes the form of: Y=f(x1,x2...xk)
We gather data and devise a test to evaluate the hypothesis testing the effect of the x’s on Y
We assume that the null hypothesis is true
We then look for compelling evidence to reject this hypothesis
If we reject the null hypothesis, then we accept the alternative hypothesis
If we fail to reject the null hypothesis, then we have insufficient evidence to accept the alternative hypothesis
Hypothesis and Decision Risk
State a “Null Hypothesis” (Ho)
Gather evidence (a sample of reality)
DECIDE: What does the evidence suggest? Reject Ho? or Fail to Reject Ho?
Faced with two risks of making a wrong decision:
Type 1 Error = Alpha Risk; example: Not sending a denial balance to the patient when you could
Type 2 Error = Beta Risk; example: Sending a denial balance to the patient when you shouldn’t
Fire Alarm Decision
A Practical Hypothesis Test (State of Reality vs. Decision)
No Fire, No Alarm: Correct Decision – Confidence (1 - alpha probability)
No Fire, Alarm Sounds: Type I Error (alpha probability)
Fire, No Alarm: Type II Error (Beta probability)
Fire, Alarm Sounds: Correct Decision – Power (1 - Beta probability)
How a Hypothesis Test Works – 2 Possible States of Reality
Either we have No Fire, or a Fire Exists.
Imagine that the smoke detector has a specified set point in terms of particles/cc
Null Hypothesis True (No Fire): a random distribution of particle counts in normal air. The area to the right of the detector set point is the probability of committing an alpha error; the area to the left is the confidence (1 - alpha).
Null Hypothesis False (Fire Exists): a distribution of particle counts in smoke-filled air. The area to the right of the set point is the power of the test (1 - Beta); the area to the left is the probability of committing a beta error.
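The smoke-detector picture above can be put into numbers. The particle-count means, sigmas, and set point below are invented for illustration; the point is that alpha and beta are just tail areas of the two distributions on either side of the trigger:

```python
# Alpha and beta risks as tail areas around a detector set point
# (all distribution parameters here are hypothetical).
from statistics import NormalDist

normal_air = NormalDist(mu=50, sigma=10)   # particles/cc when there is no fire
smoky_air = NormalDist(mu=120, sigma=20)   # particles/cc when a fire exists
set_point = 80                             # detector trigger threshold

alpha = 1 - normal_air.cdf(set_point)  # false alarm: area right of trigger, "no fire" curve
beta = smoky_air.cdf(set_point)        # missed fire: area left of trigger, "fire" curve
confidence = 1 - alpha
power = 1 - beta
print(round(alpha, 4), round(beta, 4))
```

Moving the set point right shrinks alpha but grows beta, and vice versa, which is exactly the trade-off the slide describes.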
After data is collected, statistical scores can be calculated – In Microsoft Excel or statistical software such as Minitab
Ho is Rejected
Ho is Not Rejected
If P is low, then Ho must go! A probability (P) value is one statistic calculated to help determine if the null hypothesis is true or false
Hypothesis Test Statements
A) If p is low (less than or equal to alpha), reject Ho and make the statement: “I am (1-alpha) sure Ha is true”
B) If p is not low (greater than alpha), fail to reject Ho and make the statement: “I have insufficient evidence to demonstrate Ha is true”
How Low Must P Be? It Depends. For most cases use .05
We would like there to be less than a 10% chance that these observations could have occurred randomly (alpha = .10)
Five percent is much more comfortable (alpha = .05)
One percent feels very good (alpha = .01)
This alpha level is based on our assumption of “no difference” and a reference distribution of some sort
But, it depends on interests and consequences
Proving the Null Hypothesis
Minnesota Senate Election Results
Ho: Al Franken = Norm Coleman
Ha: Al Franken ≠ Norm Coleman
alpha = 0.05 (5% risk factor)
If the vote in Minnesota allows rejection of Ho, they project a winner.
If the vote does not reject Ho, they say...
Too close to call !
Hypothesis Tests - Healthcare insurance denial type
One-Sample Test: Are Patient Responsibility denial balances within $100 or less?
Null Hypothesis (Ho): Patient Responsibility denials are less than $100, and therefore all PR denials received from Insurance Carriers should be auto billed to the Patient
Alternate Hypothesis (Ha): Patient Responsibility denials are not less than $100, and therefore PR denials received from Insurance Carriers should not be auto billed to the Patient
Minitab Output – Ho: PR Denial = 100; Ha: PR Denial ≠ 100
Should we auto bill the patient for all Patient Responsibility Denials? A P-Value!
One-Sample T: Patient Resp Balance
Test of mu = 100 vs not = 100
Variable  N     Mean   StDev  SE Mean  95% CI              T      P
Balance   1828  277.4  509.4  11.9     (254.064, 300.803)  14.89  0.000
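The Minitab output above can be sanity-checked by hand from the summary statistics it reports (n = 1828, mean = 277.4, stdev = 509.4, test value mu = 100), using nothing but the one-sample t formula:

```python
# Recompute the SE Mean and T columns of the Minitab one-sample t output.
import math

n, mean, stdev, mu0 = 1828, 277.4, 509.4, 100.0

se_mean = stdev / math.sqrt(n)    # standard error of the mean
t_stat = (mean - mu0) / se_mean   # one-sample t statistic vs. mu = 100
print(round(se_mean, 1), round(t_stat, 2))
```

Both values match Minitab’s SE Mean of 11.9 and T of 14.89; with 1827 degrees of freedom a t this large yields the reported P of 0.000.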
Other Tests / Charts will Confirm the Null Hypothesis is False – Ho: PR Denial = 100; Ha: PR Denial ≠ 100. The $100 target to auto bill the patient is not within the confidence interval range
But… the data for our Denials study is non-normal. A rule of thumb with hypothesis testing is to understand whether the data under study come from a normal or non-normal distribution. Once again, a small P-Value (<.05) indicates that the null hypothesis, in this case that the data follow a normal distribution, is false.
Therefore try a 1-Sample Non-Normal Data Test (to test medians vs. means) – Ho: PR Denial Median = 100; Ha: PR Denial Median ≠ 100. If P is low, reject Ho.
Wilcoxon Signed Rank Test: PR Balance
Test of median = 100 versus median not = 100
Variable  N     N for Test  Wilcoxon Statistic  P      Estimated Median
Balance   1828  1826        1243988.5           0.000  188.1
Test for Mean/Median:
Normal data: 1-Sample T-Test, or 1-Sample Z-Test if n > 25 (Example: Ho: mu = 100)
Non-Normal data: Transform to normal and use a Z-Test, or use non-parametric tests such as the 1-Sample Wilcoxon Signed-Rank (Example: Ho: Median = 100)
If P Value < .05, the true mean (or median) does not equal the specified value
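To show what the Wilcoxon signed-rank test actually computes, here is a hand-rolled sketch on a small hypothetical sample of PR balances (the study’s real N = 1828 output is above). Values are chosen with no ties or exact-100 balances, so the tie-handling corrections a real package applies are not needed:

```python
# Wilcoxon signed-rank sums for testing median = 100 on hypothetical balances.
balances = [188, 95, 240, 130, 310, 88, 150, 205, 99, 175]  # hypothetical PR balances
median0 = 100

# Signed differences from the hypothesized median (drop exact matches)
diffs = [b - median0 for b in balances if b != median0]

# Rank the differences by absolute size, smallest first (no ties in this sample)
ranked = sorted(diffs, key=abs)
w_plus = sum(rank for rank, d in enumerate(ranked, start=1) if d > 0)
w_minus = sum(rank for rank, d in enumerate(ranked, start=1) if d < 0)
print(w_plus, w_minus)
```

If the median really were 100, positive and negative rank sums would be roughly balanced; a lopsided split like this one is what drives the test’s p-value toward zero.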
Another PR denial scenario = Another study! Why are we seeing PR type denials > $100?
How do the 3 top Patient Responsibility denial reasons compare?
Null Hypothesis (Ho): The Patient Responsibility denial reasons (1) insurance coverage/record error; (2) secondary claim not received by Carrier; (3) denial is patient responsibility, all have the same impact.
Alternate Hypothesis (Ha): The Patient Responsibility denial reasons have different impact.
Test for Equal Variances
Are the variances for the 3 Patient Responsibility denial reasons the same or are they different?
Many statistical procedures, including analysis of variance, assume that although different samples may come from populations with different means, they have the same variance
This is a (usually) buried assumption of an Analysis of Variance. Performing this test will prevent you from making incorrect conclusions in certain circumstances.
In Minitab select: STAT > ANOVA > TEST FOR EQUAL VARIANCES
Groups compared: “No Claim” (secondary claim not received by Carrier), Coverage Error, Bill Patient
Minitab reports two tests: Bartlett’s (B) for normal data and Levene’s (L) for non-normal data
Other Graphical Representations
Let’s also look at the Main Effects Plot and the Interval Plot
These two plots provide different graphical representation of the differences between the three factors
Main Effects provides just the means
Interval provides means and different views of the confidence of those means
Let’s take a look at each...
Main Effects Plot
The main effects plot highlights that higher PR denial balances result from a secondary claim not being received by the carrier. Root cause investigation required with primary and secondary carriers.
Interval Plots
An interval plot highlights the mean measurement and variability of the data. The confidence interval gives 95% certainty that it contains the true value of the population mean. Root cause investigation required with primary and secondary carriers.
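The 95% interval an interval plot draws can be reproduced from the earlier Minitab summary statistics (n = 1828, mean = 277.4, stdev = 509.4). With ~1827 degrees of freedom the t quantile is essentially the normal 1.96, so the standard library suffices:

```python
# 95% confidence interval for the mean PR denial balance from summary stats.
import math
from statistics import NormalDist

n, mean, stdev = 1828, 277.4, 509.4
se = stdev / math.sqrt(n)
z = NormalDist().inv_cdf(0.975)  # ~1.96 for a two-sided 95% interval
ci_low, ci_high = mean - z * se, mean + z * se
print(round(ci_low, 1), round(ci_high, 1))
```

This matches Minitab’s (254.064, 300.803) to within rounding, and the $100 auto-bill target falls well below the interval, which is why the null hypothesis was rejected.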
The Boxplot is another graph/method for looking at the data that may make it easier to see differences in the distributions
Boxplots show the spread (variability) and center of the data
Quartiles rank-order the data from lowest to highest value: the 1st Quartile ends at 25%, the 2nd at 50% (the Median, not the Mean), the 3rd at 75%, and the 4th at 100%, not including any outliers (outliers are plotted as asterisks beyond the whiskers).
Boxplot Outliers
A boxplot outlier is any data value that exceeds either the Upper Limit or Lower Limit as calculated below:
UL = 3rd Quartile + 1.5 x (3rd Quartile - 1st Quartile)
LL = 1st Quartile - 1.5 x (3rd Quartile - 1st Quartile)
Descriptive Statistics: Balance
Variable  N     Mean    Median  StDev   SE Mean
Balance   1507  56.992  55.9    30.798  0.793
Variable  Minimum  Maximum  Q1      Q3
Balance   17.010   257.000  30.000  86.200
UL = 86.200 + 1.5 x (86.200 - 30.000) = 170.5; any greater value is an outlier
LL = 30.000 - 1.5 x (86.200 - 30.000) = -54.3; any smaller value is an outlier
Compare to the Maximum and Minimum values to see if outliers exist. The maximum of 257.000 is greater than the UL of 170.5, which is why an outlier is identified at the top of the previous slide.
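The 1.5 x IQR fence rule above is a two-line calculation; here it is applied to the Q1/Q3 values from the descriptive statistics shown:

```python
# Boxplot outlier fences via the standard 1.5 x IQR rule.
def outlier_fences(q1, q3):
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr  # (lower limit, upper limit)

ll, ul = outlier_fences(30.000, 86.200)
print(round(ll, 1), round(ul, 1))
# The maximum of 257.000 exceeds UL, so it is flagged as an outlier;
# the minimum of 17.010 sits above LL, so there are no low-side outliers.
```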
Prior analysis revealed “No Claim” denial reason was a key variability driver
No Claim = primary carrier partial payment received but balance did not transmit to secondary carrier
This was a surprise to the Billing & Insurance follow-up analysts, leading to investigation of secondary carriers with “No Activity Since Filing” denial types
Back to the Top Patient Responsibility Denial Reason
Investigation & Action: Medicare secondary claims not being received by Medicaid; provider identifier and taxonomy code issues. Action involved manual rebilling of claims to Medicaid.
Can any of this type of analysis be applied in your areas?
Where can you use Hypothesis testing in your job or project?
Determine plan before Analysis and Execution
Multi-vari pre-planning provides for:
Statement of Objective
List of Key Process Input Variables (KPIV’s) and Key Process Output Variables (KPOV’s) to be studied
Ensure Measurement Systems are capable
Sampling plan approach
Method of data collection
Team member involvement
Clear responsibilities assigned
Outline of data analysis to be performed
Key Multi-Vari Analysis and Execution Steps
1 . Collect data
2. Analyze data:
Is the process stable, in control?
Which are the key noise variables affecting the output variable?
Which are the key controlled variables that influence the output variable?
3. Investigate root cause and develop action plan
4. Implement improvement actions
5. Measure progress
6. Identify & prioritize key variables for Control Plan
The purpose of this project is to focus on reducing the number and amount of insurance carrier denials for claims submitted by Gundersen Lutheran Clinic for reimbursement.
Objective is twofold:
Reduce the incoming rate of “new” claim denials
Reduce the backlog of unresolved denials
Project Justification and Benefits :
The number and amount of Insurance claims denials have increased by more than 70% during 2007. Baseline dollar amount as of March, 2008 = $23 Million
Insurance claim denials have a bottom-line financial impact, with unresolved denials turning into write-offs
Team Member Involvement and Data Gathering Approach
Team agreement to focus on gathering data to help
answer some key questions:
What are the sources of denials?
Which insurance carriers?
Which clinic departments?
What is the total impact of denials on Accounts Receivable?
What is the denial rate (both incoming and backlog)?
How much AR is tied up in denied accounts?
How much cash/margin is lost due to denial write-offs?
What is the resolution rate on denied accounts?
How quickly are denials resolved?
Which denial types are easily resolved?
Initial Data Challenges
Historical data unreliable – lack of tracking
Difficult to identify denials which were
Resolved by resolution efforts versus
No measurement system in place to track new denials
Poor categorization of denial reason codes
But practically it was clear that insurance claim denials activity was impacting business performance
Gundersen Lutheran Clinic: Annual Gross Revenue $800 M Accounts Receivable $120 M
Pareto Chart
The initial attempt at denials prioritization indicated the data source was adequate but needed refining. Had to spend time mapping multiple denial codes (“remark codes”) to a single denial reason in order to properly identify the “20% of the problems causing 80% of the denial performance.”
American National Standards Institute (ANSI) Claim Adjustment Reason Codes: over 250 different industry denial codes, plus other types used by Insurance Carriers.
Denials Data Source Refinements
Once properly capturing relevant data, began to Pareto top denial reasons by:
Insurance Carrier Types
Insurance Carriers
Departments
Physicians
Procedure Types
Billing & Insurance Follow-up Staff
Mapping of Multiple Denial Codes Denial Reason = Coding Error
Same Approach for:
Provider Billing # missing
Lack of Prior Authorization/Pre-certification
Categorizing & mapping the denial code data helped prioritize by Denial Dollars…
…and by Denial Counts
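The Pareto prioritization described here amounts to sorting mapped denial reasons by count (or dollars) and accumulating percentages until the vital few emerge. A sketch with hypothetical counts (the reasons echo the deck; the numbers are invented):

```python
# Pareto table: denial reasons sorted by count with cumulative percentage.
denials = {  # hypothetical counts per mapped denial reason
    "Coding error": 520,
    "No provider billing #": 340,
    "No prior authorization": 180,
    "Coverage/record error": 90,
    "Other": 70,
}

total = sum(denials.values())
cumulative = 0
rows = []
for reason, count in sorted(denials.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    rows.append((reason, count, round(100 * cumulative / total, 1)))

for reason, count, cum_pct in rows:
    print(f"{reason:25s} {count:5d} {cum_pct:6.1f}%")
```

In this made-up table the top two reasons already cover roughly 70% of denials, which is the “20% of problems causing 80% of performance” cut the project was after.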
Denials Backlog Problem
Investigation revealed high dollar balances for primary (Commercial) carriers, with lower count volume
Lower dollar balances for secondary (Government) carriers, but significant DAILY count volume
But can’t ignore the lower dollar balances
Denials Backlog Study
Denials backlog defined for Gundersen Lutheran
Claims partially paid or fully rejected by Insurance Carrier
Claims with No Activity Since Filing (no response from the Insurance Carrier within 45 days of claim filing)
Denials requiring resolution by Billing & Insurance Follow-up staff
Carrier resolution (phone call, written appeal, rebills)
Transfer to patient responsibility
Denial is a valid write-off
For several reasons the backlog grew to unmanageable levels (process change did not follow organization / system changes)
Denials Backlog Study
An initial study looked at denials volume by staff “worklists”
Notice any differences?
Number of denials by analyst
Hours spent per day on worklist
Supervisor or staff list
Insurance Carrier Type
Claims requiring coding
Billing number needed for Provider
Denials Backlog Study – Incoming Denial Dollars
Then needed to understand the incoming volume of new denials; top incoming-dollar reasons included Coding and Provider Billing #
Denials Backlog Study – Incoming Denial Counts
Investigation revealed small dollar balances were filtered to a general supervisor worklist, which quickly accumulated secondary claims and/or small balances not economical to pursue
Transitioned responsibility from Credentialing to Billing group
Streamlined front end requirements to gather necessary provider billing documentation from Human Resources & Clinic Departments
Realigned Coding staff responsibility for denials resolution
Action plans developed with top Commercial and Government carriers
Significant gap identified with cross over claims from Primary to Secondary carrier (Medicare to Medicaid system edit failure)
Top Commercial carrier transmitting high volume of general denial codes (Claim lacks information)
Implemented new measurement system for Denials activity tracking
Denials Management Tracking Model There was no baseline or tracking of incoming denials activity
Denial Reason: No Provider Billing Number Prioritization – Step 1
Denial Reason: No Provider Billing Number Prioritization – Step 2
Provider Billing Number Issue
To implement process improvements for obtaining provider billing #’s, had to understand the various reasons. Which reason is the primary cause for this type of denial?
Billing # expired and needs renewal
Billing # not obtained when hiring a new provider
Additional billing # not obtained when a provider goes to a new location
Provider Billing Number
Can the root cause of not having a provider billing # in place be due to different Insurance Carriers? Different Physicians?
Provider Billing Number Learnings
Emphasized need for improving front end processes versus all work on back-end resolution
Focusing on other upstream processes which link to a denial
Registration, insurance set-up errors
Prior Authorization / Pre-certification
Physician dictation and billing packet documentation
Coding of claims
Progress in 2008 (project start April, 2008)
Continue Control Plan monitoring in 2009 and Phase 2 actions
The purpose of this project is to improve the process of billing for services rendered and properly capturing charges for nursing home type visits reimbursable by Medicare
Project Justification and Benefits :
The amount of unbilled services has grown to over $1 Million
An undetermined amount of charges for nursing home visits have not been entered into the clinical/financial system
Initial process review indicated gaps between clinical teams and the Billing group
Roles & responsibility changes
Limited baseline history of unbilled amounts but Clinical Director raised initial concern based on department financial reviews
Even with limited history for unbilled amounts data confirmed enough of a trend to signal an issue
Initial team meetings between clinical and billing teams quickly identified gaps with some simple Six Sigma Lean tools
Cause & Effect matrix prioritization, FMEA
Practical discussion revealed inputs entered into the system by clinical teams were being hung up in the system and not passed on or visible to billing
Incorrect service types being selected by clinicians for nursing home visits
Lack of connectivity between Clinical and Billing teams
What is the primary source of unbilled charges? It was quickly determined which department to focus on for unbilled charges.
Process error – Incorrect service codes selected for nursing home visits
Progress Report
Uh oh? Actually the result of process cleanup. Along with unbilled amounts, discovered charges not yet posted (incremental revenue for charges booked at month end). The increase in unbilled amounts caused a delay in charge entry.
Hospice Billing & Charge Capture Key Learnings
Co-location of charge entry/billing analyst with Clinical team
Proper system security access for clinical and financial teams
Education to clinicians on service code entry and mistake-proofing of the system to flag incorrect codes
The purpose of this project is to implement a new process to properly support the charging of Intravenous (IV) Solutions as dispensed from Pyxis med stations
Project Justification and Benefits :
Hospital operations are transitioning to the EPIC platform for the inpatient record and inpatient order entry portion of Gundersen Lutheran’s overall electronic health record in November, 2008
A process for charging IV Solutions needs to be implemented in advance as part of EPIC readiness deployment and to ensure revenues are captured during this interim period
Primary Process Change
Nurses administering IV solutions must capture the record in the Pyxis med station for charging
IV Charging Process – Inputs: IV inventory usage, remote stock of IVs, Nurse, Pyxis med station, “Pink Sheets”
Outputs: charge capture, IV billing, compliance rate of usage vs. billing
Creation of reports from different source systems to implement a measurement system for tracking IV charge compliance
Inventory usage report
Pyxis med station billing report
Invision billing report (for operating units not utilizing Pyxis)
Manual tracking of “Pink Sheets” for operating units without Pyxis or Invision
Monitored initial weeks of implementation and targeted additional education and support needed for operating units / departments
Any differences by Department? Compliance Minimum Target Rate = 50%
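The compliance metric tracked here is simply IVs charged divided by IVs dispensed, computed per department and compared to the 50% minimum target. A sketch with hypothetical department names and counts (none of these figures are from the project):

```python
# Per-department IV charge compliance vs. a 50% minimum target.
dispensed = {"Med/Surg": 420, "ICU": 310, "Oncology": 150}  # hypothetical usage
charged = {"Med/Surg": 260, "ICU": 130, "Oncology": 120}    # hypothetical billing

compliance = {dept: charged[dept] / dispensed[dept] for dept in dispensed}
below_target = [dept for dept, rate in compliance.items() if rate < 0.50]

for dept, rate in compliance.items():
    flag = "  <-- below 50% target" if rate < 0.50 else ""
    print(f"{dept:10s} {rate:5.1%}{flag}")
```

Departments falling below the target are exactly the ones that would receive the extra education and support described above.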
Targeting some Key Input Variables that could be improved with some effort:
For departments without Pyxis med stations improve manual method of capturing charges on “pink sheets”
Use Nurse Education to assist specific departments
Some departments used a stamp method as a reminder
Progress Report Objective: Increase compliance rate prior to EPIC deployment
Week 1 compliance rates by Department Final Week compliance rates by Department
Mike O’Neill Efficiency Improvement Leader Gundersen Lutheran Health System Mike O’Neill is a Master Black Belt, Efficiency Improvement Leader for the Gundersen Lutheran Health System in La Crosse, Wisconsin. Mike joined Gundersen in March, 2008 after spending 23 years in industrial manufacturing with Trane, an Ingersoll-Rand Company. Mike became a certified Black Belt and Master Black Belt during his tenure with Trane. He was the Six Sigma Leader for the commercial global finance team and led multiple transactional projects involving the order to cash cycle. His last assignment at Trane was Global Customer Quality Leader having responsibility for all warranty processes and policies, collecting customer quality information, establishing customer focused metrics, and timely claim resolution. Since joining the Healthcare industry Mike has been leading projects and mentoring project leaders in the application of Six Sigma in areas of revenue charge capture and billing process improvement. Mike has a bachelor’s degree in business administration and economics from the University of Wisconsin-Stevens Point and a master’s degree in business administration from University of Wisconsin-La Crosse.