Benchmarking For Best Practice

A review of the benchmarking process for best practice. Another tool for continuous improvement.

  • Benchmarking can be internal or external, and in many cases we do both.
  • The benchmarking model is based on the U.S. Navy model. It is orderly, comprehensive, and easy to use. It is similar to Robert Camp's 12-stage methodology, which consists of: 1. select the subject; 2. define the process; 3. identify partners; 4. identify data sources; 5. collect data and select partners; 6. determine the gap; 7. establish process differences; 8. target future performance; 9. communicate; 10. adjust goals; 11. implement; 12. review and recalibrate. Many other benchmarking methodologies are in current use. Another example comes from "Assessment of Benchmarking within Government Organizations" (Mark Howard & Bill Kilmartin, Accenture, p. 15, May 2006). These authors describe a six-stage process: define the functional scope; develop a standard methodology and timeline; develop the team structure, roles, and representation; develop standard definitions and data-collection guidelines; determine the levels at which to collect data; and decide on the data to gather and the questions to be asked.
  • The foundation of a strong team is strong membership: people committed to making the team a success. Specific skills required include taking responsibility for the success of the team, following through on commitments, contributing to discussions, actively listening to others, getting your message across, and being willing to accept feedback. Getting started includes addressing the following issues early in team formation. Establish an agreed-upon purpose, and make certain that the purpose statement is always visible to the team; it will help the team stay on track with respect to its goals. Identify both internal and external stakeholders: those who can support, influence, or will be affected by the work of the team; it is important to keep them in the loop on team activities. Identify the expectations and limits of the team's work, and understand what is expected of you and your team by yourself, the stakeholders, and the leadership of your organization. Establish agreed-upon ground rules. Decide team logistics: when, where, and how often the team will meet. Have a well-defined purpose for each meeting, and address the site of the meeting, the supplies and equipment required, the time of the meeting, and the individual(s) responsible for providing the facilities and supplies and for recording and keeping the minutes. Groups, like individuals, go through stages of growth and development, and they need to operate in ways that maximize their chances of task success and minimize their chances of failure. Norms facilitate group success, make the behavior of group members simpler or more predictable, and help the group avoid or overcome embarrassing or damaging interpersonal problems. Norms express the central values of the group and clarify what is distinctive about the group's identity. Group norms play a large role in determining whether or not the team will be productive.
  • Clear vision is a characteristic of successful teams: knowing where you are going and how you are going to get there. Full participation means that everyone contributes to their full potential. Clear communication between team leaders and team members is essential; regular meetings and information sharing are important. Process improvement plans mean that teams periodically review how they are doing and make changes to improve their processes. Clearly defined roles mean that everyone knows what is expected of them and what they should be doing and when. Collaborative behavior means that people support one another and work interdependently to achieve the overall goal.
  • There are formal and informal roles, as well as formal and emerging leaders. Many tasks are handled informally: people simply volunteer for a particular kind of effort, such as keeping notes. It usually works best for the team to remain flexible in how it divides up its work. Champion (oversees and supports the benchmarking effort to help ensure alignment with stated goals and objectives). Team leader (educates team members about the team's purpose and limits; tracks the team's goals and achievements; anticipates and responds to change; helps team members develop skills; communicates with the entire organization; removes barriers to team progress; helps resolve conflicts; manages logistics). Team member (focuses on the purpose of the team; thinks about the success of the team as a whole; listens more than talks; communicates clearly; keeps commitments). Facilitator (provides training as needed; helps the team deal with conflict; coaches the team leader and members on skills; helps the team use basic problem-solving principles and tools; leads team meetings when difficult subjects are being discussed). Senior management representative (serves as a senior advisor and communication conduit between the team and the organization's leadership). Information manager (a team member who assists the team leader in establishing both internal and external communications on the team's activities and progress). Technical expert (an individual or individuals with working knowledge, understanding, and hands-on experience of the process, procedures, or technology related to the benchmarking study).
  • If the internet is used, care has to be taken with the information obtained: check facts and find original sources where possible. For specialized information, access to computerized library databases can be invaluable when beginning to accumulate information. Interviews and focus groups (really group interviews) are a good way to get information from the relevant sources. Interviews provide context and allow you to follow up on new information that you receive. Interviews should be planned with respect to the major questions to be asked, and they should be conducted consistently so that the same areas are covered in each interview. Archival data are records kept by organizations; for example, kilowatt-hours in a plant might be recorded on a monthly, weekly, or daily basis. Archival data can be extremely useful for benchmarking if they can be obtained, but they should meet the standards of reliability and validity discussed earlier. In some cases, surveys conducted by consortia or other organizations can be adapted for use in benchmarking. For some benchmarking studies, observation can be a useful way to understand how a particular process works; it allows the observer to collect data and can be combined with a follow-up interview. Surveys involve asking structured questions that are carefully designed and selected; they are most useful for collecting large amounts of data efficiently. The metrics used will depend on the benchmarking project, but relevance, feasibility, reliability, and validity should all be considered.
  • Reliability refers to the consistency of the answers that we get from a specific metric. If, for example, we measure something one month and then measure it again a month later, we should get similar answers if the metric is reliable. Unreliable metrics provide meaningless data at best and misleading data at worst. Validity is the property of a metric that concerns whether it measures what we really want to measure. For example, we might use speed and budget to measure the performance of new product development processes, when what we really want to measure is how successful the product will be in the market. Likewise, in shipbuilding, simply using gross tonnage to calculate efficiency may be misleading because efficiency depends on the type of ship being built, so a "compensated gross tonnage" formula is used to compute efficiency, e.g., man-hours per compensated gross ton.
  • Consider the following problem. A benchmarking study seeks to identify best practices in energy management in manufacturing. The metric to be used is energy use intensity (EUI), a measure of kWh/sf/month. The figure shows the measurements for five different plants over three months. Based on the table, one would conclude that Plant E has the lowest energy intensity, but the reliability of the metric can be examined using analysis of variance. The estimated reliability of the EUI metric for the five plants is 0.22, which means that most of the variation in the EUI scores is error, i.e., random fluctuation in the month-to-month measurements. The scores are not reliable enough to draw any conclusions; an acceptable reliability is 0.70 or higher.
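  The reliability estimate described above can be sketched as an ICC(1) computation from a one-way ANOVA, treating plants as groups and months as repeated measurements. The EUI values below are hypothetical, not the figures from the slide's table:

```python
# A sketch of the reliability check described above: ICC(1) from a
# one-way ANOVA, with plants as groups and months as repeated measures.
# The EUI values are hypothetical, not the slide's data.

def icc1(groups):
    """Estimate ICC(1): the share of variance due to true differences
    between groups rather than month-to-month measurement error."""
    n = len(groups)                      # number of plants
    k = len(groups[0])                   # months measured per plant
    grand = sum(x for g in groups for x in g) / (n * k)
    means = [sum(g) / k for g in groups]
    ms_between = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    ms_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical EUI (kWh/sf/month) for five plants over three months.
eui = [[10, 12, 11], [9, 11, 10], [12, 14, 13], [8, 10, 9], [11, 13, 12]]
print(round(icc1(eui), 3))  # 0.684 for this sample; below ~0.70 is suspect
```

  With the slide's actual table the same computation yields the 0.22 reliability quoted above, too unreliable to rank the plants.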
  • These are all ways that we can ensure that our metrics are valid. If reviewed by experts, they should agree that the metric measures what we want it to measure. It should also have a strong relationship with other metrics that would be expected to measure similar things, and very weak or even negative relationships with metrics that measure something very different. It should also be free from confounding: in an energy study of plants, for example, energy consumption could be related to the age and type of equipment used rather than to other practices.
  • Selecting the sample of organizations to study or to partner with is an important early step. There are several considerations for how samples are selected; each will be discussed.
  • The sample should contain a good mix of organizations that represent the process or product that you want to benchmark. Also choose organizations that are comparable with respect to what is being benchmarked; this does not necessarily mean the same industry. It is important to include organizations that vary in their practices and to capture a range of performance. Otherwise you might not discover what differentiates the best from what everybody does.
  • Nearest-neighbor bias occurs when a benchmarking team goes to the most readily available organizations. It is unlikely that you will learn anything new, since you are probably all doing something similar; you need to expand the frame of your sample beyond those you already know, if possible.
  • If we select only highly successful firms, we may not learn what we need to know. If you study successful organizations and they all sing the company song before beginning work, you might conclude that company songs are the key to success; but it may be that unsuccessful firms also sing company songs. Without studying unsuccessful firms and understanding what differentiates success from failure, you have not reached any useful conclusions. The key is having a range of performance on whatever metrics or outcomes are being studied, so that the key variables that cause the difference can be understood. Without both high-performing and average or low-performing organizations to study, it will be difficult to draw clear conclusions.
  • During World War II, the statistician Abraham Wald was asked to assess the vulnerability of airplanes to enemy fire. All the available data showed that some parts of planes were hit disproportionately more than others. The recommendation by the military seemed obvious: reinforce the parts that were hit most often. Wald, however, reached the opposite conclusion: he recommended that the parts hit least often be reinforced. Why? As a statistician, Wald recognized that sampling bias due to selection was occurring. All of the data came from planes that returned. The planes that were shot down could not be studied, so it was likely that the areas hit on the surviving planes were not the critical areas that resulted in the loss of a plane. A second example is provided by research on entrepreneurs. Studies show that successful entrepreneurs have two consistent traits: they persist despite failures, and they are able to persuade others to join them. Unfortunately, when unsuccessful entrepreneurs were studied, they had exactly the same traits.
  • This error occurs when we see a few particularly vivid events and tend to overgeneralize. An example is a new technology that produces very high savings in two organizations: we need more data to verify that it is really the new technology that is creating the savings.
  • We have a need to explain what we see, and when two things co-occur we look for a ready explanation. Innovative companies may be innovative because they hire really smart people, not because they have a particular strategic planning process.
  • An example: having a Six Sigma program is correlated with more efficient energy consumption. Six Sigma may not be causing the more efficient energy consumption; there may be other practices, associated with paying attention to quality, that cause the improvement.
  • Confirmatory bias is one of the most difficult biases to eliminate. Once we accept an idea or belief, we tend to seek out confirmatory information and avoid disconfirmatory evidence. In benchmarking research, this bias can cause premature acceptance of a practice as a "best practice" when more research might provide evidence to the contrary.
  • Groupthink occurs when a group reaches consensus without challenging or critically evaluating conclusions. This is particularly likely to occur when the group is cohesive and confirmation bias is present.
  • The hidden-profile problem occurs when one or more members of the group have information that is not shared with the other members. This can prevent an optimal solution to a problem, such as identifying problems with a proposed best practice.
  • Determining performance gaps assumes that you fully understand your own organization's processes and have reliable and valid information on the same metrics for best-in-class organizations. Gap analysis should be forward-looking: analyze not only your current gaps but also the gaps you expect in the future.
  • Gap analysis can help to pinpoint where you need to focus your attention and help begin to identify the processes and resources necessary to close the gap.
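  A gap analysis of this kind can be sketched in a few lines: compare your current metrics against best-in-class values and rank the relative gaps. The metric names, figures, and relative-gap formula below are illustrative assumptions, not figures from the deck:

```python
# A minimal gap-analysis sketch: rank how far current performance lags
# best-in-class on each metric.  All names and numbers are hypothetical.

current = {"first_pass_yield": 0.91, "on_time_delivery": 0.88, "cycle_time_days": 12.0}
best_in_class = {"first_pass_yield": 0.97, "on_time_delivery": 0.99, "cycle_time_days": 7.0}
higher_is_better = {"first_pass_yield": True, "on_time_delivery": True, "cycle_time_days": False}

def gaps(current, best, direction):
    """Return (metric, relative gap) pairs, largest gap first; a positive
    gap means best-in-class outperforms the current process."""
    out = []
    for m in current:
        if direction[m]:
            gap = (best[m] - current[m]) / best[m]
        else:
            gap = (current[m] - best[m]) / best[m]
        out.append((m, gap))
    return sorted(out, key=lambda t: t[1], reverse=True)

for metric, gap in gaps(current, best_in_class, higher_is_better):
    print(f"{metric}: {gap:+.1%}")
```

  Ranking the gaps this way points the team's attention at the process with the largest shortfall first, which is the "pinpoint where to focus" step the note describes.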

    Michael Barger
    Benchmarking For Best Practice

    What is Benchmarking?
    - A tool for continuous improvement
    - Complements Lean and Six Sigma activities
    - Lean and Six Sigma projects develop improved processes
    - Benchmarking finds best processes already developed

    Benchmarking's Roots
    - The term "benchmarking" was originally a land surveyor's term used in topographical surveys
    - A benchmark was a distinctive mark made on a rock, a wall, or a building that served as a reference point in determining one's current position and altitude
    - This was a sighting point from which additional measurements could be made

    Definitions
    - Benchmarking: a strategic and analytic process of continuously measuring an organization's products, services, and practices against a recognized leader
    - Best practice: a process or practice that causes performance, as measured by a specific metric, to be superior

    Best Practices Benchmarking
    - Best-practice benchmarking is straightforward for an organization that already uses metrics to track processes
    - Best-practice benchmarking can be used instead of a Six Sigma project or can complement one

    Types of Benchmarking
    - Process benchmarking: investigates business processes
      - The goal is to identify the best practices from other benchmark firms
      - Activity analysis is used to benchmark cost and efficiency
      - Often applied to back-office processes
    - Financial benchmarking: performing comparative financial analysis to assess overall competitiveness and productivity
    - Performance benchmarking: competitive analysis by comparing products and services against target firms

    Types of Benchmarking (continued)
    - Product benchmarking: focused on product improvement
      - Can involve reverse engineering, i.e., taking apart competitors' products to find strengths and weaknesses
    - Strategic benchmarking: observes how others compete
      - Usually not industry specific
      - Best to look at other industries
    - Functional benchmarking: focus on a single function to improve its operation
      - Complex functions such as human resources, finance, accounting, and IT are unlikely to be directly comparable
      - May need to focus on sub-processes to make valid comparisons

    The Four Phase Benchmarking Model
    - Phase I: Plan
    - Phase II: Do
    - Phase III: Act
    - Phase IV: Track

    I: The Plan Phase
    The Plan Phase
    - Select the process to be benchmarked
    - Select and prepare the benchmarking team
    - Identify benchmarking partners
    - Research information sources for best practices
    - Rank potential partners [1-50]
    - Select the final partners [1-5]
    - Know and use the benchmarking guidelines for ethical conduct

    Establishing a Benchmarking Project
    - Motivation
    - Team selection and building
    - Framing the problem
    - Select the process to be benchmarked
    - Senior management buy-in
    - Resource allocation
    - Identify benchmarking partners
    Motivation
    - Justification: where is the pain?
    - Resources
    - Return on investment
    - Result of current path

    Building the Team
    - Establish the team and its role to align with the overall goals and objectives
    - Identify formal and informal roles
    - Team building process

    Team Building Process
    - Careful member selection (skills, motivation, personality, availability)
    - Establish ground rules: the team's role and responsibilities
    - The organization's role and responsibilities
    - Other key issues in the team building process
    - Develop a team charter
    - Adopt the characteristics of a high performing team

    A High Performing Team
    - Clear vision
    - Full participation
    - Clear communication
    - Process improvement plan
    - Clearly defined roles
    - Collaborative behavior
    - Well-established decision process
    - Established norms
    - Team management process

    Team Introductions
    - Name
    - Job title & position
    - Professional/career goals
    - General background information (hobbies, marital status, family information, sports, etc.)
    - Knowledge/experience with benchmarking activities
    - Two-minute interviews & briefings by colleagues

    Formal Roles
    - Champion
    - Team leader
    - Team member
    - Facilitator
    - Senior management representative
    - Information manager
    - Technical expert

    Team Process Roles
    - Team leader
    - Facilitator
    - Scribe
    - Timekeeper
    - Roles can be rotated

    Team Documents
    - Team charter
    - Working agreement
    - Code of conduct
    - Team roles and processes
    - Communications plan

    Communication Plan
    - A communication plan is essential
    - Establishes roles and responsibilities
    - Addresses both internal and external information
    - Documents how information will be communicated (meetings, issues, deliverables, resources, project status, reports, etc.)
    - Documents to whom information will be communicated

    Communication Plan (Continued)
    - Informs leaders and others on what to expect in terms of how the team's benchmarking project may impact other projects and efforts within the organization
    - Creates a sense of shared commitment and ground rules within the team
    - Encourages a culture of openness, trust, and acceptance, and provides an opportunity to exchange ideas and information in a timely manner

    Frame the Problem
    - Develop a problem statement
    - Use an Ishikawa diagram or the 5 Whys method
    - Establish a root cause
    - Avoid presupposing the solution
    - Establish a clear vision of the outcome
    - Build consensus

    Stakeholders
    - Identify who the stakeholders are in your organization
    - Reach a group consensus
    - From the group consensus, identify which stakeholders will support your benchmarking team
    - Plan to brief these individuals on the approach, progress, and benchmarking findings at critical milestones
    - These individuals can help determine the success of the benchmarking effort

    Develop an Information Briefing
    - Vision
    - Plan
    - Methods
    - Expected results
    Selecting Benchmarking Partners
    - Research information sources for possible partners
      - Dun and Bradstreet
      - Business lists
      - Personal knowledge
      - Current business partners
    - Rank potential partners [1-50]
    - Establish selection criteria
    - Poll or survey the potential partners
    - Score information against selection criteria
    - Select the final partners [1-5]

    Partner Selection Criteria
    Sample criteria for identifying benchmarking partners:
    - Is the benchmarking partner comparable financially (similar revenues, sales, profits)?
    - Is the benchmarking partner of comparable size (similar number of employees)?
    - Does the partner engage in comparable functions (similar work processes, methods, practices)?
    - Does the partner have comparable outputs (similar products & services)?

    Partner Selection Criteria (Continued)
    - Does the partner have similar customer expectations?
    - Does the partner have comparable organization and divisional structures?
    - Does the partner have comparable inputs (supplies & ingredients)?
    - Is the partner part of a comparable market sector (public, private, government)?
    - Does the partner have comparable logistics (similar set-up & workflow)?
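    The "score information against selection criteria" step can be sketched as a simple weighted-scoring pass over candidate partners. The criteria weights, candidate names, and 1-5 scores below are hypothetical:

```python
# A sketch of scoring candidate benchmarking partners against weighted
# selection criteria.  Criteria, weights, and scores are hypothetical.

weights = {"financial": 0.2, "size": 0.1, "functions": 0.4, "outputs": 0.3}

candidates = {
    "Partner A": {"financial": 4, "size": 3, "functions": 5, "outputs": 4},
    "Partner B": {"financial": 5, "size": 5, "functions": 2, "outputs": 3},
    "Partner C": {"financial": 3, "size": 4, "functions": 4, "outputs": 5},
}

def rank(candidates, weights):
    """Weighted score per candidate (1-5 scale per criterion), best first."""
    scored = {name: sum(weights[c] * s for c, s in scores.items())
              for name, scores in candidates.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

for name, score in rank(candidates, weights):
    print(f"{name}: {score:.2f}")
```

    Weighting the criteria lets a team emphasize comparable functions and outputs over raw size, which is the spirit of the criteria lists above; the ranked output feeds the narrowing from [1-50] candidates to the final [1-5] partners.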
    II: The Do Phase

    The Do Phase
    - Determine the data collection plan
    - Collect and analyze the data
    - Train team members in skills & tools as necessary
    - Determine performance gaps and strengths
    - Analyze performance gaps & strengths
    - Produce a benchmarking report

    How Will the Data Be Collected?
    - What metrics will provide the data to compare current and benchmarked processes?
    - What kind of information or measurement is necessary (accuracy, quality, customer satisfaction, speed, etc.)?
    - What data exist on the internal process?
    - How should the team collect the internal data?
    - Who on the team will collect the data?

    How Will the Data Be Collected? (Cont.)
    - How will the team check its results?
    - How much time will be needed to collect the data?
    - How will the data be consolidated and analyzed?
    - How will the satisfaction of the stakeholders be evaluated during the process?
    - What method(s) should be used to collect data from the benchmarking partners?

    Research Methods
    - Internet-based information searches
    - Interviews/focus groups
    - Archival data
    - Observation
    - Surveys
    - Selecting and establishing metrics
    - Internal experts

    Some Examples of Metrics
    - Productivity, by transactions per unit
    - Accuracy, by error rate
    - Speed, by cycle time
    - Product stability, by the number of engineering change orders per month
    - Process financial contribution, by the value-to-cost ratio
    - Product availability, by fill rate
    - Product quality, by first-pass yield
    - Capacity, by volume managed
    - Service, by on-time delivery
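    Two of the metrics listed above reduce to trivial ratios; a quick worked example, with hypothetical counts:

```python
# Tiny worked examples for two of the listed metrics; the production
# and transaction counts are hypothetical.

def first_pass_yield(units_started, units_passed_first_time):
    """Product quality: share of units that pass without rework."""
    return units_passed_first_time / units_started

def error_rate(transactions, errors):
    """Accuracy: errors per transaction processed."""
    return errors / transactions

print(first_pass_yield(1200, 1068))  # 0.89
print(error_rate(5000, 35))          # 0.007
```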
    Considerations for Metrics
    - Reliability: consistent measurements
    - Validity: measuring the right things

    Reliability of Metrics
    - EUI* for five manufacturing plants over three months
    - *Energy Use Intensity (kWh/sf/month)
    - Note that the best plant cannot be determined: analysis of variance indicates that most of the variation is due to measurement error

    Validity of Metrics
    - Does it make sense to the experts?
    - Does it have a strong relationship with other metrics that are similar?
    - Does it have weak or dissimilar relationships with metrics that are different?
    - Is it free from other influences that might be causing the results?

    Sample Considerations
    - Sample selection
    - Nearest neighbor bias
    - Success bias
    - Selection bias

    Sample Selection
    - The sample should be representative
    - Processes should be comparable
    - Should represent a range of practices
    - Objective: discover what differentiates the best from the rest

    Nearest Neighbor Bias
    - "There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns -- the ones we don't know we don't know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones." -Donald Rumsfeld
    - Look beyond organizations that you already know about
    - Include organizations that are unfamiliar, but relevant, if possible

    Success Bias
    - Avoid selecting only highly successful firms
    - A range of success is needed
    - Differentiation is the key

    Selection Bias
    - We are often unaware that we are sampling the survivors
    - This produces a biased sample
    - The results may be misleading
    - Finding results for organizations that did not survive can help in understanding what differentiates a best practice

    Data Collection
    - Establish the purpose for the data
    - Define the type of data to be collected
    - Determine when and how the data will be collected
    - Determine who will collect the data

    Process and Tools
    - Check sheets
    - Histograms
    - Pareto analysis
    - Scatter diagrams
    - Fishbone analysis
    - Control charts
    - Multi-attribute utility analysis
    - Triangulation
    - Errors to avoid
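    One of the listed tools, Pareto analysis, can be sketched briefly: sort the defect categories by count and accumulate their share of the total to expose the "vital few" causes. The categories and counts below are hypothetical:

```python
# A sketch of Pareto analysis: rank defect categories by count and
# report each category's cumulative share of the total.  The categories
# and counts are hypothetical.

defects = {"wiring": 48, "fit": 30, "paint": 12, "labels": 6, "other": 4}

def pareto(counts):
    """Rows of (category, count, cumulative percent), largest first."""
    total = sum(counts.values())
    rows, running = [], 0
    for cat, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
        running += n
        rows.append((cat, n, 100 * running / total))
    return rows

for cat, n, cum in pareto(defects):
    print(f"{cat:7s} {n:3d} {cum:5.1f}%")  # top two rows cover 78% here
```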
    Some Pitfalls to Avoid
    - Small sample bias
    - Attribution bias
    - Correlation vs. causation
    - Confirmatory bias
    - Groupthink
    - Hidden profiles

    Small Sample Bias
    - Conclusions drawn from a small sample of observations
    - The observations may be outliers or random fluctuations
    - Larger samples are needed to draw stable conclusions

    Attribution Bias
    - An incorrect conclusion that A is causing B because it stands out
    - Example: innovation is attributed to a strategic planning process when the planning process has little to do with it
    - Collecting more data and seeking the root causes can help avoid this bias

    Correlation vs. Causation
    - Two variables are related, but A does not necessarily cause B
    - Need a better root-cause explanation than simply observing the relationship
    - Seek out a clear rational explanation for the relationship to establish causality

    Confirmatory Bias
    - Once an initial idea is accepted, we tend to seek out only confirmatory information
    - We avoid or disregard disconfirmatory information
    - Can lead to invalid conclusions
    - The facilitator and team should be aware of this tendency

    Groupthink
    - A feature of groups, particularly when they are cohesive
    - Need a "devil's advocate" to challenge too-ready acceptance of ideas and conclusions
    - Self-censorship and a majority view do not necessarily indicate unanimous consensus

    Hidden Profiles
    - Team members often bring different information to meetings
    - In many cases not all of the relevant information is shared
    - Good facilitation can overcome this bias

    Determine Gaps and Strengths
    - Analyze the gaps and strengths in your current business process against your benchmarking partners and determine areas to target for improvement
    - Do a performance gap analysis with a detailed comparison of the current process to the best-in-class
    - List the best practices and the strengths where benchmarking partners display superior performance, showing parity where there are no significant differences

    Determine Performance Gaps (Cont.)
    - Describe where your internal practices are superior to those of the benchmarked partners
    - Produce the analysis necessary for the benchmarking report and prepare to make recommendations to the process owners based on that analysis
    - Determine the reasons for the gaps, projecting any future competitive gaps

    III: The Act Phase

    Act Phase - Implementation
    - Develop an action plan for implementation
    - Communicate the action plan
    - Execute the action plan
    - Recalibrate benchmarks

    Transfer/Implementation
    - Strategically and tactically plan how you will incorporate the best practice within your organization, and record, analyze, and evaluate findings

    Implementation & Transferability Plan
    - Build awareness and communication
    - Ensure that there are sufficient resources
    - Build relationships between the source and the recipient of the knowledge
    - Assure timely implementation
    - Link to a strategic plan

    Develop an Action Plan
    Plan how to:
    - Achieve desired results
    - Measure the results
    - Monitor feedback on the process changes
    - Identify tasks necessary to implement the process changes
    - Identify necessary training

    Obtain Stakeholder Approval
    The following should be asked before implementation:
    - Is the plan clear?
    - Does the plan show how gaps in performance will be closed?
    - Will it lead to sufficient changes?
    - Has management supported the team and recognized its accomplishments?
    - Is the organization ready to implement the changes to the process successfully?
    - Is the process in place to implement the plan?
    - Are there sufficient resources for the plan?

    Implementation of Best Practices
    - Build awareness and communication
    - Ensure sufficient time and resources
    - Build relationships between the source and the recipient of the knowledge
    - Adhere to a suitable schedule
    - Strategically and tactically plan the implementation of the best practice
    - Record, analyze, and evaluate findings

    IV: The Track Phase

    Track Phase - Monitoring the Implementation
    - Review the effectiveness of the best-practice implementation
    - Modify the action plan as necessary to improve implementation
    - Review the effectiveness of the best practice
    - Continually monitor means for improvement

    Evaluation of Implementation
    - Follow-up surveys
    - Quasi-experimental designs
      - Single-group interrupted time series design
      - Interrupted time series design with control groups
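    The single-group interrupted time-series design can be sketched as a before/after comparison of the tracked metric around the implementation date. The monthly values below are hypothetical, and a real analysis would also model trend and seasonality rather than only comparing means:

```python
# A minimal sketch of a single-group interrupted time series: compare
# the tracked metric's mean before and after the best-practice
# implementation.  The monthly values are hypothetical.

before = [102, 98, 105, 101, 99, 103]  # six months pre-implementation
after = [91, 88, 93, 90, 89, 92]       # six months post-implementation

def mean(xs):
    return sum(xs) / len(xs)

shift = mean(after) - mean(before)
print(f"level shift: {shift:+.1f}")    # negative shift = the metric fell
```

    Adding a control group (the second design listed above) would rule out the possibility that the shift came from an organization-wide change rather than the implemented practice.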
    Recalibrate Benchmarking
    - The need for advancement still exists
    - Reset targets
    - New values become internal measurements for the next benchmarking effort
    - Establish a new process baseline
    - Continue to monitor your current best practice against others

    Monitor the Benchmarked Process
    - Confirm strategic alignment
    - Continually monitor customer satisfaction
    - Determine whether any additional world-class process has emerged

    Repeat the Cycle
    - Conduct ongoing visits with benchmarking partners
    - Consider recalibrating annually
    - Perform the steps of the benchmarking model
