By the Numbers: Do Hennen, LJ Ratings and Other Scores Tell the Truth About Your Library? What makes a library good, or great? How can your library measure its effectiveness? What does success look like? How does a library compare to its peers? This program is an introduction to the benchmarking process: the use of data to assess and compare library performance. We will look at some pre-packaged comparison tools, the Hennen Rankings and LJ Ratings, along with some other resources for benchmarking, and present some of the pros and cons of each. Case studies are sprinkled throughout this talk; although they are based on real-life situations, we have changed names and locations “to protect the innocent.”
The trustees of Washington Park Public Library were agitated one fall when a nearby library, Lincoln Park, obtained a lot of local press. Lincoln Park had fared very well in the newly released Hennen American Public Library Ratings. Local newspapers and TV stations covered the library’s press conference, where it announced it was “the most used library in the region.” Over at Washington Park, the competitive juices started flowing. The trustees wanted their share of press attention and of public and private dollars, and were afraid they would lose out. They became obsessed with their lower rankings. The staff, who worked hard and were already stretched thin, were surprised and upset that the trustees now assumed Washington Park’s service was inferior. Everyone wanted to know: What makes Lincoln Park better than Washington Park? What is Hennen, and what does it mean? How can they brag about being the “most used”?
When Washington Park looked at their Hennen numbers, they were performing about 15% below Lincoln Park in circulation, visits, and program attendance. But they went a little further and looked at some things that were not in the Hennen formula. Using the IMLS data, they found that they were open about the same number of hours but had 20% fewer staff. This data allowed them to build a case with the trustees and elected officials: with more staff, we can get books checked in and back on the shelf the same day, instead of 3 days later, and process new materials faster. Although the local government had little money to spare for additional personnel, they accepted a long-range plan to add staffing hours each budget year. A library should not wait for a situation like this to benchmark.

Why should you benchmark or do comparison reports?
- Neutral evidence
- Advocacy: elected officials don’t want to be embarrassed by comparisons
- Review of comparative performance data builds library success
- Accountability for public and private dollars given to the library
- Quality improvement
- Identifies use patterns and needs that can support fundraising, grants, and advocacy
- Marketing
- Finding peer libraries for deeper benchmarking, best practices, and information exchanges
- Identifying staff training needs

Use when:
- Staff or board are complacent or self-satisfied
- Staff or board have a poor self-image or don’t recognize accomplishments
An independent city library on the East Coast was getting ready to prepare a strategic plan. The trustees were very interested in having a board retreat to discuss the future, and in having a phone survey of residents to get feedback on customer service. They were, however, reluctant to spend any effort benchmarking or comparing their library’s statistics with neighboring ones. “We only care about what our residents think,” said the board president. “We’re different from the surrounding communities, and what their libraries do is of little interest to us.” They glossed over their relatively low Hennen and LJ Ratings, saying that the ratings “favored big systems and libraries run by ALA head honchos.” Meanwhile, the library had several challenges: an aging building, a rapidly growing immigrant and ESL population, decreasing funding, and dissent among trustees and elected officials about the future of the library, all of which required attention in the strategic plan. Benchmarking would be an important activity as this library responds to community changes.

If it’s so beneficial, why don’t we do it?
- “That doesn’t have anything to do with us… we don’t care what others do”
- Comparison is equated with “competition” among libraries, and competition feels inappropriate
- We wrongly think benchmarking is unnecessary because each library is a local entity to be defined and evaluated only in its singular context
- A provincial, micro-view of challenges and operations
- Some staff and trustees don’t know the benefits or processes of benchmarking, and aren’t sure how to do it
- It takes time, and some of the services cost money
- Fear of results, especially if obligated to share them with elected officials or the public
- Knowledge that benchmarking has flaws and problems
- Concern that available benchmarking stats lag too far behind (1-2 years) to be of immediate use
If time prevents benchmarking, the least a library can do is look at its Hennen and LJ scores. Both rating systems have generated much debate. Statisticians and librarians argue over the formulas and the meaning of the rankings. High-ranked libraries celebrate their success, and low-ranked ones agonize over the reasons behind their numbers. Ten years after the first publication of the Hennen Ratings, hundreds of media stories have referenced them, the author has sold thousands of his detailed reports, and the website is heavily used. American Libraries’ editor, Leonard Kniffel, recently wrote, “I have never fielded as many media inquiries over an AL article as the Hennen ratings.” So what should we make of these systems? Here’s a closer look.
Hennen’s American Public Library Ratings are administered by Thomas Hennen, a Wisconsin library director who first published his formulas and ratings in 1999 in American Libraries. He was an independent researcher looking for a way to compare and benchmark libraries in a meaningful manner. Hennen uses the data you submit in your state annual report, which is forwarded to the federal government. It is now received and processed by the federal Institute of Museum and Library Services (IMLS), and the data is published, unfortunately, 1-2 years after you submit it, much as census data takes time to be processed. Hennen groups libraries by population served and then looks at 15 measures: inputs, such as funding and staffing, and outputs, such as circulation and visits per capita, that have varying weights in the formula, producing a final numerical score. You can find the scores of libraries by state on Hennen’s website. He also, for a fee, provides a customized report comparing your library to others, with some additional data not found on the website. There has been much debate and controversy over the weights, inputs, and outputs used. American Libraries decided in 1999 that the rankings would be “useful to the libraries who came out on top” and has published them ever since, because they give a snapshot, however incomplete, of how a library stands next to others. Our position is that Hennen can give you an imperfect but useful gauge of your library’s performance, compared to others, that can identify strengths and point out areas worthy of additional study. And the additional study is what points to possible courses of action.
For example, here are 3 public libraries in a midwestern state that serve populations of 25,000 to 49,999. Bacon Memorial is really puzzled by their low scores. Looking at the statistics, Hotwings has much higher circulation than the others, which makes them more successful in the Hennen ratings, because the formula includes circ per capita, per hour, and per visit. And how does Hotwings get that high circulation? Materials % of budget and materials $ per capita were close. What can it be? Here’s the answer: the ILL librarian noted different loan periods, which gave Hotwings 3 circs within 6 weeks as opposed to 2. Bacon decided that a 2-week loan period would be very inconvenient for their customers, but ran a trial period in which customers were allowed a 2nd renewal, as long as the item was not on reserve. Circulation went up 7% during the trial period, and the 2nd renewal became a permanent part of their public service strategy. More important than the numbers was the satisfaction of the minority of borrowers who wanted that 2nd renewal.
Another rating system exists to provide a snapshot comparison. Earlier this year, Library Journal announced a “better than Hennen” rating that would identify “America’s Star Libraries.” The LJ Index, authored by librarian/statistician Ray Lyons and Keith Curry Lance, groups libraries by total operating expenditures instead of population, and uses 4 outputs from IMLS data, weighted equally: circulation, program attendance, visits, and public computer use. Unfortunately, in rolling out their system, they criticized Hennen harshly and started an unproductive volley of accusations. Lyons and Lance accused Hennen of “combining and weighing so many variables…the rankings obscured the most important measure of all: public service.” Hennen countered that only 4 output measures, without any inputs or financial figures, make the LJ Ratings a weak assessment. We have some concerns about ranking libraries based on only four variables, but support the authors’ own words: “this should be one among several sources of information…decide how to incorporate it into a more comprehensive assessment process.”
Neither Hennen nor LJ does, or can, measure customer service, leadership, management, currency and breadth of collections, and other factors that play a role in a library’s community success. In Hennen’s first annual article, he admitted “data measurement cannot capture a friendly smile and warm greeting…nor the excitement of a child at storytime.” Ray Lyons admits libraries are rated as “excellent” only based on “a very arbitrary definition…the ratings yardstick is very, very crude,” “subjective and arbitrary.” The proof: many winners of the Bill & Melinda Gates Foundation and Library Journal “Best Small Library in America” award have ratings in the bottom half, including one winner from our own state: Milanof-Schock Library, Mount Joy (population 10K), score 454, 41st percentile.
There are some patterns and correlations among successful libraries in both tools. Lyons and Lance point out, in the LJ article, that “the location of a library community or demographics can have a dramatic impact on its service levels.” This is QUITE an understatement. If anything, we have found in doing benchmarking over the years that community demographics account for about half of library output performance. We believe that you could put a well-managed library in different settings and see very different outcomes. Most of the highest-rated libraries are well funded and in communities with residents of high educational level and/or income. They include wealthy suburbs and beach resorts. They tend, therefore, to have better-funded libraries with more buildings, bookmobiles, and hours open, leading to higher circulation and other outputs.
In the past 10 years, Hennen has noted that the top libraries tended to be in OH, NY, IN, IL, and MN, a pattern also reflected in LJ. This is going to change because of the recession. Now the challenge will be: how good can you be with limited resources? The higher the unemployment, the higher the use, and the lower the funding. PA All-Stars (libraries with 3, 4, or 5 stars based on their scores) were Atglen Public Library, Carnegie Library of Pittsburgh, New Cumberland Public Library, Sewickley Public Library, Green Tree Public Library, and Womelsdorf Community Library.
If Hennen and LJ give incomplete pictures, what other tools exist for benchmarking? When should a library use them?

Consider a large suburban county library in a southern state with 18 outlets: culturally diverse, mixed income and education, and good Hennen and LJ scores. Capital projects were underway, financed by a bond issue, and the library projected insufficient operating income to staff the new or enlarged branches due to the economic downturn. They had not had an increase in local funding for many years. How could they build their case for more funding in a very competitive county government budget process? They benchmarked using IMLS data to get more detail about their performance and learned the following:
- Staffing was stretched to the max; sick, vacationing, or absent staff created huge difficulties, with no one to replace them. The figures proved that they had 30% fewer staff than peer libraries.
- Statistics showed they had fewer than 2 books per capita, lower than peers, which affected circulation and holds.
- Expenditures per capita were less than peer libraries’.
- They hadn’t realized their program attendance was extraordinary and something to brag about. They are a Center for the Book with many famous author visits, which put them over the top.

With the numbers, they educated the board, were cheerleaders for staff productivity, and enlarged the conversation, looking beyond local matters to the bigger picture and neutral objectives. This ultimately resulted in a presentation to county commissioners, who learned how their funding compared to what sister cities provided. The trustees also had some one-on-one meetings with elected officials and released some data to the press. Commissioners are now starting to fund additional staff positions. Library commentary became more objective because it was backed by data and accurate descriptions of other libraries.
Libraries annually report their output measures to their state libraries, which pass the information on to the federal Institute of Museum and Library Services (IMLS). After processing, the data is posted online for anyone to use. The most recent data available is from fiscal year 2007. Some states, like NJ, have more recent information online for their own libraries. The Hennen Rankings and Library Journal (LJ) Index assign scores to libraries based on the IMLS data. Better yet, IMLS has its own online comparison tool, which provides a more detailed and useful benchmark than the two ratings.
One of the best things about the IMLS compare libraries website is you can identify peer libraries. You can find peers that you specifically name, like the library in the neighboring county, and you can ask the program to find libraries that have similar characteristics. We agree with IMLS that the best way to find similar libraries is to find ones with similar population and total operating expenditures, which statistically control for variations in size and funding. Using this tool to find peer libraries is one of the most valuable exercises you can do. If the group of libraries similar to yours in population and expenditures is large, you can further narrow it down. I like to choose the libraries that have a similar number of outlets, for example, because a library with 3 branches and one with 10 could have very different outputs, too. Some like to find peers within their own state or region. This list of peer libraries can allow you to build a benchmarking relationship. You can exchange detailed information, visit the “sister” libraries and share best practices. We’ll talk more about this a little later.
After identifying the peer comparison set, you select the information you want to see about those libraries, like circulation, materials expenditures, etc. The program produces a spreadsheet with the information, which you can save online on the IMLS website or download and keep as an Excel spreadsheet. In the past, IMLS would include the mean and median of all the libraries in the comparison group, but they have stopped doing so, much to my regret, because they thought it was inaccurate due to non-participating libraries. But it’s easy to add averages to an Excel spreadsheet if you export it that way. We like it because you can see the average circulation or other data for you and your peer group, and see whether your institution is above or below it. It is frustrating that this comparison tool runs 1-2 years behind, though another tool is only 1 year behind. And again, there are no measurements for management effectiveness, innovation, economic impact, and other statistics that can be useful for libraries. As with the other tools, this is a more detailed and complex snapshot that should lead you to identify strengths and areas for improvement that require further analysis.
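Since the exported file no longer carries the mean and median, you can append them yourself. Here is a minimal sketch using only Python’s standard library; the file paths and column names are hypothetical, and a spreadsheet formula like AVERAGE() would do the same job inside Excel.

```python
import csv
import statistics

def append_summary_rows(in_path: str, out_path: str, numeric_cols: list):
    """Read an exported peer-comparison CSV and append MEAN and MEDIAN
    rows for the chosen numeric columns (blank cells are skipped)."""
    with open(in_path, newline="") as f:
        rows = list(csv.DictReader(f))
    fields = list(rows[0].keys())

    def summary(label, fn):
        row = {k: "" for k in fields}
        row[fields[0]] = label          # label goes in the first column
        for col in numeric_cols:
            values = [float(r[col]) for r in rows if r[col]]
            row[col] = round(fn(values), 1)
        return row

    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        writer.writerows(rows)
        writer.writerow(summary("MEAN", statistics.mean))
        writer.writerow(summary("MEDIAN", statistics.median))
```

With the summary rows in place, a quick scan shows whether your library sits above or below the peer-group average on each measure.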
Public Library Association Statistical Report

The Public Library Association conducts an annual survey and publishes results roughly a year earlier than IMLS, allowing more timely comparisons. Published annually, the PLDS Statistical Report collects information on finances, library resources, annual use figures, and technology from more than 800 US and Canadian public libraries that voluntarily participate. Bonus: each year there is a special add-on survey highlighting one service area, like children’s programs or salaries. It costs $120.00 for the print version, but for $250/year you can have online access, which is much, much more useful. Online database users can view all tables, export them into Excel, compose graphs, and access historical PLDS databases beginning in 2006. U-I also manipulates the data for a hefty fee.
There are some key features that make libraries similar, and they are factors in Hennen and LJ. They help you make meaningful comparisons and provide opportunities for sharing, learning, and exchanges of best practices.

Collected by IMLS and PLA (library data):
- Service area population
- Total operating expenditures
- Number of outlets

Census data (demographics):
- Ethnicity of population
- Poverty level of population
- Education level of population
- Average income and housing of population

To be considered similar, peer libraries should ideally be within about 20% of your figures in these areas.
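The "within about 20%" rule of thumb above can be sketched as a simple filter. This is an illustration of the matching idea, not a feature of the IMLS tool; the field names and figures are hypothetical, and real values would come from the IMLS or PLA data.

```python
# Sketch of the "within about 20%" peer-matching rule.
# A candidate counts as a peer only if it is close to your library
# on every chosen measure.

def is_peer(mine: dict, other: dict,
            keys=("population", "expenditures", "outlets"),
            tolerance=0.20) -> bool:
    """True if `other` is within `tolerance` of `mine` on every key."""
    return all(abs(other[k] - mine[k]) <= tolerance * mine[k] for k in keys)

def find_peers(mine: dict, candidates: list, **kwargs) -> list:
    """Filter a candidate list down to the libraries that qualify as peers."""
    return [c for c in candidates if is_peer(mine, c, **kwargs)]
```

Tightening the tolerance, or adding demographic keys like poverty or education level, shrinks the peer set toward libraries whose situation most closely resembles yours.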
Peer library case study: a Pacific Northwest coast county library system did market research and benchmarking for a strategic plan. Outreach emerged as a concern. They were not collecting much bookmobile data; they didn’t know circulation per stop or card registrations from bookmobiles. No metrics. Staff were entrenched, and outreach leadership had been in place a long time. Peer libraries with bookmobiles were identified. The comparison found their bookmobile was underperforming, with a high cost per circulation compared to peers. Scheduling and location of stops were not well done. Programs and attendance were low. A new strategic plan and goals were developed for the bookmobile. Staff have received new marching orders; they were not proactive in the past. They are experimenting with new stops, weekend/evening hours, school stops, and bookmobile programs, and they are looking at vehicle type. For the first time, the board is looking at bookmobile expenditures and the return on the dollars spent: “Why are we in the bookmobile business?”
Let’s look at a more complete benchmarking case study. A very large urban East Coast library already knew its peers. The library is in a highly competitive urban environment where they care about the surrounding libraries’ performance and feel like underdogs. They were conducting a community needs assessment for strategic planning, fundraising, and advocacy, but they didn’t use metrics or numerical analysis in decision making. Their LJ and Hennen scores were in the top quarter, but not “stars.” Benchmarking with IMLS data, which added some additional urban libraries to their local peer group, found:
- Peers were better funded.
- Local peers had better-educated and more affluent populations.
- The library was more culturally diverse than peers, with higher high school dropout and unemployment rates.
- One local peer was very skilled at marketing, PR, and advocacy.
- The library’s program attendance was outstanding and not promoted enough; it is one of the top in the country, and they didn’t realize or celebrate it.
- The library had fewer service hours, which reduces circulation and other outputs.

The benchmarking helped to light a fire under a complacent board.
Although we have spelled out the limitations of the ratings and benchmarking systems, let’s restate their value:
- They benchmark libraries using a universal language and uniform statistics.
- They allow monitoring of key aspects of library performance to “help libraries to understand their operations, services, resource utilization, and user community.” (Lyons)
- “I think of them as a kind of filter…we do it as an advocacy tool and predictor of potential or additional success.”
- They are a rough “place-marker” of where a library stands among others.
- They spark your interest and motivation for deeper benchmarking. As Lyons wrote me, “it relates to what some writers call the ‘self-evaluating organization.’ The very fact that your library looks at statistics with a creative or ingenious mindset to find ways to improve service…that act itself, not the data, enables your library to improve! You are much more likely to find better answers with that critical approach than with a keep-doing-things-the-same-way approach.”
- They point toward other things to measure, such as economic impact, advocacy effectiveness, innovation, and marketing success.
- You get to brag and/or build a case for more support with the data.
- You can create a peer circle or benchmarking group to share best practices.
When your stats or ratings are low:
- Do some further study to see why; the reasons may be legitimate, whether it’s the length of loan periods, the age and roadworthiness of the bookmobile, demographics, or unreliable local funding.
- Use deficiencies to build a case for better funding, fundraising, and partnerships.
- Celebrate the good things: growth and positive patterns.
- Learn how peers do it better.
By The Numbers
By the Numbers: Do Hennen, LJ Ratings and Other Scores Tell the Truth? Nancy Davis, The Ivy Group Cathi Alloway, Dauphin County Library System
HENNEN WEIGHTS Expenditure Per Capita 3 Cost Per Circ 3 Visits Per Capita 3 Materials % of Budget 2 Collection Turnover 2 Circ Per FTE Staff Hour 2 Circ Per Capita 2 Circ Per Hour 2 Reference Per Capita 2 Materials $ Per Capita 2 FTE staff Per 1000 Pop 2 Periodicals Per 1000 Pop 1 Volumes Per Capita 1 Visits Per Hour 1 Circ Per Visit 1
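The weighting scheme above can be illustrated with a small sketch. This is not Hennen's actual computation (his published method ranks each measure against peer libraries in the same population category before weighting); the function below simply shows how 15 measures with weights of 3, 2, and 1 combine into a single score, and the sample percentile ranks are hypothetical.

```python
# Illustrative sketch of a Hennen-style weighted score.
# The weights come from the published HAPLR table; everything else
# (the percentile-rank inputs, the combining function) is a simplification.

HENNEN_WEIGHTS = {
    "expenditure_per_capita": 3, "cost_per_circ": 3, "visits_per_capita": 3,
    "materials_pct_of_budget": 2, "collection_turnover": 2,
    "circ_per_fte_staff_hour": 2, "circ_per_capita": 2, "circ_per_hour": 2,
    "reference_per_capita": 2, "materials_dollars_per_capita": 2,
    "fte_staff_per_1000_pop": 2,
    "periodicals_per_1000_pop": 1, "volumes_per_capita": 1,
    "visits_per_hour": 1, "circ_per_visit": 1,
}

def weighted_score(percentile_ranks: dict) -> float:
    """Combine per-measure percentile ranks (0-100) into one score
    using the HAPLR weights. Measures not supplied are skipped."""
    used = [m for m in percentile_ranks if m in HENNEN_WEIGHTS]
    total_weight = sum(HENNEN_WEIGHTS[m] for m in used)
    return sum(percentile_ranks[m] * HENNEN_WEIGHTS[m] for m in used) / total_weight
```

The key point the sketch makes visible: a library strong on the weight-3 measures (expenditure per capita, cost per circ, visits per capita) gains three times as much as one strong only on a weight-1 measure like circ per visit.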
LIBRARY / POP / SCORE / %ILE
Hotwings Mem PL: 25K, score 559, 58th percentile
Beefstock Mem PL: 25K, score 464, 43rd percentile
Bacon Mem PL: 25K, score 444, 40th percentile

Hotwings: 2-week loan period, 2 renewals = 3 circs in 6 weeks
Beefstock and Bacon: 3-week loan period, 1 renewal = 2 circs in 6 weeks
Visits
Circulation
Program Attendance
Public Internet Use
$$$$
Lower poverty
Higher education
Bigger collection
More outlets
Atglen
Carnegie Library of Pittsburgh
New Cumberland
Sewickley
Green Tree
Womelsdorf
YOUR PEER LIBRARIES

Collected by IMLS and PLA (library data):
- Service area population, AND
- Total operating expenditures, AND
- Number of outlets

Census data (demographics):
- Ethnicity of population
- Poverty level of population
- Education level of population
- Average income and housing