Public libraries are constantly generating statistics as a means of self-evaluation. At Brooklyn Public Library, we do this all the time. This really is important: you need a concrete, well-defined set of metrics in place in order to define your successes and failures. It is not good enough to ask for funding for your programs based on “Oh yeah, tons of people came. It was really crowded.”
Reference services have been under fire at public libraries for a while now. People feel they can find the information they need on the internet. Librarians have adopted all kinds of strategies to counter this assumption: we boast of roving reference, chat reference, and the librarian’s “evaluative” role in recommending good information, to name just a few. The list goes on, and they are all good arguments for continuing reference services. Reference work is something librarians have a white-knuckled clutch on: one way or another we will justify its existence and we will keep doing it. I can’t say I disagree, though I see today’s reference librarian as more of an information tour guide than an information authority. Either way, if we want to show how valuable our reference services are, we have to measure them somehow.
Reference surveys, just like reference work, are nothing new at public libraries. I have sat at a reference desk tallying my various interactions with patrons plenty of times. I want to present here a simple, almost too obvious problem with those surveys. Unfortunately, this problem extends to all kinds of observational scenarios. The problem lies in what physicists call the “observer effect”.
The basic premise of the observer effect is that the mere act of observing changes the outcome of your test. In physics, this refers to the way the observer affects the thing being observed: “for us to ‘see’ an electron, a photon must first interact with it, and this interaction will change the path of that electron”. In the social sciences, the observer effect strictly refers to the way subjects alter their own behavior when they know they are being watched. In a survey created to justify an existing service, one that is admittedly cherished by librarians, the effect can work in yet another way: the observer’s bias can alter the outcome. I’m extending the term to include the way that observers alter their own behavior when they know they are observing to achieve a desired result.
I’m skeptical of the wealth of data that public libraries accumulate without a strict, well-controlled test environment. Imagine the case of a librarian who is excited about the launch of their new roving reference initiative. In that case the observer wanders around the library, proactively and energetically seeking patrons with whom they can interact. Suddenly, there is a spike in reference activity! Clearly this new service model is the answer; never before have we seen such an abundance of reference questions! Sadly, this is bad data, right? Right. The observer effect came into play: the librarian’s enthusiasm skewed the test results. Two months later, when a bunch of sad and tired librarians perform the same survey, reference service could show no change at all, or even a decline.
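The scenario above can be sketched as a toy simulation. All of the numbers here are hypothetical, made up purely for illustration: patron demand for help is held exactly constant, and the only thing that changes between the two survey periods is the probability that the librarian notices or initiates each interaction, i.e. their enthusiasm.

```python
# Toy simulation of observer bias in a reference tally.
# All figures are invented for illustration; nothing here comes from real
# library data. Underlying demand is identical in both survey periods.
import random

random.seed(42)  # fixed seed so the sketch is repeatable

def tally(days, questions_per_day, capture_rate):
    """Count how many potential reference questions the observer records.

    questions_per_day -- true number of patrons per day who could use help
    capture_rate      -- chance the librarian notices/initiates each one
    """
    recorded = 0
    for _ in range(days * questions_per_day):
        if random.random() < capture_rate:
            recorded += 1
    return recorded

# Same 30-day period, same 40 potential questions per day in both cases.
enthusiastic = tally(30, 40, capture_rate=0.8)  # launch-week roving librarian
tired = tally(30, 40, capture_rate=0.4)         # same survey, two months later

print(f"launch month: {enthusiastic} questions recorded")
print(f"later month:  {tired} questions recorded")
```

The “spike” and the “decline” both fall out of the capture rate alone; the patrons never changed their behavior. That is exactly why a tally sheet, by itself, can’t distinguish a successful new service model from an energetic observer.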
This may seem like an overly verbose analysis of a glaringly obvious problem, but it is SO IMPORTANT to gather only useful, carefully arrived-at statistics. Gathering statistics takes a lot of time and energy, and too much information, or incorrect information, is distracting garbage. Libraries and librarians have neither time nor money to waste.