There must be some children's librarians who have never experienced burn-out, who continue to face each day, each storytime, and each puppet show with enthusiasm and an unwavering sense of purpose - but I'm not one of them.
While my joy in my job has never taken a serious hit, I have sometimes questioned why we do what we do. Sometimes the answer jumps right back at me - we present storytimes to share the pleasure of books and reading with young children and their parents, for instance. We visit schools to awaken an urge in kids to come to our library. We conduct early literacy storytimes and workshops to demonstrate the power and importance of reading aloud. And so on.
However, the answer to "why do we present a Summer Reading Club" never satisfied me. (See this post for an earlier attempt to get at the meaning of SRC.) Or rather, the perfectly fine answers that everyone gives - to introduce kids to the pleasures of voluntary reading; to keep kids' reading skills from slipping over the summer; to provide a safe and fun place for kids to come during the summer - never seem to have much to do with how we actually conduct the Summer Reading Club.
And when I say "we," I'm talking about my own library system, although perhaps this applies to yours as well.
What do we do? We decide on a theme (it's "Treasured Islands" this year - think "pirates," think "branch libraries as nifty interconnected islands"). We have cool graphics on flyers and reading folders. We sign kids up for the club, give them folders in which to record their books, and give small incentives to reward them for coming to the library and/or attending programs and/or reading books. We offer free weekly programs. And that's all fine.
But then it's time to fill out the report, and there are no questions about how many kids really connected with a book for the first time or about whether kids had more positive feelings about the library at the end of summer than at the beginning, or even about how many kids came to the library for the very first time in order to join the Summer Reading Club.
No, the questions are all about how many kids signed up for the Club, how many programs we presented, and how many kids came to the programs. Oh, and whether the numbers had increased or decreased from the year before. True, there are questions about the effectiveness of various elements of the program (folders, incentives, etc.), about how the librarian promoted the program, and about possible ideas for the next year. But notice that none of the questions really measure the things that librarians value about the Summer Reading Club.
So every year as a children's librarian in a branch, I got excited about Summer Reading Club. And then the weeks went by and I either had fewer kids than usual (and then felt like a failure and worried about how those stats would look on my report) or I had plenty of kids, perhaps more kids than I could handle (and then I was overwhelmed and felt that I wasn't able to spend any meaningful time with any one kid because they came to my programs in a huge group, mobbed me afterwards to show me their reading folders, and then stampeded right out again).
What was missing for me was a strong sense of the outcome I wanted to achieve with my Summer Reading Club. I knew the output of my Club - the number of programs, the number of kids who came to my programs, the number of kids who signed up, and the number of kids who remained active participants all summer long - but those numbers didn't tell me (or our donors) whether the Club had any success at all in doing all those things that children's librarians want Summer Reading Club to do. Did any children read a book they really loved? Did any children read enough to keep their skills from slipping? Did kids enjoy the programs? Sure, I might have a sense of the answers to those questions, but I couldn't prove it because I wasn't measuring it.
Ah! So perhaps I was measuring the wrong things. But what exactly did I need to measure and how on earth was I to measure it? This is where the idea of outcomes comes in. In order to measure a program's success (or lack thereof), one needs to know what the outcome of the program should be - and in order to decide what the outcome should be, one needs to know a heck of a lot about the community, about the library's mission statement, about the branch's goals and objectives, and much more.
This is where it gets challenging - and fun. Outcomes can be all sorts of measurable things - gaining skills, gaining knowledge, improving attitudes, changing behavior. Let's say I've done my homework and determined that my Summer Reading Club should be all about turning kids on to the joys of reading for pleasure. My desired outcomes could be, oh... "by the end of summer, participants will have voluntarily read at least one book that they really enjoyed and they will have learned how to find more books that they will enjoy reading." Or maybe "by the end of summer, participants who identified themselves as reluctant readers will have a positive, improved attitude about reading for pleasure." And so on. It has to be measurable, even if one doesn't actually measure it - before-and-after surveys might be one way, focus groups might be another, and informal interviews, observation, and anecdotal accounts might all be ways of evaluating whether the desired outcome has come about.
In a way, measuring is almost beside the point for me in this particular hypothetical example. The point is when I start thinking about the Summer Reading Club in terms of the outcomes I desire from it, my whole attitude about and approach to it begins to change. Suddenly every activity must be looked at anew, with the question "will this bring about my desired outcome?" I might change the way I've always "rewarded" kids for reading. I might offer different sorts of programs, or a different number. I would certainly make an attempt to infuse everything I did with the idea that there is just the right book for every child, even the most reluctant of readers.
If my library system were using the Be Creative @ Your Library campaign, I might decide that my outcome should be "participants will learn many different ways they can express their creativity, and will produce their own works using these methods." I could offer many different programs highlighting watercolor painting, writing haiku, creating music with digital technology, drumming, and on and on - and a craft component would be included. Thus each program would have its own outcome - "in the haiku program, children will learn what a haiku is and write their own" - which would combine to create the desired SRC outcome. These could be measured by simple surveys and even just by observing what kids did in the programs.
Planning programs and services for outcomes-based results is an excellent way to stave off burn-out. Suddenly I am offering the community something that has a real meaning, a real point, and a real impact - and I can prove it. It's better than offering the same old programs over and over and over. It's refreshing, it's invigorating, and it's hugely appealing to donors and government agencies.
And it's made me think about my old bugaboo, the Summer Reading Club, in a whole new way - not to mention a host of other traditional and potential programs and services.
Thinking about the outcomes I want to see from the programs and services I offer, the measurable difference, even if very small, that they can make in my patrons' lives, makes me love my job all over again.