I want to share some information with you from a resource I mentioned last month. The resource is Edward Suchman’s 1967 book, Evaluative Research, and the information is this diagram, which presents a basic model of evaluation:1
I share the diagram because it presents two ideas that don’t always percolate to the top of discussions of library outcome assessment. The first idea is the need for programmatic values to be made explicit beforehand. Suchman, who worked in the public health field, gave this example:
Suppose we begin with the value that it is better for people to have their own teeth rather than false teeth. We may then set our goal that people shall retain their teeth as long as possible.2
Of course, it’s quite possible to hold different values. For instance, one might prefer false teeth over natural ones. This might be based on the belief that lifelong dental care is too expensive, or perhaps on the realization that technological advances have made false teeth more durable than natural teeth. In this case the whole idea of program success would work differently. The deterioration of natural teeth would be seen as progress, since it prepares citizens for the advantageous transition to artificial ones.
An absurd example, probably. But it’s a useful way to illustrate how evaluation works regardless of the specific values involved. You cannot “demonstrate value” without having first announced what values you subscribe to. These values, whatever their content, define the mission (long-term goals) of institutions and agencies—civic participation, lifelong learning, lifelong dentures, and so on. The demonstrable value of a program is a measure (an estimate, actually) of the extent to which the program accomplishes the goals implied by the values.
The espoused values will be ideal, something like “good dental health improves the quality of life of the public.” Goals will be more specific, like “80% of Americans will have access to affordable natural teeth replacement.” Related objectives would be along the lines of “Tooth extraction services will be convenient, inexpensive, painless, and safe.”
Best Laid Plans
The second idea in Suchman’s diagram is that assessment is part of a more general planning process. Planning is another key reason for specifying goals (desired outcomes) up front. Without this information, program designers are in the dark about what the program is meant to accomplish. It will be no surprise, then, that the interventions and methods designers come up with turn out to be stabs in the dark!
A well-specified goal would be “70% of American adults will have false teeth within the next ten years.” Another would be “All American children will have false teeth by the time they are in 9th grade.” This sort of clarity tells program designers (and funders, also) exactly where the program is supposed to be headed.
Then it’s up to the program designers to figure out how to get there. In my dental health example, there will be multiple ways to convince the public that they will really love having manufactured teeth in their own and their children’s mouths. One way would be encouraging neglect of natural teeth, for instance, by making sugar readily available to pre-school children. Another might be levying a weighty tax on tooth brushes. You get the idea: potentially effective methods must be thought through with the basic goals in mind. This exercise will also lead to relevant drill-down objectives. (Sorry, I couldn’t resist.)
Suchman called this process Identifying Goal Activity (6pm in the diagram), which means devising intervention methods that seem likely to attain program objectives.3 Next is the actual enactment of the program, labeled Putting Goal Activity into Operation (8pm in the diagram).4
Only after the first five tasks (noon to 8pm on the circle) have been completed can program outcomes be assessed. Results from the outcome assessment(s) are then reconciled with the original values. (I purposely avoid the term alignment because of its faddishness!) This might lead to clarifications of or adjustments to values, goals, objectives, measures, and program activities. And then the cycle repeats, as the continuous clockwise arrows indicate.
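For readers who think in code, the cyclical structure can be sketched as a simple wrap-around sequence. Note the caveats: only “Identifying Goal Activity” and “Putting Goal Activity into Operation” are labels taken from the post; the other stage names below are my paraphrases of the stages described, not Suchman’s exact wording.

```python
# A minimal sketch of the evaluation cycle described above. Stage names other
# than the two quoted from the diagram are paraphrases, not Suchman's labels.
STAGES = [
    "value formation",
    "goal setting",
    "goal measuring",
    "identifying goal activity",
    "putting goal activity into operation",
    "assessing effects",
]

def next_stage(current: str) -> str:
    """Return the stage following `current`, wrapping back to the start
    (assessment results are reconciled with values, and the cycle repeats)."""
    i = STAGES.index(current)
    return STAGES[(i + 1) % len(STAGES)]
```

The wrap-around index is the whole point: assessment is not a terminal step but feeds back into value formation, which is what distinguishes this model from a one-way pipeline.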
Now consider Richard Orr’s classic evaluation model shown here:5
Notice that values are missing from this framework. The main point of the model is classifying traditional library statistics in a way that distinguishes these from ostensible benefits which library programs and services produce. Unlike the Suchman diagram, arrows in Orr’s model indicate causality. Resources necessarily produce (cause) capability which necessarily produces demand which necessarily produces utilization which necessarily produces beneficial effects. Obviously, Orr adopted a simplistic view of things. (Because of the certainty it implies the model can be called deterministic.)
Ye Olde Evaluation Steps
In real life, cause-and-effect linkages between programs and outcomes are hardly automatic. (We could say they are probabilistic.) Program evaluation practitioners understood this sixty years ago, as seen in these “essential steps in evaluation” taken from a 1955 U.S. Public Health Service report:
1. Identification of the goals to be evaluated
2. Analysis of the problems with which the [program] activity must cope
3. Description and standardization of the [program] activity6
4. Measurement of the degree of change that takes place [in the target population]
5. Determination of whether the observed change is due to the [program] activity or to some other cause
6. Some indication of the durability of the effects.7
Step 5 is the one I’m talking about. Now take a look at this diagram from the Suchman book:8
We can explore this diagram in depth some later time, but you can get the gist pretty quickly: Early evaluation practitioners realized that cause-and-effect relationships are really complicated. They did not settle for the optimistic linkages of the sort appearing in Orr’s model.
As for the essential steps from the 1955 report, you’ll see these basic ideas repeated in introductory books and articles on library assessment, sometimes presented as if they were a 21st century innovation! In fact, the ideas are well established. It is the library profession’s awareness of them that is new.
Truthfully, libraries haven’t really cut their teeth on the basic concepts of evaluation and assessment, like the steps listed above. I have been to assessment conferences and committee meetings where participants have only the vaguest understanding of essential ideas like these. Worse, these newcomers have to contend with misleading pronouncements made in (shall we call it) the professional narrative, the most egregious of which is the mantra that assessment is about demonstrating value. It is not. And I appeal to you to eschew such an ill-conceived idea!
Assessment is about determining value.
1 Suchman, E. A. (1967). Evaluative research: Principles and practice in public service and social action programs, New York: Russell Sage, p. 34.
2 Suchman, E. A., p. 35.
3 This activity is the same as the idea of logic models mentioned in my prior blog entry. In evaluation literature, a description of how program designers believe a program will work, including causal linkages between program components, is also called program theory.
4 In current evaluation literature this phase is called program implementation.
5 Orr, R.H. (1973). Measuring the goodness of library services: A general framework for considering quantitative measures, Journal of documentation, 29(3), 315-332. Orr’s model also came up in my April 2009 and January 2012 posts.
6 Here the term standardization refers to the idea of program fidelity introduced in my earlier entry.
7 U.S. Department of Health, Education & Welfare, (1955). Evaluation in mental health: A review of the problem of evaluating mental health activities, (U.S. Public Health Service Publication No. 413), Washington DC: U.S. Government Printing Office, p. 21. Underlining added. This report borrowed these essential steps from French, D. G., (1952). An approach to measuring results in social work: A report on the Michigan reconnaissance study of evaluative research in social work, New York: Columbia University Press, p. 178.
8 Suchman, E. A., p. 84.