As evaluators, we are always making choices. These choices may not always be apparent at the time one makes them. We may not even have made them consciously. But in retrospect – and retrospect provides good angles for viewing and understanding oneself – one can see many of the choices made.
Even in the present, we can see the choices we’re about to make. As evaluators, I would say we make choices based not only on technical considerations but also on unspoken and perhaps unrecognized personal values or points of view. It’s important that we know what these values are, so that we can make choices that serve the larger purposes at work in our career and life trajectories.
In support of this thesis, I’ll cite the great Minnesota bard, Bob Dylan, who wrote prophetically at a time of increasing personal awareness, “You’re gonna have to serve somebody.” I’ll give you just one of many verses with the repeating chorus.
“You may be a construction worker working on a home
You may be living in a mansion or you might live in a dome
You might own guns and you might even own tanks
You might be somebody’s landlord, you might even own banks
“But you’re gonna have to serve somebody, yes you are
You’re gonna have to serve somebody
Well, it may be the devil or it may be the Lord
But you’re gonna have to serve somebody”
The choices we make reveal our values
Over the course of a career, one can see more clearly who or what we’ve been serving. Big picture, the vineyards we choose to work in are very personal choices; our projects reflect our values. Over time, the arenas or issues we choose form our CV, our curriculum vitae, our course of life, our priorities. Becoming an evaluator is a choice. Do we evaluate family and children’s programs, or museum programs, or food distribution programs? Each is a choice. Over a career, the values guiding those choices become evident with reflection.
More close up, how we describe or frame the social issues in which our work is embedded is a choice. The proposal we write in response to a perceived evaluation opportunity is filled with design choices. The program outcomes we choose to measure are a major choice: do we choose to measure mental illness or mental health? Do we choose to measure lives saved or lives lost? Do we let one simple measure settle our understanding, or do we seek to discover the genuine complexity of progress or impact with multiple measures?
Our choices reflect a preference for a world view, rife with unspoken value assumptions of right and wrong, good and bad, of favoring the haves vs. the have-nots, of how things were vs. how things could be. The language of mental illness, for example, came from 19th-century views of pathology; the language of mental health came from 20th-century views of holistic medicine. Times change. World views matter. Perspective and points of view matter. Evaluation is only partly a technical exercise; it’s also a valuing exercise, as the word itself tells you. Choosing to see only the technical invites ignoring the values one brings to a project.
Here’s a big one: Choosing to see the dark side of life, or to look away
For example, seeing racism is a choice. In preparing for this presentation, I was inspired by a billboard seen in Duluth, part of a local awareness campaign. It said, “It’s hard to see racism when you’re White,” which sparked outrage in many parts of White Duluth.
It’s undeniably true that seeing racism is difficult if you’re White. But it’s there. The important part to see is not so much the personal, individual attitudes we may carry. The important part to see is that the systems we Whites have built are so much a part of us, an extension of us and our values and interests, that we can’t easily see how they work.
Unless we choose to look. Fish are the last to discover water, they say. We have to get out of our skin, our heads, our aquarium, and step back to see how our systems serve Us and not Them. This is the wisdom of the well-known Native expression encouraging us to test our vision and understand our values by “walking a mile in another’s moccasins.” Unless we do, we non-Natives don’t really see how the ground rules are tipped in our favor. Of course we don’t: we made these systems, and it makes sense that we made them to benefit us. Fortunately, since we made these systems, we also have the ability to fix them, to make them work better for people not so much like us. With the help of people whose points of view had been excluded, we can design and implement win-win solutions.
Can we choose to change the world, or at least try?
It’s my observation that most of us who become evaluators do so out of a conviction that we can help society improve. Evaluation is a legitimate way, many of us believe, to lend one’s skills to that challenge.
Unfortunately, the Western tradition in which we receive our training doesn’t help. Evaluation as a field grew out of the sciences, and the sciences have favored the long, slow road of knowledge development over working for the public good, though most scientists would acknowledge that the public good is worth serving, ultimately. I’m in favor of moving “ultimately” to a closer horizon.
Here’s a cautionary tale that helped light my own fire. Why does it take so long for the results of our work to bring improvements to the world? For example, it took evidence from perhaps thousands of research studies conducted in the scientific tradition over dozens of years to move policy makers to act on the accumulated evidence that tobacco is a sufficiently dangerous drug that its use must be severely curtailed. Finally, legislators got to that point, thanks to the persistent advocacy of change-makers and a growing groundswell of public opinion.
Back to the present, there is probably as much evidence that disparities exist in the performance of all our public systems and private markets – measures showing that Whites are favored over almost all other ethnic or cultural or racial groups in almost every arena of life – as there was that tobacco is harmful to our health.
As with the dangers of tobacco, much of society still refuses to believe the disparities data. Maybe it’s more accurate to say that while people may believe the disparities data, they don’t yet see practical solutions; the problems are complex enough that we don’t yet see what the many types of “we” can do. Getting policy makers to the point where they can formulate and advance practical solutions is a big nut to crack. I would like to believe that evaluation can play a role in advancing change.
Being scientists vs. being advocates for improvement
The choice to help improve the state of our world requires that we acknowledge that intention. Here again our training as social scientists provides a rebuttal, telling us to be objective and dispassionate – as if those stances were incompatible with caring. Must we choose between our role as social scientists and our care for the planet and its people? Can’t we be both? Can’t we be objective and dispassionate, and design smart, rigorous evaluation inquiries that lean into change?
I say Yes, we can do both. This is not a new dilemma, but most of the discussion over the years has been moralistic – that is, should we be this or should we be that? I think we needn’t choose between two different roles; rather, each of us can find a place on a continuum of engagement with the world. Do we seek evidence to gain knowledge or do we seek evidence to inform action? Both!
Let’s make the discussion more practical. Why not discuss how we can be partisan while keeping our principles of rigor, fairness, and demands for empirical evidence? What do the skill set, the job description, and the time sheet of an action-oriented program evaluator consist of? What does the course description for teaching the evaluation arts and crafts of changing the world look like?
Evaluators can expand their set of choices
On the premise that it’s easier to get forgiveness than permission, let’s start simply by expanding the boundaries of what’s called evaluation to include both work that improves theory and work that improves action. There. Done.
Those of us with research training already know how to do studies that embrace principles of science. We could begin our exploration of a more partisan role by examining key features of our evaluation designs for opportunities for the evaluation process and product themselves to have greater impact.
Imagine an opportunity to pitch an evaluation to a prospective client, however you define it. What issues of mission effectiveness are we choosing to focus on, what outcomes are we choosing to measure, and how are we choosing to measure them? Can we build in design features that will let us learn not only about the program’s effectiveness but also about how the findings touch on larger issues – societal issues, planetary issues? Think big, but with grounding.
Beyond research design that answers immediate questions lies another point of focus – the consequences of our research projects. Who is going to read our studies? What actions would we like them to take? And who is “them”? The scientist part of ourselves can promote the search for better theories. The partisan part of ourselves can promote ways to use findings to advocate and advance solutions.
A simple beginning: beef up your recommendations
Consider the last section of a report of your findings. In a science journal this may be called the “Implications” section. In a report to program stakeholders it is typically called “Recommendations.”
There is partisan gold in the formulation of useful recommendations. That’s why writing recommendations used to be forbidden in the temple of Science, or at least highly frowned upon; our role was only to discover. Evaluators are supposed to discover, through taking measures and making appropriate comparisons, the results of introduced innovations – but make recommendations as to how the organization could (rhymes with should) improve results or take the program further? No, “more research is needed,” goes the joke. But times have changed, as has our own sense of professional responsibility and opportunity. Carpe diem!
To whom and to what ends can recommendations be written? All the organization’s stakeholders are fair game, especially those in position to help improve the outcomes and mission effectiveness of the organization, or the system in which it’s embedded. In designing an evaluation inquiry, one should strive to answer the questions that key stakeholders are asking. A corollary is that the answers to an inquiry must be useful to those stakeholders. In short, the potential uses of an evaluation inquiry should help drive the design and conduct of the inquiry. In turn, these potential uses can also drive the formulation of its recommendations.
Inform and energize the evaluation’s stakeholders
Consider this typical set of nonprofit organizations’ stakeholders, and the nature of findings and recommendations they would find of interest.
The organization’s board members want to know if their organization is credible, worthwhile, legit, and doing things they can be proud of. Recommendations that address shortcomings should be welcomed.
Staff want to know how well their programs are working. Recommendations to address shortfalls in expectations, to improve mission effectiveness, should be welcomed.
Past and potential clients, participants, or audiences want their hopes or suspicions affirmed, so they can feel good about their past or future choices, and perhaps tell others.
Partner organizations want to know if their partnership is worthwhile. Recommendations that add value to the larger effort, including by recruiting other just-right partners, should be welcomed.
Donors and allies want to know they’ve given their resources wisely and, hopefully, what more they can do in support. Evaluators can recommend how development staff could deliver these messages in ways consistent with the data and with opportunities for organizational growth and improvement.
Taxpayers and regulators want to know that their money has been well spent, or at least not badly or illegally spent. Recommendations to the organization’s board that provide either reassurance or steps to fix a problem should be welcomed.
Advocates who share the interests of the organization’s purposes want to know which elements of its programs show enough promise to be scaled up, multiplied, spread elsewhere, or otherwise expanded. Recommendations can highlight these.
Policy makers in legislative, executive, judicial branches of local, state, and national governments want to know, or at least they should want to know, how to make public systems and private markets work better for more people, with decreasing disparities among different artificial groupings of people, especially those protected by law. Recommendations from an evaluation inquiry’s findings can be formulated to address this goal.
And to what ends? More and better benefits to the organization’s intended beneficiaries. Improved functioning of the organization. A more compelling case for improving (or the unspoken possibility of abandoning) the organization’s programs. Ramped-up efforts to bring the best of a program forward to a more impactful future. Reduced disparities, increased justice, improved chances of a viable planet, world peace, and so on.
In proposing such steps, I’m not suggesting we abandon our skills in creating rigor in our inquiries, or that we begin a project thinking we know what the findings will be, or that we favor a particular set of recommendations in advance of the discovery work. I am suggesting that we develop the partisan side of our professional practice – partisans for good program design, partisans for good stewardship of scarce resources, partisans for the creative use of well-grounded evidence, and partisans for a stronger and healthier society.
I’ve looked at the American Evaluation Association’s Guiding Principles for Evaluators and see, between the lines as well as more explicitly, a sense of permission for advancing on this front.
* * *
Steven E. Mayer, Ph.D. / Effective Communities Project / February 9, 2021
Revised from a presentation made to the 2012 Annual Evaluation Conference of the American Evaluation Association, part of the Presidential Strand led by Dr. Rodney Hobson and facilitated by Dr. Ricardo Millett. The conference was in Minneapolis, hence the Bob Dylan reference. And if you’ve read this footnote you might also like this version of Dylan’s song.