
The Choices We Make as Evaluators: You Gotta Serve Somebody
By Steven E. Mayer, Ph.D.
Slightly revised from a presentation made to the 2012 Annual Evaluation Conference of the American Evaluation Association, as part of the Presidential Strand led by Dr. Rodney Hobson and facilitated by Dr. Ricardo Millett. The conference was in Minneapolis, hence the Bob Dylan reference.
I’m a professional evaluator, a professional nonprofit program evaluator, and this audience is all professional evaluators, members of the American Evaluation Association. As evaluators, we are always making choices. These choices may not always be apparent at the time one makes them. We may not even have made them consciously. But in retrospect – and retrospect provides good angles for viewing and understanding oneself – one can see many of the choices made, and perhaps the motives and values underlying them.
Even in the present, we can often see the choices we’re about to make. As evaluators, I would say we make choices based not only on technical considerations but also on unspoken and perhaps unrecognized personal values or points of view. It’s important that we know what these values are, so that we can be smart about making our choices in ways that stand to serve the larger purposes that define our careers and carve our life trajectories.
In support of this thesis, I’ll cite the great Minnesota bard, Bob Dylan, who wrote prophetically at a time of increasing personal awareness, “You’re gonna have to serve somebody.” I’ll give you just one of many verses with the repeating chorus.
“You may be a construction worker working on a home
You may be living in a mansion or you might live in a dome
You might own guns and you might even own tanks
You might be somebody’s landlord, you might even own banks.
“But you’re gonna have to serve somebody, yes you are
You’re gonna have to serve somebody
Well, it may be the devil or it may be the Lord
But you’re gonna have to serve somebody”
The choices we make reveal our values
Over the course of a career, one can see more clearly who or what we’ve been serving. Big picture, the vineyards we choose to work in are very personal choices; our projects reflect our values. Over time, the arenas or issues we choose form our CV, our curriculum vitae, our course of life. Becoming an evaluator is a choice. Then, do we choose to evaluate family and children’s programs, anti-hunger programs, system change efforts – in our own communities or elsewhere? Each is a choice. Over a career, the values guiding those choices become evident with reflection.
Closer up and more personal, how we describe or frame the social issues in which our work is embedded is a choice. The proposal we write in response to a perceived evaluation opportunity is filled with design choices. The program outcomes we choose to measure are a major choice – do we choose to measure mental illness or mental health? Those are different. Do we choose to measure lives saved or lives lost? Do we let one simple measure decide our understanding, or do we use multiple measures to discover the genuine complexity of progress or impact?
Our choices reflect a preference for a world view rife with unspoken value assumptions of right and wrong, good and bad, favoring the haves or favoring the have-nots, of how things were or how things could be. The language of mental illness, for example, came from 19th-century views of pathology; the language of mental health came from 20th-century views of holistic medicine. Times change. World views matter. Perspective and points of view matter. Evaluation is only partly a technical exercise; it’s also a valuing exercise, as the word itself tells you. Choosing to see only the technical invites ignoring the values one brings to a project.
Walking a mile in another’s moccasins
Seeing racism is a choice. In preparing for this presentation, I was influenced by a billboard seen in Duluth, part of a local awareness campaign. It said, “It’s hard to see racism when you’re White,” which sparked outrage in many parts of White Duluth.
It’s undeniably true that seeing racism is difficult if you’re White. But it’s there. The important part to see is not so much the personal, individual attitudes we may carry. The important part to see is that the systems we Whites have built are so much a part of us, an extension of us and our values and interests, that we can’t easily see how they work.
Unless we choose to look. Fish are the last to discover water, they say. We have to get out of our skin, our heads, our aquarium, and kind of step back to see how our systems might serve Us more than Them. This is the wisdom of the well-known Native expression encouraging us to test our vision and understand our values by “walking a mile in another’s moccasins.” Unless we do, we non-Natives don’t really see how the ground rules are tipped in our favor. Of course we don’t; we made these systems, and it makes sense that we made them to benefit us. Fortunately, since we made these systems, we also have the ability to fix them, to make them work better for Them as much as for Us. If we include the help of people whose points of view have been excluded, we can design and implement win-win solutions.
Can we choose to support change in the world?
It’s my observation that most of us who become evaluators do so out of the conviction that we can help society improve. Evaluation is a legitimate way, many of us believe, to lend one’s skills to that challenge.
Unfortunately, the Western tradition in which we receive our training doesn’t help. Evaluation as a field grew out of the sciences, and the sciences have favored the long, slow road of knowledge development over working for the public good now. I think most scientists would acknowledge that the public good is worth serving, ultimately, but I’m in favor of moving “ultimately” to a closer horizon.
Why does it take so long for the results of scientific work to bring improvements to the world? Here’s a cautionary tale that helped me light my own fire. It took evidence from perhaps thousands of research studies conducted in the scientific tradition over dozens of years before policy makers finally acknowledged the accumulated evidence that tobacco is a sufficiently dangerous drug that its use must be severely curtailed. Finally legislators got to the point of action, thanks to the persistent advocacy of change-makers, a growing groundswell of public opinion, and lawsuits. Must it be like that?
Back to the present, there is probably as much evidence that disparities exist in the performance of all our public systems and private markets – measures showing that Whites are favored over almost all other ethnic or cultural or racial groups in almost every arena of life – as there was that tobacco is harmful to our health.
As with acknowledging the dangers of smoked tobacco, much of society still refuses to believe all these disparities data. Maybe it’s more accurate to say that while people may believe the disparities data, they don’t yet see what “we” can do. “We” don’t even have to believe we caused it, we only have to believe we can correct it. Getting policy makers to that point where they can formulate and advance practical solutions is a big nut to crack. I would like to believe that evaluation can play a role in advancing change.
Being scientists vs. being advocates for improvement
The choice to help improve the state of our world requires that we acknowledge that intention. Here again our training as social scientists provides what might be seen as pushback, telling us to be objective and dispassionate. Must we choose between our role as social scientists and our care for the planet and its people? Can’t we be both? Can’t we be objective, dispassionate, and design smart rigorous evaluation inquiries that lean into change? Can we be objective and passionate in advocacy for change? Doesn’t awareness of our tendencies help create change within ourselves?
I say Yes, we can do both. This is not a new dilemma, but most of the discussion over the years has been moralistic – that is, should we be this or should we be that? I don’t think we have to choose between two different roles; rather, we as individuals have the opportunity to find our place on a continuum of engagement with the world. Do we seek evidence to gain knowledge, or do we seek evidence to inform action? Both!
Let’s make the discussion more practical. Skip the should question, and explore the how question. Why not discuss how we can be partisan and keep our principles of rigor, fairness, and demands for empirical evidence? What do the skill set, the job description, and the time sheet of an action-oriented program evaluator consist of? What does the course description for teaching the evaluation arts and crafts of changing the world look like?
Evaluators can expand their set of choices
On the premise that it’s easier to get forgiveness than permission, let’s start simply by expanding the boundaries of what’s called evaluation to include both work that improves theory and work that improves action. There. Done.
Those of us with research training already know how to do studies that embrace principles of science. We could begin our exploration of a more partisan role by examining key features of our evaluation designs for opportunities for the evaluation process and product themselves to have greater impact.
Imagine an opportunity to pitch an evaluation to a prospective client, however you define it. What issues of mission effectiveness are we choosing to focus on, what outcomes are we choosing to measure, and how are we choosing to measure them? Can we build in design features that will let us learn not only about the program’s effectiveness but also how the findings touch on larger issues – societal issues, planetary issues? Think big, but with grounding.
Beyond research design that answers immediate questions lies another point of focus – the consequences of our research projects. Who is going to read our studies? What actions would we like them to take? And who is “them”? The scientist part of ourselves can promote the search for better theories. The partisan part of ourselves can promote ways to use findings to advocate for and advance solutions.
A simple beginning: make the recommendations you put in your reports more actionable
Consider the last section of a report of your findings. In a science journal this may be called the “Implications” section. In a report to program stakeholders this is typically called “Recommendations.”
There is partisan gold in the formulation of useful recommendations. That’s why writing recommendations used to be forbidden in the temple of Science, or at least highly frowned upon; our role was only to discover. Evaluators as scientists are supposed to discover, through taking measures and making appropriate comparisons, the results of introduced innovations – but make recommendations as to how the innovating organization could (rhymes with should) improve results through recommended changes in their actions? No. We’re taught to say, “More research is needed.” But times have changed, as has our own sense of professional responsibility and opportunity. Carpe diem!
To whom and to what ends can recommendations be written? All the organization’s stakeholders are fair game, especially those in position to help improve the outcomes and mission effectiveness of the organization, or the system in which it’s embedded. In designing an evaluation inquiry, one should strive to answer the questions that key stakeholders are asking – that’s a given. A corollary is that the answers to an inquiry must be useful to those stakeholders. In short, the potential uses of an evaluation inquiry should help drive the design and conduct of the inquiry. Michael Q. Patton, esteemed colleague from just down the street, taught us this decades ago. In turn, these potential uses can also drive the wording and formulation of the recommendations that flow from the findings.
Inform and energize the evaluation’s stakeholders
Consider this typical set of nonprofit organizations’ stakeholders, and the nature of findings and recommendations they would find of interest.
- The organization’s board members want to know if their organization is credible, worthwhile, legit, and doing things they can be proud of. Recommendations that address shortcomings should be welcomed.
- Staff want to know how well their programs are working. Recommendations to address shortfalls in expectations, to improve mission effectiveness, should be welcomed.
- Past and potential clients, participants, or audiences want their hopes or suspicions affirmed, so they can feel good about their past or future choices, and perhaps tell others.
- Partner organizations want to know if their partnership is worthwhile. Recommendations that add value to the larger effort, such as by recruiting other just-right partners, should be welcomed.
- Donors and allies want to know they’ve given their resources wisely, and, hopefully, what more they can do in support. Evaluators can recommend how development staff could deliver these messages in ways consistent with the data, and with opportunities for organizational growth and improvement.
- Taxpayers and regulators want to know that their money has been well spent, or at least not badly or illegally spent. Recommendations to the organization’s Board that provide either reassurance or steps to be taken to fix the problem should be welcomed.
- Advocates who share the interests of the organization’s purposes want to know which elements of its programs show enough promise to be scaled up, multiplied, spread elsewhere, or otherwise expanded. Recommendations can highlight these.
- Policy makers in legislative, executive, judicial branches of local, state, and national governments want to know, or at least they should want to know, how to make public systems and private markets work better for more people, with decreasing disparities among different artificial groupings of people, especially those protected by law. Recommendations from an evaluation inquiry’s findings can be formulated to address this goal.
And the consequences of more informed and energized stakeholders?
Instead of “more research is needed,” we can imagine this:
- A more compelling case for improving (or the unspoken possibility of abandoning) the organization’s programs.
- Improved functioning of the organization.
- More, better benefits to the organization’s intended beneficiaries.
- Ramped-up efforts to bring the best of a program forward to a more impactful future.
- Reduced disparities, increased justice, improved chances of a viable planet, world peace, and so on.
In proposing such steps, I’m not suggesting we abandon our skills in creating rigor in our inquiries, or that we begin a project thinking we know what the findings will be, or that we favor a particular set of recommendations in advance of the discovery work. I am suggesting that we develop the partisan side of our professional practice – partisans for good program design, partisans for good stewardship of scarce resources, partisans for the creative use of well-grounded evidence, and partisans for a stronger and healthier society.
I’ve looked at the American Evaluation Association’s Guiding Principles for Evaluators and see, between the lines as well as more explicitly, a sense of permission for advancing on this front.
And if you’ve read this far you might also like these versions of Dylan’s song.
* * *
This blogpost was published in an earlier form to this website on February 9, 2021.
How to cite this blogpost: Mayer, Steven E., The Choices We Make as Evaluators: You Gotta Serve Somebody. Minneapolis: Effective Communities Project. Downloaded from EffectiveCommunities.com [month, date, year]