Program Evaluation Part 4 – My Theory of Evaluation


Introduction: Evaluations are part of our everyday lives. And yet, few of us know how to design and implement program and project evaluations in a logical and meaningful way. This four-part blog series aims to expand our collective understanding of the definitions, kinds, and implementations of evaluation and evaluation research. This part focuses on my personal theory of evaluation.

My Evolving Personal Theory of Evaluation: My ever-evolving personal evaluation theory takes the evaluation theories discussed earlier in this series, merges them, and spices them with some unique perspectives. Hallie Preskill (2013) laid out her evaluation theory by discussing her "critical assumptions concerning the purpose of evaluation, the nature of evaluation practice, the context of evaluation work, and how we communicate and report evaluation processes and findings" (p. 324). Because my evaluation experience is considerably less than Dr. Preskill's was when she wrote that chapter, I am going to approach my evaluation theory from a different angle, one that has more in common with Jennifer Greene's chapter (2013). Dr. Greene offered snapshots of where her theory stands now as well as the major influences and evaluation theories that undergird it. Since I have already discussed the evaluation theories that resonate with my evaluative thinking, I will focus here on my major influences and on how those theories intermesh with research methodologies to form my evaluation theory.

I resonated with many of the important influences Dr. Greene (2013) discussed as she assessed her own evaluation theory, ranging from Rachel Carson's Silent Spring to Paulo Freire's Pedagogy of the Oppressed. My major influences include Freire and Carson but extend to other people and their ideas. I find evaluation inspiration in the scientific method (see Figure 1), in any reflective practice (including my blogs and Twitter), and in the teaching and research enterprises themselves.

Figure 1. My version of the scientific method (Sorensen-Unruh, 2018, p. 36). The design reveals an underlying question: how many times would we, as scientists, ideally like to repeat this process? The answer is reflected in the layout: infinitely many times. The image also shows the iterative nature and experimental heart of the method (Sorensen-Unruh, 2018).
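To make that iteration concrete, here is a toy, runnable sketch of the loop in Python. The step functions and the stopping condition are illustrative placeholders of my own, not part of the figure; the point is the structure, in which the loop has no natural end:

```python
# A toy, runnable sketch of the iterative loop in Figure 1. Each step is a
# stub standing in for real scientific work; the names and the stopping
# condition are illustrative placeholders, not part of the original figure.

def observe(question):
    return f"observations about '{question}'"

def hypothesize(observations):
    return f"hypothesis built from {observations}"

def experiment(hypothesis):
    return f"results of testing {hypothesis}"

def analyze(results, question):
    # Analysis refines the question, which seeds the next cycle.
    return f"refined({question})"

question = "How do students learn best?"
cycle = 0
while True:                      # ideally, we would iterate forever
    results = experiment(hypothesize(observe(question)))
    question = analyze(results, question)
    cycle += 1
    if cycle >= 3:               # in practice, time and funding run out
        break
print(question)                  # the question, three refinements deeper
```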

I find my best ongoing evaluation of my courses, my research, and my work happens among my students, myself, and my Twitter PLN (professional learning network). My students and my PLN are therefore also major influences, mostly in terms of giving feedback and moving my thinking forward. The essential ideas/theories always at the forefront of my mind are critical pedagogy (which I've discussed at more length here), social justice and decolonization, and intersectionality as it is grounded in critical race theory and implemented as critical social theory (Collins, 2019). These essential ideas/theories factor into my evaluation theory as well by placing context, culture, and the wellbeing of those evaluated at the center of any evaluation I lead. Responsive evaluation, culturally responsive evaluation, and empowerment evaluation mix together nicely to reinforce these considerations.

So, where does my evaluation theory stand? While I am flexible in my assumptions about which evaluation theory I might use for a given program or project, certain aspects of my evaluative process would remain the same. Participatory/collaborative evaluation, whose goal is to evaluate the needs of those closest to the program, and approaches like empowerment and responsive evaluation, which intersect social justice with critical theory and give typically oppressed groups a voice (particularly with those in power), seem like excellent methods for beginning a transformative process for both the evaluand and the stakeholders. Context, social justice, and cultural insight are equally important pillars in my evaluation theory, and I think being consistently sensitive to these aspects of the evaluative process is critical. Using appreciative inquiry as an approach or mindset while the evaluation is completed highlights the strengths of the evaluand while also recognizing its growth edges. This mindset is particularly important for those new to the evaluative process, although I think attending to the positive psychology aspects of every evaluation matters.

Program stakeholder buy-in and involvement are integral to my evaluation theory. Without this major evaluative piece in place, I worry about compromised results as well as the overall value and use of the evaluation. I think evaluations should be iterative, with quick prototyping for both the evaluand and the evaluation itself. My methodology of choice tends to be mixed methods, although during my class presentation on Alkin's Chapter 2, I thought I had evolved into more of a qualitative researcher. I really wondered for a week after the presentation why I had earned a Statistics M.S. However, during an in-class debate in which our group argued for more quantitative approaches in evaluation, particularly when trying to establish causal relationships, I was reminded that I actually enjoy thinking about ways to use numbers to describe contextual situations, and that causal relationships are more easily established with big, open data sets. So, while I am enjoying qualitative methodologies right now, I am still a mixed methods researcher, since this methodology best describes the entire evaluative picture in my opinion.

I also like holding the data collected, as well as the analysis, against a standard or a set of standards, whether those standards are national (like the AEA's) or are developed specifically for a certain evaluative project. The important thing about the standards is that the stakeholders and the evaluator choose them collectively, so that everyone knows what the agreed-upon boundaries are. Standards also provide the benchmarks that the evaluation and the evaluand need to meet.
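As a concrete (and entirely hypothetical) illustration, checking findings against agreed-upon benchmarks can be as simple as the following sketch; the metric names and thresholds are invented for the example, not drawn from the AEA standards:

```python
# A minimal sketch of holding evaluation findings against agreed-upon
# standards. The metric names and thresholds are hypothetical; in practice,
# stakeholders and the evaluator would choose them together up front.

agreed_standards = {
    "participant_satisfaction": 4.0,  # mean rating on a 1-5 scale
    "completion_rate": 0.80,          # fraction of participants finishing
    "survey_response_rate": 0.60,     # fraction of surveys returned
}

findings = {
    "participant_satisfaction": 4.3,
    "completion_rate": 0.74,
    "survey_response_rate": 0.65,
}

for metric, benchmark in agreed_standards.items():
    value = findings[metric]
    verdict = "meets" if value >= benchmark else "falls short of"
    print(f"{metric}: {value} {verdict} the agreed benchmark ({benchmark})")
```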

Consistent communication with all stakeholders throughout the evaluative process is essential to maintaining buy-in and making sure the evaluative process meets the needs of the clients and stakeholders. I would want to engage in multiple forms of communication to make sure everyone continues to understand the evaluation process. Multiple forms of communication also help everyone involved understand where we currently are, and they help create a continuous feedback loop.

Finally, teaching the stakeholders about evaluation throughout the process might help build Fetterman's (2013) culture of evaluation, which could help the evaluative process, especially in future evaluations. Building a culture of evaluation could actually change individual behavior, which might, in turn, increase the impact of the evaluand (if it's a program, project, or business), which might increase the overall return on investment (ROI) for the initial evaluation (Sanchez, 2020). It's this cycle of evaluation, and the possibility of continuous growth in the evaluand and the stakeholders, that makes the evaluative process most exciting to me. It also increases the potential impact of any evaluative process.
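To ground the ROI language, here is a back-of-the-envelope sketch using the generic ROI formula; the numbers are invented for illustration, and Sanchez's (2020) actual methodology may differ:

```python
# A back-of-the-envelope ROI calculation for an evaluation. All figures are
# invented for illustration; this is the generic ROI formula, not the
# specific methodology from Sanchez (2020).

evaluation_cost = 25_000   # hypothetical cost of conducting the evaluation
estimated_gain = 40_000    # hypothetical value of improvements it prompted
                           # (e.g., behavior change raising program impact)

roi = (estimated_gain - evaluation_cost) / evaluation_cost
print(f"ROI: {roi:.0%}")   # ROI: 60% -- each dollar spent returned $1.60
```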

My evaluation theory has some static components, but for the most part, it is dynamic. I hope that each evaluation performed would be a growth journey for me as the evaluator, for the evaluand, and for the stakeholders. I also hope that each evaluation performed would help create a culture of evaluation, so that evaluation becomes the norm for the stakeholders involved. Then, and only then, would we begin to see a significant ROI and individual and cultural transformation for the evaluand.

References:

Collins, P. H. (2019). Intersectionality as critical social theory. Duke University Press. https://www.dukeupress.edu/intersectionality-as-critical-social-theory

Fetterman, D. M. (2013). Empowerment evaluation: Learning to think like an evaluator. In M. C. Alkin (Ed.), Evaluation roots: A wider perspective of theorists' views and influences (2nd ed., pp. 304-322). Sage Publications.

Greene, J. C. (2013). Making the world a better place through evaluation. In M. C. Alkin (Ed.), Evaluation roots: A wider perspective of theorists' views and influences (2nd ed., pp. 208-217). Sage Publications.

Preskill, H. (2013). The transformative power of evaluation: Passion, purpose, and practice. In M. C. Alkin (Ed.), Evaluation roots: A wider perspective of theorists' views and influences (2nd ed., pp. 323-333). Sage Publications.

Sanchez, D. (2020). Return on investment impact studies [Unpublished PowerPoint presentation]. University of New Mexico.

Sorensen-Unruh, C. (2018). A reflective teaching evolution: Using social media for teaching reflection and student engagement. In C. Sorensen-Unruh & T. Gupta (Eds.), Communicating chemistry through social media (ACS Symposium Series 1274, pp. 35-59). American Chemical Society. https://doi.org/10.1021/bk-2018-1274.ch003
