Thoughts about the 2018 AEA conference

American Evaluation Association conference in Cleveland, October 31 – November 3, 2018

My homebody tendencies were whispering that I should stay home from the AEA conference this year, as they often do when I’m confronting the reality of a trip. But in the end I went, and as usual, it was greatly worthwhile.

The conference theme was “speaking truth to power,” which seemed timely, and also prompted a lot of reflection. Each of the terms in this phrase needs to be unpacked; even the construction of the phrase requires a critical look.

My conference mission included looking for new ideas and approaches to collecting richly detailed data from program participants, and to using evaluation to support positive change. The conference offered many opportunities in this regard. It was well worth the investment!

Root cause analysis/fishbone analysis

One session, “Critiquing adult participation in education through root cause analysis,” given by Margaret Patterson, introduced me to root cause analysis, often conducted through fishbone (Ishikawa) diagramming. The project she described involved asking small groups of participants to articulate the barriers they experienced in pursuing adult education opportunities. The “fishbone” diagrams that resulted from these discussions provided insights into her participants’ perceptions. I think this technique can be a powerful way to give people significant input into decisions that affect them, and I look forward to experimenting with this approach.
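
To make the mechanics of the technique concrete, here is a minimal sketch (my own illustration, not from the session) of how barrier themes gathered in small-group discussions might be organized into the “bones” of a fishbone diagram. The category names and barriers are hypothetical examples.

```python
# A rough sketch: group facilitator notes into fishbone categories.
# Categories and barriers below are invented for illustration only.
from collections import defaultdict

# Each tuple is (category, barrier) as a facilitator might record it.
raw_notes = [
    ("Time", "Work schedules conflict with class times"),
    ("Time", "Family caregiving responsibilities"),
    ("Cost", "Tuition and fees"),
    ("Cost", "Transportation expenses"),
    ("Awareness", "Didn't know programs existed"),
    ("Confidence", "Past negative experiences with school"),
]

# The "head" of the fish is the effect being analyzed.
effect = "Low participation in adult education"

# Group barriers by category -- each category becomes one bone of the fish.
bones = defaultdict(list)
for category, barrier in raw_notes:
    bones[category].append(barrier)

# Print a simple text outline of the diagram.
print(f"Effect: {effect}")
for category, barriers in bones.items():
    print(f"  {category}:")
    for barrier in barriers:
        print(f"    - {barrier}")
```

In practice the grouping and naming of categories would come from the participants themselves rather than from the analyst, which is much of the point of the approach.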

(As an aside, the presenter also mentioned that the participants had discussed ideas for addressing some of the identified issues. Some of these ideas sounded genuinely creative, and they set me thinking: many projects intended to address social problems are predicated on the idea that a social service organization or agency will identify needed services and then provide them. I wonder whether it would sometimes be better to support communities in identifying their own solutions and developing their own capacities, rather than applying someone else’s.)

Unlearning

Another provocative framework was introduced as “unlearning,” in a workshop by Chera Reid, Anna Cruz, Maria Rosario Jackson, and one other presenter whose name was not on the program (and which, unfortunately, I didn’t write down). The workshop was called “Accepting the invitation to unlearn: Insights from seekers of systems change.” The presenters encouraged us to think about and experience our work with new eyes. One of them led us through a Feldenkrais exercise in which we connected our minds with our bodily sensations of breath and movement. I like the “unlearning” frame, in part because I agree that a lot of what we “know” may not actually be true, or at least may not be all there is to know. I suspect I have a lot of unlearning to do…

Photovoice

Immediately following this session, a group of grad students from UNC-Greensboro presented a photovoice project in which they shared photographs that symbolized the conference theme.

Systems in Evaluation

My professional home in AEA is the Systems in Evaluation Topical Interest Group (SETIG). My master’s degree was in Antioch University Seattle’s Whole Systems Design program, which allowed me to delve into the application of systems concepts, often in organizational contexts. Although I didn’t study (or even know about) evaluation at that juncture, the WSD program provided me with fundamental concepts that I’ve employed in many contexts since. It was several years before I realized that there was a group of folks in AEA who shared my interests, and I’m fortunate to have found these people over the past few years.

Over the past couple of years, a number of Systems TIG folks have been working to articulate a set of systems principles that may help us operationalize and apply systems concepts to evaluation. The group has so far identified four: system boundaries, perspectives, interconnection, and dynamism. A draft statement, now being revised, was circulated last summer. (I shared that draft in a blog post at the time.) At this year’s conference, the TIG held a think tank session in which attendees considered specific applications of the principles; next will come another round of refinement of the principles statement. Beverly Parsons suggested two additional principles: holism (a system is not merely its parts) and power source (systems have external sources of power or animation).

Michael Quinn Patton has been talking about principles-focused evaluation lately, and the general concept of principles seems to be a useful frame for projects that are too varied or complex to be easily accommodated by logic models or theories of change. This aligns well with the effort to develop systems principles that can guide evaluation efforts.

What’s next?

I want/need to know more: a technique called “outcome harvesting” came up in several sessions this year. I may have heard about it in previous years, but it didn’t penetrate my awareness; this year it seemed to be everywhere. So I’m going to be on the lookout for an introduction to it.

There’s lots more to talk about, and I could go on all night, but I won’t. A few quick items, though: I attended a wonderful closing session consisting of skits to help us reflect on evaluation dilemmas. (Presenters included my UNCG colleague Ayesha Boyce, along with four others, and audience participants!) AEA is a wonderfully diverse group in many ways, which I greatly appreciate. I find my preconceptions challenged and always come away from the conference with new ways to think about evaluation and organizational change, and with strengthened connections with colleagues and friends.

Evaluation of adaptive versus technical challenges

Harvard leadership theorist Ronald Heifetz and his colleagues have advanced the idea that organizations face two kinds of challenges: technical and adaptive. A technical challenge is a problem that can be solved with an organizational tweak, the application of a well-known technique, or the like. A business might install a new machine on a production line, or a school might choose a textbook for sixth-grade English. Each is a technical problem with a technical solution, and neither requires a new way of doing business. (Here’s a video in which Heifetz discusses these two kinds of challenges.)

Adaptive challenges, however, require that people in the organization think and behave in new ways. A school might adopt a strategy in which teachers form professional learning communities and work together to identify and implement their own approaches to helping their students. A business might determine that its employees need to create close partnerships with customers in order to exploit the potential of emerging technologies. In other words, adaptive challenges require that organizations undergo fundamental change.

One reason organizations are unsuccessful in addressing change is that they sometimes attempt to apply technical solutions to adaptive challenges. A school might adopt a new curriculum, textbook, or testing system when what is really needed is to support teachers in engaging with their students in different ways. In such situations, a technical fix will not produce lasting, sustainable change.

This has implications for evaluators. A project designed to address an adaptive challenge should include an evaluation design that dives deeply into the organizational change processes necessary to the success of the effort. Measuring surface behaviors is unlikely to provide the insights needed to understand how the organization, and the people within it, change and grow to address the challenge. Evaluating adaptive change initiatives is likely to require a systems-aware approach to evaluation design, along with both qualitative and quantitative measures of internal processes, outcomes, and project impacts. Adaptive challenges often involve situations where solutions are not well defined at the outset, so an appropriate evaluation strategy must be able to accommodate solutions that emerge from trial and error.

New Systems Thinking in Evaluation document

Here is the result of a project of the American Evaluation Association’s Systems in Evaluation Topical Interest Group: Principles for Effective Use of Systems Thinking in Evaluation. The group worked together over several months after launching the project at last November’s AEA conference in Washington, DC.

A systems approach to evaluation seeks to apply general principles of systems to understanding the organizations and projects we evaluate. As we have conceived them for this purpose, the facets of systems are interrelationships among system components, the diverse perspectives of actors within systems, system boundaries, and system dynamics. By implementing evaluation strategies grounded in these facets, evaluators using systems approaches can provide valuable understanding of the complex interactions among project components.