Thoughts about the 2018 AEA conference

American Evaluation Association conference in Cleveland, October 31 – November 3, 2018

My homebody tendencies were whispering that I should stay home from the AEA conference this year, as they often do when I’m confronting the reality of a trip. But in the end I went, and as usual, it was greatly worthwhile.

The conference theme was “speaking truth to power,” which seemed timely, and also prompted a lot of reflection. Each of the terms in this phrase needs to be unpacked; even the construction of the phrase requires a critical look.

My conference mission included looking for new ideas and approaches to collecting richly detailed data from program participants, and to using evaluation to support positive change. The conference offered many opportunities in this regard. It was well worth the investment!

Root cause analysis/fishbone analysis

One session, “Critiquing adult participation in education through root cause analysis,” given by Margaret Patterson, introduced me to root cause analysis, often conducted with fishbone (Ishikawa) diagrams. The project she described involved asking small groups of participants to articulate the barriers they experienced to pursuing adult education opportunities. The “fishbone” diagrams that resulted from these discussions provided insights into her participants’ perceptions. I think this technique can be a really powerful way to give people significant input into decisions that affect them, and I look forward to experimenting with it.
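
Out of curiosity, I sketched how the raw input for a fishbone diagram might be organized for analysis. Below is a minimal illustration in Python; the problem statement, barrier categories, and causes are invented for the example, not data from Patterson’s project.

```python
# A minimal sketch of how fishbone / root cause analysis input might be captured.
# The problem statement, categories, and barriers below are invented examples.
from collections import defaultdict

problem = "Adults not enrolling in education programs"  # the "head" of the fish

# Each (category, cause) pair is one barrier named by a participant group;
# the categories become the "bones" of the diagram.
barriers = [
    ("Logistics", "No transportation to evening classes"),
    ("Logistics", "Class schedule conflicts with work shifts"),
    ("Family", "No affordable child care"),
    ("Finances", "Tuition and materials costs"),
    ("Information", "Unaware that programs exist"),
]

bones = defaultdict(list)
for category, cause in barriers:
    bones[category].append(cause)

# A plain-text outline of the diagram: the problem, then each bone with its causes.
print(f"Problem: {problem}")
for category, causes in bones.items():
    print(category)
    for cause in causes:
        print(f"  - {cause}")
```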

(As an aside, the presenter also mentioned that the participants had discussed ideas for addressing some of the identified issues. Some of these ideas sounded really creative, and they set me to thinking that many projects intended to address social problems are predicated on the idea that a social service organization or agency will identify needed services and then provide them. I wonder whether it would sometimes be better to support communities in identifying their own solutions and developing their own capacities, rather than applying someone else’s solutions.)

Unlearning

Another provocative framework was introduced as “unlearning,” in a workshop by Chera Reid, Anna Cruz, Maria Rosario Jackson, and one other presenter whose name was not on the program (and which, unfortunately, I didn’t write down). The workshop was called “Accepting the invitation to unlearn: Insights from seekers of systems change.” The presenters encouraged us to think about and experience our work with new eyes. One of them led us through a Feldenkrais exercise in which we connected our minds with our bodily sensations of breath and movement. I like the “unlearning” frame, in part because I agree that a lot of the things we “know” may not actually be true, or at least may not be all there is to know. I suspect I have a lot of unlearning to do…

Photovoice

Immediately following this session, a group of grad students from UNC-Greensboro presented a photovoice project in which they shared photographs that symbolized the conference theme.

Systems in Evaluation

My professional home in AEA is the Systems in Evaluation Topical Interest Group (SETIG). My master’s degree was in Antioch University Seattle’s Whole Systems Design program, which allowed me to delve into the application of systems concepts, often in organizational contexts. Although I didn’t study (or even know about) evaluation at that juncture, the WSD program provided me with fundamental concepts that I’ve employed in many contexts since. It was several years before I realized that there was a group of folks in AEA who shared my interests, and for the past few years I’ve been fortunate to count myself among them.

Over the past couple of years, a number of Systems TIG folks have been working to articulate a set of systems principles that may help us operationalize and apply systems concepts in evaluation. The group has identified four so far: system boundaries, perspectives, interconnection, and dynamism. A draft statement, circulated last summer (and shared in a blog post here at the time), is now being revised. The TIG held a think tank session in which attendees considered specific applications of the principles; the next step will be another round of refinement of the principles statement. Beverly Parsons suggested two additional principles: holism (a system is not its parts) and power source (systems have external sources of power or animation).

Michael Quinn Patton has been talking about principle-focused evaluation lately, and the general concept of principles seems to be quite a useful frame for projects that are too varied or complex to be easily accommodated by logic models or theories of change. This seems well-aligned with the effort to develop principles of systems that can guide evaluation efforts.

What’s next?

I want (and need) to know more: a technique called “outcome harvesting” came up in several sessions this year. I may have heard about it in previous years, but it didn’t penetrate my awareness; this year it seemed to be everywhere. So I’m going to be on the lookout for an introduction to it.

There’s lots more to talk about, and I could go on all night, but I won’t. A few quick items, though: I attended a wonderful closing session consisting of skits to help us reflect on evaluation dilemmas. (Presenters included my UNCG colleague Ayesha Boyce, along with four others, and audience participants!) AEA is a wonderfully diverse group in many ways, which I greatly appreciate. I find my preconceptions challenged and always come away from the conference with new ways to think about evaluation and organizational change, and with strengthened connections with colleagues and friends.

Learning without being taught

I have long been an admirer of Victor Wooten’s bass playing, ever since my brother-in-law gave me a Béla Fleck and the Flecktones CD many years ago. He’s a brilliant musician.

Last night as I was randomly searching through TED talks on YouTube, I noticed that he had done one entitled “Music as a Language.” I watched it and was completely enthralled.

Wooten points out that we learn language without being “taught,” and describes how he learned to play music by growing up in a music-rich environment. His focus in this talk is on learning to express the music that is within each of us, and I think his insights could easily apply to life and learning much more broadly.

The thought emerged for me: What would it be like if schools were organized based on this insight? In a way, that’s the philosophy of makerspaces and language immersion programs. What would it be like if we stopped making children learn, and started letting them learn instead?

Here’s a video of Wooten’s TED talk.

Grades, indicators, and learning

Here’s an interesting (to me) reflective piece by author Dan Houck about grading, from Inside Higher Education. I think the writer has expressed the misgivings many educators feel about distilling the complex phenomenon of student learning down to a single letter or number.

I’m reminded of a conversation I once had with Peter Ewell, when I was working at a small liberal arts college in the Midwest and he was helping us think about related issues. We got onto the subject of indicators, of which grades are a good example: measures that stand in for more complex phenomena that are not easily measured directly. Dr. Ewell pointed out that indicators are reasonably useful until consequences are attached to them. When that happens, powerful motivations arise to distort the inputs to the measures, or the measures themselves. We start to pay more attention to the indicator than to the underlying phenomenon it is meant to illuminate.

In the case of grading, we start to pay attention to the letter or number rather than to the learning or performance it is supposed to indicate, with all kinds of unintended consequences. As an example, there have been numerous stories in the media about the Hope Scholarship’s requirement that students maintain a “B” average, and about scholarship students choosing less challenging coursework to ensure that they maintain their eligibility. At least in some cases, attention to the indicator increases risk-aversion: students pass up hard, but possibly more rewarding, courses in order to protect their average.
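
To make the mechanism concrete, here is a quick sketch with invented numbers (the GPA, credit count, threshold, and expected grades are hypothetical, not drawn from any actual scholarship’s rules). A student sitting just above the cutoff can project how each course choice would move the credit-weighted average, and the safe choice wins.

```python
# Hypothetical illustration of indicator-driven risk aversion; all numbers are invented.
# A student weighs a hard course (expected C) against an easy course (expected A)
# when a scholarship requires maintaining a 3.0 ("B") average.

def updated_gpa(current_gpa: float, credits_so_far: int,
                new_grade_points: float, new_credits: int = 3) -> float:
    """Recompute a credit-weighted GPA after adding one course."""
    total_points = current_gpa * credits_so_far + new_grade_points * new_credits
    return total_points / (credits_so_far + new_credits)

current_gpa, credits = 3.05, 30
threshold = 3.0  # the consequence-laden indicator

for course, grade_points in [("challenging course (expected C)", 2.0),
                             ("easy course (expected A)", 4.0)]:
    projected = updated_gpa(current_gpa, credits, grade_points)
    status = "keeps" if projected >= threshold else "risks losing"
    print(f"{course}: projected GPA {projected:.2f} -> {status} eligibility")
```

The point is not the arithmetic, of course; it is that once eligibility hinges on the number, the number starts driving the choices.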

In a way, though, the question of how–or whether–to grade students should be accompanied by the more central question of how to engage students in meaningful and valuable learning, as I think Mr. Houck articulates in his reflections. What will help students find their own reasons to want to squeeze every drop of value out of their school experiences? How best can their teachers help them do that? How can professors, schools, and colleges design learning environments where that is at the top of the agenda?

Evaluation of adaptive versus technical challenges

Harvard leadership theorist Ronald Heifetz and his colleagues have advanced the idea that organizations face two kinds of challenges: technical and adaptive. A technical challenge is a problem that requires some kind of organizational tweak, the application of a well-known technique, or the like. A business might decide to install a new machine on a production line, or a school might choose a textbook for sixth-grade English. It’s a technical problem calling for a technical solution, one that does not require a new way of doing business. (Here’s a video in which Heifetz discusses these two kinds of challenges.)

Adaptive challenges, however, require that people in the organization think and behave in new ways. A school might adopt a strategy in which teachers form professional learning communities and work together to identify and implement their own approaches to helping their students. A business might determine that its employees need to create close partnerships with customers in order to exploit the potential of emerging technologies. In other words, adaptive challenges require that organizations undergo fundamental change.

One reason that organizations are unsuccessful in addressing change is that they sometimes attempt to apply technical solutions to adaptive challenges. A school might adopt a new curriculum, or textbook, or testing system, when what is really needed is to support teachers to engage with their students in different ways. In such situations, a technical fix will not provide lasting, sustainable change.

This has implications for evaluators. A project that is designed to address an adaptive challenge should include an evaluation design that dives deeply into the organizational change processes necessary to the success of the effort. Measuring surface behaviors will be unlikely to provide the insights needed to understand how the organization, and the people within it, change and grow to address the challenge. Evaluating adaptive change initiatives is likely to require a systems-aware approach to evaluation design, and both qualitative and quantitative measures of internal processes, outcomes, and project impacts. Adaptive challenges often involve situations where solutions are not well-defined at the outset–so an appropriate evaluation strategy must be able to accommodate solutions that emerge from trial and error.

Validation of research instruments

Lately I’ve been working with colleagues on a validation study of a classroom observation instrument they developed for use in undergraduate mathematics classes. While thinking about the basic question of what validity is, I came across a seminal article on the subject by Samuel Messick (1995, “Validity of psychological assessment,” American Psychologist, 50(9), 741–749). He said, “Validity is not a property of the test or assessment as such, but rather of the meaning of the test scores” (p. 741). In other words, validity is established for particular uses of instruments and interpretations of results. It’s inappropriate to say that a survey or test has been validated; rather, you have to specify how the instrument will be used and how the results will be interpreted.

A good example of this is a hypothetical validation study of a foot-long ruler. The relevant validation question might be: Does this instrument accurately measure distances? But this question is not focused enough. Our validation study would find that the ruler is a valid instrument for measuring distances between, say, 1/16 of an inch and 12 inches, along a more-or-less straight line on a flat surface, to an accuracy of perhaps 1/32″. So, within those parameters, the ruler is a “valid” instrument. Or rather, it will yield valid measurements within those constraints. On the other hand, it would not yield accurate or useful results if you wanted to measure distances of more than a few feet, or on a curved or irregular surface, or, say, the thickness of a piece of wire. For those uses it is not a valid instrument.
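
Just to make the point concrete, here’s a toy sketch that encodes the ruler example in code. The “validated” parameters and the proposed uses are hypothetical, and the validity judgment attaches to each use, not to the ruler itself.

```python
# A toy illustration of the idea that validity attaches to a use, not to an instrument.
# The ruler's "validated" parameters below are hypothetical, echoing the example above.
from dataclasses import dataclass

@dataclass
class ValidatedRange:
    """Conditions under which an instrument's measurements can be trusted."""
    min_inches: float
    max_inches: float
    surface: str            # e.g., "flat"
    accuracy_inches: float  # finest accuracy the instrument delivers

RULER = ValidatedRange(min_inches=1 / 16, max_inches=12.0,
                       surface="flat", accuracy_inches=1 / 32)

def is_valid_use(rng: ValidatedRange, length_inches: float,
                 surface: str, needed_accuracy: float) -> bool:
    """Judge a proposed use: within range, right kind of surface,
    and not demanding finer accuracy than the instrument provides."""
    return (rng.min_inches <= length_inches <= rng.max_inches
            and surface == rng.surface
            and needed_accuracy >= rng.accuracy_inches)

# A use within the validated parameters...
print(is_valid_use(RULER, length_inches=8.5, surface="flat", needed_accuracy=1 / 16))   # True
# ...and uses outside them: too long, wrong surface, or too fine-grained.
print(is_valid_use(RULER, length_inches=60.0, surface="flat", needed_accuracy=1 / 16))  # False
print(is_valid_use(RULER, length_inches=6.0, surface="curved", needed_accuracy=1 / 16)) # False
print(is_valid_use(RULER, length_inches=0.02, surface="flat", needed_accuracy=1 / 64))  # False
```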

There is a tendency in the social sciences to use measurements for all kinds of things for which they are not valid measures. For example, standardized tests of student content knowledge may (or may not) be valid measures of the educational achievement of groups of students. Because of the technical qualities of the tests, those same measures may not be valid measures of individual students, or of the competence of their teachers, or of the quality of their schools. Researchers and evaluators should not ask, “Is this a validated instrument?” Rather, the question is, “Is this instrument valid for my proposed purpose?”

How to make sure that PBL is applied well in schools

Helpful perspective from Mind/Shift at KQED by Thom Markham of PBL Global. This article responds to some current critiques of PBL, but focuses mostly on how educators’ perspectives need to change in order to implement PBL successfully. As he says, the point of PBL is not to sneakily manipulate students into learning that is focused on the standards, but to go much deeper, toward real learning and a high level of self-direction. It’s not a long piece, but it has a lot of good ideas.

Project-Based Learning (PBL) resources

Project-based learning (PBL) is an approach to teaching and learning that engages students in ways that traditional classrooms often don’t. One group that provides resources to help teachers create and implement PBL is the Buck Institute for Education. They have created a framework called “Gold-Standard” PBL, and many of the resources they offer are free on their website.

Also, Thom Markham is a PBL advocate, leadership coach, and prolific tweeter. His organization is PBL Global.

The CSTP Teacher Leadership Framework

I’ve been working with some mathematics educators recently on a grant proposal to develop teacher leadership in mathematics, and I found an organization that’s doing great work in this area: the Center for Strengthening the Teaching Profession (CSTP). They’ve developed a set of tools to help implement and evaluate their Teacher Leadership Framework. If this project is funded, I hope we’ll be able to use this framework to help understand the teacher leadership development process.

I’d be interested to hear from others who are using this or other frameworks to develop or study teacher leadership.

Undergraduate engineering students research smart structures

This summer, undergraduate engineering students participated in a collaborative research experience, an innovative partnership between engineering companies, the University of South Carolina, and San Francisco State University. They learned about smart structures technology (SST) that helps buildings withstand earthquakes and other natural events.