Here’s an interesting (to me) reflective piece about grading by Dan Houck, from Inside Higher Ed. I think the writer has expressed the misgivings many educators feel about distilling the complex phenomenon of student learning down to a single letter or number.
I’m reminded of a conversation I once had with Peter Ewell, when I was working at a small liberal arts college in the Midwest and he was helping us think about related issues. We got onto the subject of indicators, of which grading is a good example: measures that stand in for more complex phenomena that are not easily measured directly. Dr. Ewell pointed out that indicators are reasonably useful until consequences are attached to them. When that happens, powerful motivations arise to distort the inputs to the measures, or the measures themselves. We start to pay more attention to the indicator than to the underlying phenomenon it is meant to illuminate.
In the case of grading, we start to pay attention to the letter or number rather than to the learning or performance it is supposed to indicate, with all kinds of unintended consequences. For example, there have been numerous stories in the media about the HOPE Scholarship’s requirement that students maintain a “B” average, and about scholarship students choosing less challenging coursework to ensure that they keep their eligibility. At least in some cases, attention to the indicator increases risk-aversion: rather than taking hard but possibly more rewarding courses, students play it safe to protect the number.
In a way, though, the question of how, or whether, to grade students should be accompanied by the more central question of how to engage students in meaningful and valuable learning, as I think Mr. Houck articulates in his reflections. What will help students find their own reasons to want to squeeze every drop of value out of their school experiences? How best can their teachers help them do that? How can professors, schools, and colleges design learning environments where that is at the top of the agenda?
Harvard leadership theorist Ronald Heifetz and his colleagues have advanced the idea that organizations face two kinds of challenges: technical and adaptive. A technical challenge is a problem that requires an organizational tweak, the application of a well-known technique, or the like. A business might install a new machine on a production line, or a school might choose a textbook for sixth-grade English. Either is a technical problem with a technical solution; neither demands a new way of doing business. (Here’s a video in which Heifetz discusses these two kinds of challenges.)
Adaptive challenges, however, require that people in the organization think and behave in new ways. A school might adopt a strategy in which teachers form professional learning communities and work together to identify and implement their own approaches to helping their students. A business might determine that its employees need to create close partnerships with customers in order to exploit the potential of emerging technologies. In other words, adaptive challenges require that organizations undergo fundamental change.
One reason organizations fail to change is that they sometimes apply technical solutions to adaptive challenges. A school might adopt a new curriculum, textbook, or testing system when what is really needed is to support teachers in engaging with their students in different ways. In such situations, a technical fix will not produce lasting, sustainable change.
This has implications for evaluators. A project designed to address an adaptive challenge should include an evaluation design that dives deeply into the organizational change processes necessary to the success of the effort. Measuring surface behaviors is unlikely to provide the insights needed to understand how the organization, and the people within it, change and grow to address the challenge. Evaluating adaptive change initiatives is likely to require a systems-aware approach to evaluation design, with both qualitative and quantitative measures of internal processes, outcomes, and project impacts. Adaptive challenges often involve situations where solutions are not well defined at the outset, so an appropriate evaluation strategy must be able to accommodate solutions that emerge from trial and error.
Lately I’ve been working with colleagues on a validation study of a classroom observation instrument they developed for use in undergraduate mathematics classes. While thinking about the basic question (what is validity?), I came across a seminal article on the subject by Samuel Messick (1995, “Validity of psychological assessment,” American Psychologist, 50(9), 741–749). He said, “Validity is not a property of the test or assessment as such, but rather of the meaning of the test scores” (p. 741). In other words, validity is established for uses of instruments and interpretations of results. It’s inappropriate to say that a survey or test has been validated; rather, you have to specify how the instrument will be used and how the results will be interpreted.
A good example of this is a hypothetical validation study of a foot-long ruler. The relevant validation question might be: does this instrument accurately measure distances? But this question is not focused enough. Our validation study would find that the ruler is a valid instrument for measuring distances between, say, 1/16 of an inch and 12 inches, along a more-or-less straight line or flat surface, to an accuracy of perhaps 1/32″. So, within those parameters, the ruler is a “valid” instrument. Or rather, it will yield valid measurements within those constraints. On the other hand, it would not yield accurate or useful results if you wanted to measure distances of more than a few feet, or on a curved or irregular surface, or, say, the thickness of a piece of wire. For those uses, it is not a valid instrument.
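To make the point concrete, here is a toy sketch in Python. It is entirely my own illustration, not anything from our validation study; only the ruler’s range and accuracy come from the example above, and all names in it are invented. The idea is to encode the validated conditions of use as data, so the question shifts from “is this a valid instrument?” to “is this a valid use?”

```python
# Toy illustration: validity attaches to a proposed *use* of an
# instrument, not to the instrument itself. Names and numbers below
# (other than the ruler's parameters) are invented for this sketch.

from dataclasses import dataclass

@dataclass
class ValidatedUse:
    """Conditions under which an instrument yields valid measurements."""
    min_length_in: float   # smallest distance it can resolve (inches)
    max_length_in: float   # longest distance it can span (inches)
    accuracy_in: float     # best-case measurement error (inches)
    flat_surface: bool     # validated only along straight, flat lines

# The hypothetical foot-long ruler from the example above.
FOOT_RULER = ValidatedUse(
    min_length_in=1 / 16,   # ~1/16 inch
    max_length_in=12.0,     # 12 inches
    accuracy_in=1 / 32,     # ~1/32 inch
    flat_surface=True,
)

def is_valid_use(use: ValidatedUse, length_in: float,
                 needed_accuracy_in: float, surface_is_flat: bool) -> bool:
    """True only if the proposed measurement falls inside the validated range."""
    return (use.min_length_in <= length_in <= use.max_length_in
            and needed_accuracy_in >= use.accuracy_in
            and (surface_is_flat or not use.flat_surface))

# Measuring a 10.5" board to within 1/16": inside the validated range.
print(is_valid_use(FOOT_RULER, 10.5, 1 / 16, surface_is_flat=True))   # True
# Measuring wire thickness (~0.01"): far below the resolvable range.
print(is_valid_use(FOOT_RULER, 0.01, 0.001, surface_is_flat=True))    # False
```

The same framing carries over to tests and surveys: validation evidence marks out a region of legitimate uses and interpretations, and any use outside that region needs its own evidence.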
There is a tendency in the social sciences to use measures for all kinds of purposes for which they are not valid. For example, standardized tests of student content knowledge may (or may not) be valid measures of the educational achievement of groups of students. Because of the technical qualities of the tests, those same measures may not be valid measures of individual students, or of the competence of their teachers, or of the quality of their schools. Researchers and evaluators should not ask, “Is this a validated instrument?” Rather, the question is, “Is this instrument valid for my proposed purpose?”
Helpful perspective from MindShift at KQED by Thom Markham of PBL Global. The article responds to some current critiques of PBL, but focuses mostly on the ways educators’ perspectives need to change in order to implement PBL successfully. As he says, the point of PBL is not to sneakily manipulate students into learning that is focused on the standards, but to go far deeper into real learning and high levels of self-direction. It’s not a long piece, but it has a lot of good ideas.
Project-based learning (PBL) is an approach to teaching and learning that engages students in ways traditional classrooms often don’t. One group that provides resources to help teachers create and implement PBL is the Buck Institute for Education. They have created a framework called “Gold Standard PBL,” and many of the resources they offer are free on their website.
Also, Thom Markham is a PBL advocate, leadership coach, and prolific tweeter. His organization is PBL Global.
I’ve been working with some mathematics educators recently on a grant proposal to develop teacher leadership in mathematics, and in the process found an organization doing great work in this area: the Center for Strengthening the Teaching Profession (CSTP). They’ve developed a set of tools to help implement and evaluate their Teacher Leadership Framework. If the project is funded, I hope we’ll be able to use this framework to help understand the teacher leadership development process.
I’d be interested to hear from others who are using this or other frameworks to develop or study teacher leadership.
This summer, undergraduate engineering students took part in a collaborative research experience, an innovative partnership between engineering companies, the University of South Carolina, and San Francisco State University. They were learning about smart structures technology (SST), which helps buildings withstand earthquakes and other natural hazards.
The Transforming Teaching through Technology project at the University of North Carolina at Greensboro has sponsored makercamps for the past several years. The camps are a wonderful opportunity for kids to learn by using technology creatively, and for teachers to experiment with new ways to teach.
Just spent a couple of really good days with 45+ Career Development Facilitators and Career Development Specialists from public schools around South Carolina, who were exploring ways to use project-based learning (PBL) in collaboration with teachers to help kids learn about career possibilities. A really great group of folks making a difference in the lives of kids!
A PD workshop sponsored by the AWAKE Center of Excellence at the University of South Carolina.
Here is the result of a project by the American Evaluation Association’s Systems in Evaluation Topical Interest Group: Principles for Effective Use of Systems Thinking in Evaluation. The group worked together over several months after launching the project at last November’s AEA conference in Washington, DC.
A systems approach to evaluation seeks to apply general principles of systems to understanding the organizations and projects we evaluate. As we have conceived them for this purpose, the facets of systems are interrelationships among system components, the diverse perspectives of actors within systems, system boundaries, and system dynamics. By implementing evaluation strategies grounded in these facets, evaluators using systems approaches can provide valuable understanding of the complex interactions among project components.