Statistical analysis software on a budget

When I left my university position and hung out my independent evaluator shingle almost a decade ago, one of the big shocks was the non-academic prices of many of the software packages I had relied on. In particular, I had used SPSS for more than 30 years on university licenses that were either free or very low cost to me. But the current non-academic license for SPSS runs $99 per month (almost $1,200 per year) for the base system, with additional features costing extra each month.

In my practice, I occasionally do quantitative data analysis, but not enough to justify spending that kind of cash if I don’t have to! So, I’ve been looking at alternatives and wanted to share what I’ve found.

R

I’ve been intending to learn R one of these days, ever since I first heard about it in the late 2000s. And I still want to. From what my stats friends tell me, it’s really the best environment for doing data analysis. So, one of these days… But in the meantime:

Real Statistics

Real Statistics is a free (donations appreciated) Excel add-on with extensive documentation, developed and maintained by Dr. Charles Zaiontz, a researcher and research administrator who has taught at the University of South Florida and other institutions. It appears that Real Statistics is his labor of love—literally, since he credits his wife’s research as the inspiration for the Excel resource pack he created. To use Real Statistics, you download the package, activate Excel’s Solver add-in, and then install the Real Statistics resource pack. From within Excel, you invoke the Real Statistics popup menu, which provides access to all of its functions. You indicate the worksheet location of the data you want to analyze and the location where you want the output, select whatever options you need, and a moment later, there’s your analysis. I haven’t tested the output extensively, but I have compared the output of the procedures I use most often to that of other statistics programs and have gotten consistent results.
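
For readers who want to sanity-check the add-in’s numbers themselves, one option is to run the same test in another environment and compare. Below is a minimal sketch of that idea in Python, using the scipy and numpy libraries; the sample values are invented purely for illustration, and the point is simply that the t statistic and p value should match what Excel/Real Statistics reports for the same data and options.

```python
# Minimal sketch: independently re-running a two-sample t-test to compare
# against the add-in's output. The data values are invented for illustration.
import numpy as np
from scipy import stats

group_a = np.array([12.1, 14.3, 11.8, 13.5, 15.0, 12.9])
group_b = np.array([10.4, 11.7, 12.2, 10.9, 11.3, 12.0])

# Welch's t-test (unequal variances assumed), comparable to the
# unequal-variance two-sample option in Excel/Real Statistics.
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```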

As the Real Statistics website points out, all of the statistical functions actually exist in Excel, so the resource pack is not required to do the supported analyses. However, Real Statistics provides the formulas and output formats needed, so you don’t have to figure them out for yourself. In addition, the Real Statistics website is an excellent resource for understanding the statistical concepts and procedures involved; it’s a terrific informational resource even if you don’t use the add-in.

For me, Real Statistics turns Excel into a highly capable and efficient statistical analysis tool.

Colectica

A major barrier to using Excel for statistical analysis is the fact that it does not support importing SPSS and SAS files, and many data files are provided in one of those formats. In addition, Excel lacks the data documentation features that are included in other data file formats. The data definitions included in SPSS’s .SAV data files are really helpful, since they provide variable and value labels, coding information, etc. I recently received a .SAV file of data to analyze and, for various reasons, needed to convert the data to Excel format. Although I had tried this before, I decided to google “how to import SPSS files into Excel” one more time, without expecting to find anything useful. The search results included a link to an Excel add-in called Colectica, which turns out to be a great find.

This little gem of an add-in provided the features I needed for both importing SPSS files and documenting variable information. Colectica’s main purpose is to document Excel data files—it stores the same kinds of data definitions that are part of SPSS data files directly in Excel .xlsx files. It will even assist in building a coding scheme from coded data. Previously, I used Excel’s comment feature to store variable label and coding information, but this wasn’t always completely satisfactory or convenient, so Colectica’s data documentation features are worth getting for their own sake. The free version includes the data documentation features; the paid version adds the ability to import SPSS, SAS, and Stata data files directly into Excel, including both the data and the data definitions. A license for the import features costs $12 per year, or you can purchase a perpetual license for $49.
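
For anyone comfortable with a little scripting, the import step can also be approximated outside of Excel. The sketch below (in Python, assuming the pyreadstat, pandas, and openpyxl packages; the file names are hypothetical) reads a .sav file and writes the data plus a simple codebook sheet to an .xlsx file. This is not how Colectica works internally, just a rough do-it-yourself substitute for its import feature.

```python
# Rough sketch: convert a hypothetical SPSS .sav file to .xlsx, keeping
# the variable labels. Requires the pyreadstat, pandas, and openpyxl packages.
import pandas as pd
import pyreadstat

# Read the data along with its metadata (variable labels, value labels, etc.).
df, meta = pyreadstat.read_sav("survey_data.sav")  # hypothetical file name

# Build a simple codebook sheet from the metadata.
codebook = pd.DataFrame({
    "variable": meta.column_names,
    "label": meta.column_labels,
})

# Write the data and the codebook to separate sheets of one workbook.
with pd.ExcelWriter("survey_data.xlsx") as writer:
    df.to_excel(writer, sheet_name="data", index=False)
    codebook.to_excel(writer, sheet_name="codebook", index=False)
```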

Is Excel a credible alternative to SPSS?

I don’t know if a hard-core stats geek would think so, but since I started using the combination of Excel, Real Statistics, and Colectica, I haven’t yet found a need for anything else. So, I’m happy with this.

Other alternatives

Before settling on Excel with the Real Statistics and Colectica add-ins, I found several other helpful alternatives.

PSPP

The GNU PSPP application, developed by the Free Software Foundation, is an SPSS work-alike that can read and write SPSS data files and whose syntax is similar to SPSS’s, at least for the features it supports. Donations are requested, but it is free to download and use. If you know how to use SPSS, you’ll be able to use PSPP very quickly. It doesn’t have all of the features of SPSS, and the user interface is a little quirky sometimes, but the price is right, and unless you need advanced procedures, it’s an alternative to consider.

You can import Excel files into PSPP, but, at least at present, it’s not possible to export to an Excel file, which I find to be its most frustrating limitation (although, with Colectica, that’s less of a problem than it has been). I haven’t done extensive testing, but have compared the output of some SPSS and PSPP procedures and found that they were the same. So, to my knowledge, PSPP statistical output is accurate.

Websites

There are websites that offer almost every conceivable statistical test. Typically, you either download an app that runs locally on your computer, or upload your data to the website and run the procedure there. In the case of downloaded apps, I always check the files with antivirus software before opening or installing them. So far, I haven’t had any problems with viruses, trojans, etc., but it pays to be security-conscious. Likewise, when uploading data to a statistics website, I always make certain that there’s nothing identifiable in the data: just a list of numbers, none of which are identifiers.
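
The de-identification step is also easy to script if your data start out in a spreadsheet. Here’s a minimal sketch in Python with pandas; the file and column names are hypothetical, and you would adjust the identifier list to match whatever appears in your own file.

```python
# Minimal sketch: drop identifying columns before uploading data anywhere.
# File and column names are hypothetical examples.
import pandas as pd

df = pd.read_excel("survey_data.xlsx", sheet_name="data")

# Remove anything that could identify a participant, keeping only the
# variables needed for the analysis.
identifier_columns = ["name", "email", "student_id", "date_of_birth"]
deidentified = df.drop(columns=identifier_columns, errors="ignore")

deidentified.to_csv("upload_ready.csv", index=False)
```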

So, I’ve found that there’s a universe of reasonably efficient and accurate statistical software alternatives to the high-priced commercial analytics packages, which is good to know. But, yeah, one of these days I’m still going to get around to learning R…

The blame game

“We don’t blame dentists when we don’t brush properly and we get a cavity. So why do we blame teachers when kids don’t pass because they don’t study?” (A meme recently posted on Facebook.)

I agree with the main point of this meme: Teachers get all kinds of undeserved blame for situations they have little control over.

But, taking a step back, I got to thinking that we spend far too much time trying to figure out who’s to blame in all kinds of situations. If we put as much energy into finding solutions as we often do into figuring out who’s at fault, we could make great progress. (Actually, most of the blame game is really about absolving ourselves of responsibility. But we don’t have to accept responsibility for a problem in order to want to do something about it!)

Here’s a truth: Any system is perfectly aligned to get the results it gets. If we are dealing with a system problem, we’re unlikely to solve it without addressing system issues. When large numbers of people are not succeeding in a system, it’s unlikely that it’s because there is something wrong with them!

I think about some of my own relatives who struggled in school because of undiagnosed learning disabilities and other challenges, and who were thought to be uncooperative, when in fact, they were simply suffering.

This meme kind of assumes that failure has to be the fault of either teachers or students. Either way, unless we understand why some students don’t study, my guess is that we won’t fix the problem. The only thing the blame game accomplishes is to allow the rest of us to walk away from thinking compassionately and creatively about solutions.

Thoughts about the 2018 AEA conference

American Evaluation Association conference in Cleveland, October 31 – November 3, 2018

My homebody tendencies were whispering that I should stay home from the AEA conference this year, as they often do when I’m confronting the reality of a trip. But in the end I went, and as usual, it was greatly worthwhile.

The conference theme was “speaking truth to power,” which seemed timely, and also prompted a lot of reflection. Each of the terms in this phrase needs to be unpacked; even the construction of the phrase requires a critical look.

My conference mission included looking for new ideas and approaches to collecting richly detailed data from program participants, and to using evaluation to support positive change. The conference offered many opportunities in this regard. It was well worth the investment!

Root cause analysis/fishbone analysis

One session, “Critiquing adult participation in education through root cause analysis,” given by Margaret Patterson, introduced me to root cause analysis, also called fishbone analysis. The project she described involved asking small groups of participants to articulate the barriers they experienced to pursuing adult education opportunities. The “fishbone” diagrams that resulted from these discussions provided insights into her participants’ perceptions. I think this technique can be a really powerful way to give people opportunities for significant input into decisions that affect them. I look forward to being able to experiment with this approach.

(As an aside, the presenter also mentioned that the participants had discussed ideas for addressing some of the identified issues. Some of these ideas sounded really creative, and they set me to thinking that many projects intended to address social problems are predicated on the idea that a social service organization or agency will identify needed services and then provide them. But I wonder if sometimes it would be better to support communities in identifying their own solutions and developing their own capacities, rather than applying someone else’s solutions.)

Unlearning

Another provocative framework was introduced as “unlearning,” in a workshop by Chera Reid, Anna Cruz, Maria Rosario Jackson, and one other presenter whose name was not on the program (and which, unfortunately, I didn’t write down). The workshop was called “Accepting the invitation to unlearn: Insights from seekers of systems change.” They encouraged us to think about and experience our work with new eyes. One of the presenters led us through a Feldenkrais exercise in which we were given the opportunity to connect our minds with our bodily sensations of breath and movement. I like the “unlearning” frame, in part because I agree with the notion that a lot of the things we “know” may not actually be true, or at least, perhaps, not all there is to know. I suspect I have a lot of unlearning to do…

Photovoice

Immediately following this session, a group of grad students from UNC-Greensboro presented a photovoice project in which they shared photographs that symbolized the conference theme.

Systems in Evaluation

My professional home in AEA is the Systems in Evaluation Topical Interest Group (SETIG). My master’s degree was in Antioch University Seattle’s Whole Systems Design program, which allowed me to delve into the application of systems concepts, often in organizational contexts. Although I didn’t study (or even know about) evaluation at that juncture, the WSD program provided me with fundamental concepts that I’ve employed in many contexts since. It was several years before I realized that there was a group of folks in AEA who shared my interests, and for the past few years, I’ve been fortunate to have found these people.

Over the past couple of years, a number of Systems TIG folks have been working to articulate a set of systems principles that may help us operationalize and apply systems concepts to evaluation. The group has identified four so far: system boundaries, perspectives, interconnection, and dynamism. A draft statement, now being revised, was circulated last summer (I shared it in a blog post at the time). At the conference, the TIG held a think tank session in which attendees considered specific applications of the principles; next will be another round of refinement of the principles statement. Beverly Parsons suggested two additional principles: holism–a system is not its parts; and power source–systems have external sources of power or animation.

Michael Quinn Patton has been talking about principle-focused evaluation lately, and the general concept of principles seems to be quite a useful frame for projects that are too varied or complex to be easily accommodated by logic models or theories of change. This seems well-aligned with the effort to develop principles of systems that can guide evaluation efforts.

What’s next?

I want/need to know more: There were several mentions of a technique called “outcomes harvesting” in a variety of sessions this year. I may have heard about this in previous years, but it didn’t penetrate my awareness. In any case, this year it seemed to be everywhere. So, I’m going to be on the lookout for an introduction to this.

There’s lots more to talk about, and I could go on all night, but I won’t. A few quick items, though: I attended a wonderful closing session consisting of skits to help us reflect on evaluation dilemmas. (Presenters included my UNCG colleague Ayesha Boyce, along with four others, and audience participants!) AEA is a wonderfully diverse group in many ways, which I greatly appreciate. I find my preconceptions challenged and always come away from the conference with new ways to think about evaluation and organizational change, and with strengthened connections with colleagues and friends.

Learning without being taught

I have long been an admirer of Victor Wooten’s bass playing, ever since my brother-in-law gave me a Bela Fleck and the Flecktones CD, many years ago. Wooten is a brilliant musician. (Well, actually, so is my brother-in-law!)

Last night as I was randomly searching through TED talks on YouTube, I noticed that he had done one entitled “Music as a Language.” I watched it and was completely enthralled.

Wooten points out that we learn language without being “taught,” and describes how he learned to play music by growing up in a music-rich environment. His focus in this talk is on learning to express the music that is within each of us, and I think his insights could easily apply to life and learning much more broadly.

The thought emerged for me: What would it be like if schools were organized based on this insight? In a way, that’s the philosophy of makerspaces and language immersion programs. What would it be like if we stopped making children learn, and started letting them learn instead?

Here’s a video of Wooten’s TED talk.

Grades, indicators, and learning

Here’s an interesting (to me) reflective piece by author Dan Houck about grading, from Inside Higher Education. I think the writer has expressed the misgivings many educators feel about distilling the complex phenomenon of student learning down to a single letter or number.

I’m reminded of a conversation I once had with Peter Ewell, when I was working at a small liberal arts college in the Midwest and he was helping us think about related issues. We got onto the subject of indicators, of which grading is a good example–measures that stand in for more complex phenomena which are not easily measured. Dr. Ewell pointed out that indicators are reasonably useful until consequences are attached to them. When that happens, powerful motivations arise to distort the inputs to the measures, or to the measures themselves. We start to pay more attention to the indicator than to the underlying phenomenon it is meant to illuminate.

In the case of grading, we start to pay attention to the letter or number rather than to the learning or performance it is supposed to indicate, with all kinds of unintended consequences. As an example, there have been numerous stories in the media about the Hope Scholarship’s requirement that students maintain a “B” average, and about scholarship students choosing less challenging coursework to ensure that they maintain their eligibility. At least in some cases, paying attention to the indicator increases risk-aversion: rather than taking the harder but possibly more rewarding courses, students choose the ones that protect their average.

In a way, though, the question of how, or whether, to grade students should be accompanied by the more central question of how to engage students in meaningful and valuable learning, as I think Mr. Houck articulates in his reflections. What will help the students find their own reasons to want to squeeze every drop of value out of their school experiences? How best can their teachers help them do that? How can professors, schools, and colleges design learning environments where that is at the top of the agenda?

Evaluation of adaptive versus technical challenges

Harvard leadership theorist Ronald Heifetz and his colleagues have advanced the idea that organizations face two kinds of challenges: technical and adaptive. A technical challenge is a problem that requires some kind of organizational tweak, the application of a well-known technique, or the like. A business might decide to install a new machine on a production line, or a school might choose a textbook for sixth-grade English. It’s a technical problem with a technical solution, one that does not require a new way of doing business. (Here’s a video in which Heifetz discusses these two kinds of challenges.)

Adaptive challenges, however, require that people in the organization think and behave in new ways. A school might adopt a strategy in which teachers form professional learning communities and work together to identify and implement their own approaches to helping their students. A business might determine that its employees need to create close partnerships with customers in order to exploit the potential of emerging technologies. In other words, adaptive challenges require that organizations undergo fundamental change.

One reason that organizations are unsuccessful in addressing change is that they sometimes attempt to apply technical solutions to adaptive challenges. A school might adopt a new curriculum, or textbook, or testing system, when what is really needed is to support teachers to engage with their students in different ways. In such situations, a technical fix will not provide lasting, sustainable change.

This has implications for evaluators. A project designed to address an adaptive challenge should include an evaluation design that dives deeply into the organizational change processes necessary to the success of the effort. Measuring surface behaviors is unlikely to provide the insights needed to understand how the organization, and the people within it, change and grow to address the challenge. Evaluating adaptive change initiatives is likely to require a systems-aware approach to evaluation design, with both qualitative and quantitative measures of internal processes, outcomes, and project impacts. Adaptive challenges often involve situations where solutions are not well-defined at the outset, so an appropriate evaluation strategy must be able to accommodate solutions that emerge from trial and error.

Validation of research instruments

Lately I’ve been working with colleagues on a validation study of a classroom observation instrument they developed for use in undergraduate mathematics classes. While thinking about the basic question (what is validity?), I came across a seminal article on the subject by Samuel Messick (1995, “Validity of psychological assessment,” American Psychologist, 50(9), 741-749). He said, “Validity is not a property of the test or assessment as such, but rather of the meaning of the test scores” (p. 741). In other words, validity is established for particular uses of instruments and particular interpretations of results. It’s inappropriate to say that a survey or test has been validated; rather, you have to specify how the instrument will be used and how the results will be interpreted.

A good example of this is a hypothetical validation study of a foot-long ruler. The relevant validation question might be: does this instrument accurately measure distances? But this question is not focused enough. Our validation study would find that the ruler is a valid instrument for measuring distances between, say, 1/16 of an inch and 12 inches, along a more-or-less straight line or flat surface, to an accuracy of perhaps 1/32″. So, within those parameters, the ruler is a “valid” instrument. Or rather, it will yield valid measurements within those constraints. On the other hand, it would not yield accurate or useful results if you wanted to measure distances of more than a few feet, or on a curved or irregular surface, or, say, the thickness of a piece of wire. So, for those uses it is not a valid instrument.

There is a tendency in the social sciences to use measurements for all kinds of things for which they are not valid measures. For example, standardized tests of student content knowledge may (or may not) be valid measures of the educational achievement of groups of students. Because of the technical qualities of the tests, those same measures may not be valid measures of individual students, or of the competence of their teachers, or of the quality of their schools. Researchers and evaluators should not ask, “Is this a validated instrument?” Rather, the question is, “Is this instrument valid for my proposed purpose?”

How to make sure that PBL is applied well in schools

Helpful perspective from MindShift at KQED by Thom Markham of PBL Global. This article responds to some current critiques of PBL, but focuses mostly on how educators’ perspectives need to change in order to implement PBL successfully. As he says, the point of PBL is not to sneakily manipulate students into learning that is focused on the standards, but to go far deeper into real learning that results in high levels of self-directed learning. This is not a long piece, but it has a lot of good ideas.

Project-Based Learning (PBL) resources

Project-based learning (PBL) is an approach to teaching and learning that engages students in ways that traditional classrooms often don’t. One group that provides resources to help teachers create and implement PBL is the Buck Institute for Education. They have created a framework called “Gold-Standard” PBL, and many of the resources they offer are free on their website.

Also, Thom Markham is a PBL advocate, leadership coach, and prolific tweeter. His organization is PBL Global.

The CSTP Teacher Leadership Framework

I’ve been working with some mathematics educators recently on a grant proposal to develop teacher leadership in mathematics and found an organization that’s doing great work in teacher leadership, the Center for Strengthening the Teaching Profession (CSTP). They’ve developed a set of tools to help implement and evaluate their Teacher Leadership Framework. If this project is funded, I hope we’ll be able to use this framework to help understand the teacher leadership development process.

I’d be interested to hear from others who are using this or other frameworks to develop or study teacher leadership.