Statistical analysis software on a budget

When I left my university position and hung out my independent evaluator shingle almost a decade ago, one of the big shocks was the non-academic prices of many of the software packages I had relied on. In particular, I had used SPSS for more than 30 years on university licenses that were either free or very low cost to me. But the current non-academic license for SPSS runs $99 per month (almost $1200 per year) for the base system, with additional features requiring additional monthly amounts.

In my practice, I occasionally do quantitative data analysis, but not enough to justify spending that kind of cash if I don’t have to! So, I’ve been looking at alternatives and wanted to share what I’ve found.

R

I’ve been intending to learn R one of these days, ever since I first heard about it in the late 2000s. And I still want to. From what my stats friends tell me, it’s really the best environment for doing data analysis. So, one of these days… But in the meantime:

Real Statistics

Real Statistics is a free (donations appreciated) Excel add-in with extensive documentation, developed and maintained by Dr. Charles Zaiontz, a researcher and research administrator who has taught at the University of South Florida and other institutions. Real Statistics appears to be his labor of love, quite literally, since he credits his wife’s research as the inspiration for the resource pack he created. To use Real Statistics, you download the package, activate Excel’s Solver add-in, and then install the Real Statistics resource pack. From within Excel, you invoke the Real Statistics popup menu, which provides access to all of the package’s functions. You indicate where in the worksheet the data you want to analyze are located and where you want the output, select whatever options you need, and a moment later, there’s your analysis. I haven’t tested the output exhaustively, but I have compared the procedures I use most often against the output of other statistics programs and gotten consistent results.
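To illustrate the kind of spot check I mean, here’s a minimal sketch in Python (the file name and column names are hypothetical, and it assumes pandas, openpyxl, and SciPy are installed). The idea is simply to run the same two-sample t-test on the same data and confirm that the add-in’s t and p values match.

    import pandas as pd
    from scipy import stats

    # Hypothetical export of the worksheet data: one column per group.
    df = pd.read_excel("scores.xlsx")

    # Independent-samples t-test; compare t and p with the add-in's output.
    t, p = stats.ttest_ind(df["group_a"].dropna(), df["group_b"].dropna())
    print(f"t = {t:.4f}, p = {p:.4f}")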

As the Real Statistics website points out, all of the underlying statistical functions actually exist in Excel, so the resource pack is not strictly required for the supported analyses. What Real Statistics provides are the formulas and output formats, so you don’t have to work them out for yourself. The website itself is also an excellent resource for understanding the statistical concepts and procedures involved, and it’s worth reading even if you never use the add-in.

For me, Real Statistics turns Excel into a highly capable and efficient statistical analysis tool.

Colectica

A major barrier to using Excel for statistical analysis is the fact that it cannot import SPSS and SAS files, and many data files are provided in one of those formats. In addition, Excel lacks the data documentation features that are built into other data file formats. The data definitions included in SPSS’s .SAV data files are really helpful, since they provide variable and value labels, coding information, and so on. I recently received a .SAV file of data to analyze, and for various reasons needed to convert the data to Excel format. Although I had tried this before, I decided to google “how to import SPSS files into Excel” one more time, without expecting to find anything useful. The search results included a link to an Excel add-in called Colectica, which turned out to be a great find.

This little gem of an add-in provided the features I needed for both importing SPSS files and documenting variable information. Colectica’s main purpose is to document Excel data files: it stores the same kinds of data definitions that are part of SPSS data files directly in Excel .xlsx files, and it will even assist in building a coding scheme from coded data. Previously, I used Excel’s comment feature to store variable labels and coding information, but this was never entirely satisfactory or convenient, so Colectica’s data documentation features are worth having in their own right. The free version includes the data documentation features; the paid version adds the ability to import SPSS, SAS, and Stata data files directly into Excel, including both the data and the data definitions. A license for the import features costs $12 per year, or you can purchase a perpetual license for $49.
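If you have Python available, there’s also a scriptable route for this kind of conversion. Here’s a minimal sketch using the pyreadstat library (the file names are hypothetical, and it assumes pandas, openpyxl, and pyreadstat are installed); it reads a .SAV file along with its variable and value labels, then writes an Excel workbook with the data on one sheet and a simple codebook on another.

    import pandas as pd
    import pyreadstat

    # Read the .SAV file; pyreadstat returns the data plus a metadata
    # object carrying the variable labels and value-coding information.
    df, meta = pyreadstat.read_sav("survey.sav")

    # Build a simple codebook sheet from the metadata.
    codebook = pd.DataFrame({
        "variable": meta.column_names,
        "label": [lbl or "" for lbl in meta.column_labels],
        "values": [str(meta.variable_value_labels.get(v, "")) for v in meta.column_names],
    })

    # Write the data and the codebook to a single Excel workbook.
    with pd.ExcelWriter("survey.xlsx") as writer:
        df.to_excel(writer, sheet_name="data", index=False)
        codebook.to_excel(writer, sheet_name="codebook", index=False)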

Is Excel a credible alternative to SPSS?

I don’t know if a hard-core stats geek would think so, but since I started using the combination of Excel, Real Statistics, and Colectica, I haven’t yet found a need for anything else. So, I’m happy with this.

Other alternatives

Before settling on Excel with the Real Statistics and Colectica add-ins, I found several other helpful alternatives.

PSPP

The GNU PSPP application, developed as part of the GNU Project, is an SPSS work-alike that can read and write SPSS data files and whose syntax is similar to SPSS’s, at least for the features it supports. Donations are requested, but it is free to download and use. If you know how to use SPSS, you’ll pick up PSPP very quickly. It doesn’t have all of SPSS’s features, and the user interface is a little quirky at times, but the price is right, and unless you need advanced procedures, it’s an alternative worth considering.

You can import Excel files into PSPP, but, at least at present, you can’t export to an Excel file, which I find to be its most frustrating limitation (although, with Colectica, that’s less of a problem than it has been, and since PSPP can save data in .SAV format, a conversion route like the pyreadstat sketch above should bridge the gap). I haven’t done extensive testing, but I have compared the output of some SPSS and PSPP procedures and found that they matched. So, as far as I can tell, PSPP’s statistical output is accurate.

Websites

There are websites that offer almost every conceivable statistical test. Typically, you either download an app that runs locally on your computer, or upload your data to the website and run the procedure there. In the case of downloaded apps, I always check the files with antivirus software before opening or installing them. So far, I haven’t had any problems with viruses, trojans, and the like, but it pays to be security-conscious. Likewise, when uploading data to a statistics website, I always make certain that there’s nothing identifiable in the data: just lists of numbers, with no names, IDs, or other identifiers.
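For what it’s worth, here’s a minimal sketch of the kind of pre-upload check I mean (the file and column names are hypothetical, and it assumes pandas and openpyxl are installed). It simply drops any direct-identifier columns before the data leave my machine.

    import pandas as pd

    # Hypothetical data file containing a mix of identifiers and measures.
    df = pd.read_excel("responses.xlsx")

    # Drop direct identifiers before uploading anywhere.
    identifiers = ["name", "email", "student_id"]
    deidentified = df.drop(columns=[c for c in identifiers if c in df.columns])

    deidentified.to_csv("upload_ready.csv", index=False)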

So, I’ve found that there’s a universe of reasonably efficient and accurate statistical software alternatives to the high-priced commercial analytics packages, which is good to know. But, yeah, one of these days I’m still going to get around to learning R…

Thoughts about the 2018 AEA conference

American Evaluation Association conference in Cleveland, October 31 – November 3, 2018

My homebody tendencies were whispering that I should stay home from the AEA conference this year, as they often do when I’m confronting the reality of a trip. But in the end I went, and as usual, it was greatly worthwhile.

The conference theme was “speaking truth to power,” which seemed timely, and also prompted a lot of reflection. Each of the terms in this phrase needs to be unpacked; even the construction of the phrase requires a critical look.

My conference mission included looking for new ideas and approaches for collecting richly detailed data from program participants, and for using evaluation to support positive change. The conference offered many opportunities in this regard. It was well worth the investment!

Root cause analysis/fishbone analysis

One session, “Critiquing adult participation in education through root cause analysis,” given by Margaret Patterson, introduced me to root cause analysis, also called fishbone analysis. The project she described involved asking small groups of participants to articulate the barriers they experienced to pursuing adult education opportunities. The “fishbone” diagrams that resulted from these discussions provided insights into her participants’ perceptions. I think this technique can be a really powerful way to give people opportunities for significant input into decisions that affect them. I look forward to being able to experiment with this approach.

(As an aside, the presenter also mentioned that the participants had discussed ideas for addressing some of the identified issues. Some of these ideas sounded really creative, and they set me to thinking that many projects intended to address social problems are predicated on the idea that a social service organization or agency will identify needed services and then provide them. I wonder whether it would sometimes be better to support communities in identifying their own solutions and developing their own capacities, rather than applying someone else’s solutions.)

Unlearning

Another provocative framework was introduced as “unlearning,” in a workshop by Chera Reid, Anna Cruz, Maria Rosario Jackson, and a fourth presenter whose name was not on the program (and which, unfortunately, I didn’t write down). The workshop was called “Accepting the invitation to unlearn: Insights from seekers of systems change.” They encouraged us to think about and experience our work with new eyes. One of the presenters led us through a Feldenkrais exercise in which we were given the opportunity to connect our minds with our bodily sensations of breath and movement. I like the “unlearning” frame, in part because I agree with the notion that a lot of the things we “know” may not actually be true, or at least may not be all there is to know. I suspect I have a lot of unlearning to do…

Photovoice

Immediately following this session, a group of grad students from UNC-Greensboro presented a photovoice project in which they shared photographs that symbolized the conference theme.

Systems in Evaluation

My professional home in AEA is the Systems in Evaluation Topical Interest Group (SETIG). My master’s degree was in Antioch University Seattle’s Whole Systems Design program, which allowed me to delve into the application of systems concepts, often in organizational contexts. Although I didn’t study (or even know about) evaluation at that juncture, the WSD program provided me with fundamental concepts that I’ve employed in many contexts since. It was several years before I realized that there was a group of folks in AEA who shared my interests, and for the past few years, I’ve been fortunate to have found these people.

Over the past couple of years, a number of Systems TIG folks have been working to articulate a set of systems principles that may help us operationalize and apply systems concepts to evaluation. The group has identified four so far: system boundaries, perspectives, interconnection, and dynamism. A draft statement, now being revised, was circulated last summer (I shared it in a blog post at the time). At the conference, the TIG held a think tank session in which attendees considered specific applications of the principles; the next step will be another round of refinement of the principles statement. Beverly Parsons suggested two additional principles: holism (a system is not its parts) and power source (systems have external sources of power or animation).

Michael Quinn Patton has been talking about principles-focused evaluation lately, and the general concept of principles seems to be quite a useful frame for projects that are too varied or complex to be easily accommodated by logic models or theories of change. This seems well aligned with the effort to develop systems principles that can guide evaluation efforts.

What’s next?

I want/need to know more: there were several mentions of a technique called “outcomes harvesting” in a variety of sessions this year. I may have heard about it in previous years, but it didn’t penetrate my awareness; this year, it seemed to be everywhere. So, I’m going to be on the lookout for an introduction to it.

There’s lots more to talk about, and I could go on all night, but I won’t. A few quick items, though: I attended a wonderful closing session consisting of skits to help us reflect on evaluation dilemmas. (Presenters included my UNCG colleague Ayesha Boyce, along with four others, and audience participants!) AEA is a wonderfully diverse group in many ways, which I greatly appreciate. I find my preconceptions challenged and always come away from the conference with new ways to think about evaluation and organizational change, and with strengthened connections with colleagues and friends.

Evaluation of adaptive versus technical challenges

Harvard leadership theorist Ronald Heifetz and his colleagues have advanced the idea that organizations face two kinds of challenges: technical and adaptive. A technical challenge is a problem that requires some kind of organizational tweak, the application of a well-known technique, or the like. A business might decide to install a new machine on a production line, or a school might choose a textbook for sixth-grade English. These are technical problems requiring technical solutions, and they don’t demand a new way of doing business. (Here’s a video in which Heifetz discusses these two kinds of challenges.)

Adaptive challenges, however, require that people in the organization think and behave in new ways. A school might adopt a strategy in which teachers form professional learning communities, and work together to identify and implement their own approaches to helping their students. A business might determine that their employees need to create close partnerships with their customers in order to exploit the potential of emerging technologies. In other words, adaptive challenges require that organizations undergo fundamental change.

One reason that organizations are unsuccessful in addressing change is that they sometimes attempt to apply technical solutions to adaptive challenges. A school might adopt a new curriculum, or textbook, or testing system, when what is really needed is to support teachers to engage with their students in different ways. In such situations, a technical fix will not provide lasting, sustainable change.

This has implications for evaluators. A project designed to address an adaptive challenge should include an evaluation design that dives deeply into the organizational change processes necessary for the success of the effort. Measuring surface behaviors is unlikely to provide the insights needed to understand how the organization, and the people within it, change and grow to address the challenge. Evaluating adaptive change initiatives is likely to require a systems-aware approach to evaluation design, along with both qualitative and quantitative measures of internal processes, outcomes, and project impacts. Adaptive challenges often involve situations where solutions are not well defined at the outset, so an appropriate evaluation strategy must be able to accommodate solutions that emerge through trial and error.

Validation of research instruments

Lately I’ve been working with colleagues on a validation study of a classroom observation instrument they developed for use in undergraduate mathematics classes. While thinking about the basic question (“What is validity?”), I came across a seminal article on the subject by Samuel Messick (“Validity of psychological assessment,” American Psychologist, 50(9), 1995, 741-749). He wrote, “Validity is not a property of the test or assessment as such, but rather of the meaning of the test scores” (p. 741). In other words, validity is established for particular uses of instruments and interpretations of results. It’s inappropriate to say simply that a survey or test has been validated; rather, you have to specify how the instrument will be used and how the results will be interpreted.

A good example is a hypothetical validation study of a foot-long ruler. The relevant validation question might be: does this instrument accurately measure distances? But that question is not focused enough. Our validation study would find that the ruler is a valid instrument for measuring distances between, say, 1/16 inch and 12 inches, along a more-or-less straight line or flat surface, to an accuracy of perhaps 1/32 inch. So, within those parameters, the ruler is a “valid” instrument. Or rather, it will yield valid measurements within those constraints. On the other hand, it would not yield accurate or useful results if you wanted to measure distances of more than a few feet, or measure along a curved or irregular surface, or gauge, say, the thickness of a piece of wire. For those uses, it is not a valid instrument.

There is a tendency in the social sciences to use measurements for all kinds of purposes for which they are not valid. For example, standardized tests of student content knowledge may (or may not) be valid measures of the educational achievement of groups of students. Because of the technical qualities of the tests, those same scores may not be valid measures of individual students’ achievement, or of the competence of their teachers, or of the quality of their schools. Researchers and evaluators should not ask, “Is this a validated instrument?” Rather, the question is, “Is this instrument valid for my proposed purpose?”

New Systems Thinking in Evaluation document

Here is the result of a project of the American Evaluation Association’s Systems in Evaluation Topical Interest Group: Principles for Effective Use of Systems Thinking in Evaluation. The group worked together over several months after beginning the project at last November’s AEA conference in Washington, DC.

A systems approach to evaluation seeks to apply general principles of systems to understanding the organizations and projects we evaluate. As we have conceived them for this purpose, the facets of systems are the interrelationships among system components, the diverse perspectives of actors within systems, system boundaries, and system dynamics. By implementing evaluation strategies grounded in these facets, evaluators using systems approaches can provide valuable understanding of the complex interactions among project components.