Quantification is Not the Key to Academic Excellence, but THIS Is

I just finished giving an opening keynote on mission-minded approaches to assessment in schools. This was to an audience of educators and administrators in Christian schools, so my charge was to invite them to use a distinct (perhaps unique) lens for thinking about the role of assessment as it relates to their mission. I was the philosophical introduction to a two-day event that would otherwise be focused on practical application. My session seemed to go okay, but following a presentation like this, I usually find myself flooded with new thoughts, questions, analogies, and illustrations. This time was no different.

In this case, I find myself reflecting on the state of assessment and evaluation in many learning organizations, whether we are talking about assessment and academic performance or evaluation and planning on the organizational level. It brings me back to a study I conducted years ago of highly innovative schools. I concluded with a list of ten traits that were consistent among the leaders that I interviewed. My results were not intended to offer any truly generalizable set of traits that lead toward being a leader of an innovative school; I was content to provide a rich description of the sample that I studied, with the hope that there might be inspiration or some potentially transferable insights.

One trait consistent among those interviewed was that these leaders were “addicted to effectiveness data,” but I’m beginning to think that I need to adjust that wording. “Data” leads too many to think that I am talking about quantifiable data, but that was actually less common among many of the leaders in these schools. Most of the schools that I examined were charter schools with smaller enrollment numbers, and this is certainly an important factor, but the leaders of these schools were not necessarily as interested in data as they were in feedback on how they were doing. In other words, they were addicted to finding out how they were doing and how to make the school even better: a commitment to continuous improvement. This is an important distinction because leading a high-impact and innovative learning organization doesn’t require being a statistician or quantifying everything. It is more about being interested in how well you are doing, facing the “facts,” and doing something about them.

In fact, the leaders of the most innovative schools and learning organizations that I’ve examined over the years seemed just as inclined toward rich stories and narratives: in-depth feedback through conversations, observations, and qualitative survey questions. Similarly, if we look at amazing and inspirational educators around the world, we will find that many of them are not addicted to numeric benchmarks for students as much as they are interested in mentoring students, helping them to grasp and apply increasingly complex skills and concepts as they progress toward excellence. We see this in the classroom, among private tutors, with great athletic coaches, and among teachers and tutors in the performing arts. It is their deep sense of excellence, and of what progress toward excellence looks like, that empowers them to help others achieve great things. How true is this for leaders in our learning organizations as well?

As such, it doesn’t take the quantification of everything to make for a high-impact learning organization or community. It does usually take people (learners, teachers, sometimes both) who have a goal or vision, work toward that goal or vision, crave and use feedback, and adjust accordingly. Sometimes numbers can help with this, but they are rarely essential. In fact, insisting upon the superiority or necessity of quantitative measures is often more about embracing a certain positivistic philosophical stance on education than it is about excellence, growth, or achievement.


About Bernard Bull

Dr. Bernard Bull is an author, host of the MoonshotEdu Show, professor of education, AVP of Academics, and Chief Innovation Officer. Some of his books include Missional Moonshots: Insights and Inspiration for Educational Innovation, What Really Matters: Ten Critical Issues in Contemporary Education, The Pedagogy of Faith (editor), and Adventures in Self-Directed Learning. He is passionate about futures in education, educational innovation, alternative education, and nurturing agency and curiosity.

4 thoughts on “Quantification is Not the Key to Academic Excellence, but THIS Is”

  1. DrEvel1

    I hope that I didn’t leave the impression that I advocated an exclusively or even primarily quantitative approach to student assessment; quite the contrary. I’ve been fairly harshly critical of student testing and its cookie-cutter approach to educational assessment. I wouldn’t disagree with anything that you’ve said here about the need for assessment to reflect the true richness of the educational experience.

    My comment was simply a concern that we approach qualitative evaluation as carefully and as knowledgeably as we do quantitative analysis. Systematic training in qualitative research is not widely available, and I’ve found in my academic travels that far too many folks, even experienced researchers, tend to confuse qualitative analysis with simple story-telling and random assembly of instances. This in turn contributes to a widespread distrust of qualitative analysis and a common impression that it’s all just self-serving bloviation.

    My plea is that we train our assessors as well in the use of rich but rigorous qualitative methods as we do in statistics and quantitative analysis. Good qualitative feedback is critical to the development of educational programs. But unless our qualitative data and analyses are as methodologically sound as our numbers and statistics, they won’t be accorded the respect they ought to have. We owe it to ourselves, our students, and our programs to get beyond simple anecdotes and impressions, to the kind of substantial data that can really help us shape our evolving new educational approaches.

    • Bernard Bull Post author

      Thanks for the follow-up. That makes good sense to me. I agree with your comments, especially when we are talking about formal research. At the same time, I do leave room for more informal feedback and formative assessment that does not necessarily align with any formal research methodology. I’ve seen ample examples of learning communities that rely on more folk or informal feedback and assessment that leads to impressive outcomes. Of course, we also have a body of research that points to as much, especially as we look at research in psychology, sociology, and anthropology dealing with learning in contexts ranging from schools to families, religious communities to community education programs, and informal learning in the workplace to clinical settings, as well as mentoring programs.

  2. DrEvel1

    I would certainly agree that quantitative measures are not the only kinds of data that can meaningfully depict occurrences of academic excellence. But quantitative assessments have one significant advantage: when properly conducted, they should exhibit consistent meaning across time and thus allow for changes to be evaluated. That little phrase “properly conducted” of course covers a multitude of sins; we are all intimately familiar with the ways in which numbers can be manipulated and spun into only a vague semblance of the truth. But at least the potential exists, when the data are respected.

    Qualitative data such as those you describe here are inherently more unstable and context-dependent than are most quantitative data. There are well-understood standards for the collection and interpretation of qualitative data, and rigorous qualitative research is in fact significantly more difficult and costly than equally rigorous quantitative research. The problem is that when we think in terms of qualitative feedback, we seldom require the same standards of data collection and interpretation that we would require in a scientific research study. At least when you relax standards regarding quantitative data, it is pretty apparent that you have done so, and it’s possible for a good analyst to peer through the BS and worry out some effective results. With qualitative data, we seldom apply the same kinds of standards for documentation, triangulation, and replicability that are more or less built into good statistical analysis, for example. The result is frequently a conflation of what we hear about what’s going on and what we want to hear about what’s going on.

    The basic problem is that the kind of filtering and evaluation of data elements and conclusions that is more or less built into quantitative protocols is really seldom taught to even fairly sophisticated students of evaluation research, let alone practicing administrators beset on all sides by conflicting stories and rampant self-interest disguised as professional concern. All too often, even the most experienced administrator trying to evaluate largely qualitative feedback is likely to spin the data in a favorable direction in ways hard to detect. I would have a great deal more faith in qualitative feedback as a prescription for educational evaluation if in fact those who practice it were systematically trained in how to gather and interpret such data. Not all stories are created equal, and absent understanding the potential problems involved in their interpretation, it is likely that there will be systematic bias in the feedback received and processed by administrators.

    It’s not just a matter of philosophical orientation; it’s a matter of scientific methodology. Unless our methodology is at least reasonably sound – and that means conforming to standards considerably more rigorous than most people understand them to be – we simply won’t get helpful information.

    • Bernard Bull Post author

      Thank you for the comment. I would contend that a decision to conform all assessment in schools to quantifiable measures is indeed a philosophical orientation about education. As you know, there is a longstanding discourse about teaching as science and/or art. Much of this depends upon the mission/purpose/vision/values of a given school or learning organization. Consider the distinction between a school that believes student assessments should be largely based on standardized tests versus a Montessori school. This speaks to the mission and values of the schools. Assessments are values-laden technologies.

      My post/essay challenges the notion that a quantitative approach to assessment is essential to provide a robust and excellent academic experience. I do not see evidence that a literature class must be deeply rooted in quantitative assessments for it to be rich and impactful, for example. Even informal learning throughout one’s life is rich with very helpful feedback that is not based on any quantitative system. Consider language acquisition in the early years, the development of many social skills beyond school walls, how many musicians are trained, and even how future medical doctors or field biologists are trained. Consider all the truly formative (in contrast to summative, but also in the sense that it is formational) feedback that happens in mentor/mentee or master/apprentice teaching and learning relationships. All of this happens largely apart from formal quantitative measures.

      In fact, by striving to make every assessment quantifiable, we are likely to lose other elements. Consider G.K. Chesterton’s example of trying to quantify beauty in a brilliant piece of artwork. All the graphs, charts, and numeric representations will fail to accurately represent all that makes up that piece of art. I contend that we run into similar challenges in education.
