Are we assessing what is easy or what matters the most for students?

Are we assessing what really matters for the future success of our students? This is my continued reflection on Will Richardson’s 9 Elephants in the Class(Room) That Should “Unsettle” Us. If you have not done so already, I encourage you to check out his original article. It is definitely worth the time, and it would make for a great discussion starter among educators.

“We know that we’re not assessing many of the things that really matter for future success.”

I’ve written about this many times and am almost finished with a new book on the subject of assessment. I have seen it happen countless times: we assess what is easy to assess instead of what matters most to us. When a new assessment is proposed, people sometimes object, warning that we will soon prioritize what is easy to assess over what we value the most. Each time, the advocates assure us that they (and we collectively) will not let that happen. The assessment is added and, over the next year or two, the priorities and values shift, aligning with those items that are easy to assess. It happens all the time.

Or, there is the flip side. We argue that what matters most is not easily assessed. As such, we give up on assessment. We don’t measure much of anything. We probably still use grades or something similar, but we downplay it. Maybe there are required measures but again, we dismiss the numbers as not being about what matters to us the most.

The problem is that measurement matters. Or, more specifically, feedback matters. Feedback helps us learn and grow. When it is absent, our growth sometimes slows down or even comes to a halt. Simply documenting what is happening and measuring progress toward a goal can increase motivation for people. This is why I am not ready to give up on assessment. It is just that we want to commit ourselves to not going the easy route, not giving ourselves to that which is most easily measured, not letting the assessment tail wag the “what really matters in education” dog.

I’m convinced that this means embracing assessment as a tool that serves greater goals, and giving greater attention to formative feedback and assessment than to high-stakes assessments. Assessment is most valuable when we use it to determine our progress toward goals that are important to us.

Start with the values and goals. Then ask an important question: what is the absolute best evidence that someone is learning or growing in this area? Or, what is the best evidence that someone met this goal? Be completely unrealistic. Give your ideal answer even if you know that it is impossible. Once you have that answer, bring it a little closer to reality. What is the next best evidence? Keep doing this until you have something that will work, at least tentatively. Try it out, knowing that it is not perfect. Revisit it often. Critique it. Hold on to it, but not too tightly. Be open to new and better ways.

Data scientists might protest. This doesn’t give us rich and large data sets to analyze. It isn’t carefully analyzed for reliability and validity. It prevents us from generating valuable reports or looking at longitudinal data. It doesn’t let us compare across contexts as well. All of these are worthwhile critiques. Yet, we must respond with other questions. What is the purpose of assessment? Does assessment have inherent value, or does its value depend upon how well it serves some other goal or agenda? If the assessment data does not help us measure what matters most for students, what is the point?

The Death of Testing and the Rise of Learning Analytics

I know that it is sad news for some, but more than a few of us have assessed the situation, and the prognosis is not good for our friend (or perhaps the arch enemy to others of us), the test. We might be witnessing the death of testing. Tests are not going away tomorrow or even next year, but their value will fade over the upcoming years until, finally, tests are, once and for all, a thing of the past. At least that is one possible future.

Tests are largely a 20th century educational technology that had no small impact on learning organizations around the world, not to mention teachers and students. They’ve increased anxiety, kept people up all night (often with the assistance of caffeine), and consumed large chunks of people’s formative years.

They’ve also made people lots of money. There are the companies that help create and administer high-stakes tests. There are the companies that created those bubble tests and the machines that grade them. There are the test proctoring companies, along with the many others that have created high-tech ways to prevent and/or detect cheating on tests. There are the test preparation companies. There are even researchers who’ve done well as consultants, helping people to design robust, valid, and reliable tests. Testing is a multi-billion dollar industry.

Given this fact, why am I pointing to the death of the test? It is because of the explosion of big data, learning analytics, adaptive learning technology, developments around integrated assessments in games and simulations, and much more. These technologies are making and will continue to make it possible to constantly monitor learner progress. Assessment will be embedded in the learning experiences. When you know how a student is making progress and exactly where that student is in terms of reaching a given goal, why do you need a test at the end? The student doesn’t even need to know that it is happening, and the data can be incredibly rich, giving insights and details often not afforded by traditional tests.

Such embedded assessment is the exception today, but not for long. That is why many testing companies and services are moving quickly into the broader assessment space. They realize that their survival depends upon their capacity to integrate in seamless ways with content, learning activities and experiences, simulations, and learning environments. This is also why I have been urging educational publishing companies to start investing in feedback and assessment technologies. This is going to be critical for their long-term success.

At the same time, I’m not convinced that all testing will die. Some learning communities will continue to use tests even if they are technically unnecessary. Tests still play a cultural role in some learning contexts. My son is in martial arts, and the “testing day” is an important and valued benchmark in that community. Yes, there are plenty of other ways to assess, but the test is part of the experience in this community. The same is true in other learning contexts. Testing is not always used because it is the best way to measure learning. In these situations, testing will likely remain a valued part of the community. In some ways, however, this helps to make my point. Traditional testing is most certainly not the best or most effective means of measuring learning today. As the alternatives expand and the tools and resources for these alternatives become more readily available, tests will start the slow but certain journey to the educational technology cemetery, finding a plot alongside the slide rule and the overhead projector.

It is Time for some MOOC Assessment Makeovers

I’m convinced that it is time for some MOOC assessment makeovers. I’m a fan of Coursera, EdX, and the many others who are investing in creating open and high-quality online content and learning experiences. While the data may show that these providers continue to mainly serve already educated people, we live in an age where lifelong learning is more important than ever, and MOOCs are unquestionably enriching people’s lives and learning. They are not solving all of education’s problems or eradicating problems of access and opportunity, but it is unreasonable to think that they would, especially in the short term. For MOOCs and open courses and content to increase access and opportunity, we have much work to do to inspire, equip, and empower diverse individuals to take advantage of such resources. If you are not informed about the power and possibility of open learning as a tool for personal growth and development, you are not very likely to take advantage of these innovations.

With all this said, it is time to add greater design depth and sophistication to many of the existing MOOCs and open learning experiences. I suggest that we start with some MOOC assessment makeovers. In 2014, I hosted a MOOC on this subject called Learning Beyond Letter Grades, an opportunity to explore what is possible if we climb out of our century-old assessment ruts and re-imagine the role of assessment, especially formative assessment used for increasing student learning, student engagement, and even the ability to transfer what is learned to real-world circumstances. Then I taught a short course for Educause members on the same topic in 2015. And, in 2016, I am scheduled to host a series of webinars outlining these possibilities. My mission is simple but substantive. It is to help people discover or rediscover how an assessment makeover of a course or learning experience can produce delightful and positive results for both teacher/facilitator and learner.

This is not prohibitively complex, but it does require us to look beyond many of our lived experiences with assessment and to reconsider the assessment plans for our courses and programs. We must let go of the idea that “tough grading” is equal to academic rigor. We will benefit from moving our attention away from high-stakes quizzes and exams, and instead looking at formative and low-stakes feedback and assessment opportunities throughout courses. It means taking the time to learn about the distinctions between formative and summative assessment, understanding the limitations of common “grading” practices, weaning ourselves from treating grading and assessment as synonymous, and understanding that frequent and meaningful feedback is one of our greatest friends in the pursuit of quality and engaging learning experiences. As such, this calls for a deeper understanding of things like authentic assessment, portfolio assessment, narrative feedback, checklist and rubric designs (and their benefits and limitations), designing for self-feedback and peer-feedback, the benefits and limitations of standards-based and competency-based assessment models, and integrated assessment in educational games and simulations. Then it calls for exploring how you can blend many (even all) of these into a course or learning experience to create an extreme classroom assessment makeover that pops. This is design work that matters in education.

As I review various existing open courses, some of this assessment innovation is happening. There are promising experiments around peer assessment, for example. Yet, the dominant practice is still discussions and quizzes, multiple-choice exams, and checking off the viewing of a video or participation in a given activity. These courses still have value, especially for learners ready and able to add their own feedback systems on top of what the course provides. Yet, it would be huge progress for MOOC providers and participants if we invested more creativity and thought into robust assessment makeovers of these courses. Let’s get to work.

Quantification is Not the Key to Academic Excellence, but THIS Is

I just finished giving an opening keynote on mission-minded approaches to assessment in schools. This was to an audience of educators and administrators in Christian schools, so my charge was to invite them to use a distinct (perhaps unique) lens for thinking about the role of assessment as it relates to their mission. I was the philosophical introduction to a two-day event that was otherwise focused on practical application. My session seemed to go okay, but following a presentation like this, I usually find myself flooded with new thoughts, questions, analogies, and illustrations. This time was no different.

In this case, I find myself reflecting on the state of assessment and evaluation in many learning organizations, whether we are talking about assessment and academic performance or evaluation and planning at the organizational level. It brings me back to a study I conducted years ago of highly innovative schools. I concluded with a list of ten traits that were consistent among the leaders that I interviewed. My results were not intended to offer a truly generalizable set of traits that lead to being the leader of an innovative school. I was content to provide a rich description of the sample that I studied, with the hope that there might be inspiration or some potentially transferable insights.

One trait consistent among those interviewed was that these leaders were “addicted to effectiveness data,” but I’m beginning to think that I need to adjust that wording. “Data” leads too many to think that I am talking about quantifiable data, but that was actually less common among many of the leaders in these schools. Most of the schools that I examined were charter schools with smaller enrollments, and this is certainly an important factor, but the leaders of these schools were not necessarily as interested in data as they were in feedback on how they were doing. In other words, they were addicted to finding out how they were doing and how to make the school even better: a commitment to continuous improvement. This is an important distinction because leading a high-impact and innovative learning organization doesn’t require being a statistician or quantifying everything. It is more about being interested in how well you are doing, facing the “facts,” and doing something about them.

In fact, the leaders of the most innovative schools and learning organizations that I’ve examined over the years seemed just as inclined toward rich stories and narratives: in-depth feedback through conversations, observations, and qualitative survey questions. Similarly, if we look at amazing and inspirational educators around the world, we will find that many of them are not addicted to numeric benchmarks for students as much as they are interested in mentoring students, helping them to grasp and apply increasingly complex skills and/or concepts as they progress toward excellence. We see this in the classroom, among private tutors, with great athletic coaches, as well as with teachers and tutors of those in the performing arts. It is their deep sense of excellence and of what progress toward excellence looks like that empowers them to help others achieve great things. How true is this for leaders in our learning organizations as well?

As such, it doesn’t take the quantification of everything to make for a high-impact learning organization or community. It does usually take people (learners, teachers, sometimes both) who have a goal or vision, work toward that goal or vision, crave and use feedback, and adjust accordingly. Sometimes numbers can help with this, but they are rarely essential. In fact, insisting upon the superiority or necessity of quantitative measures is often more about embracing a certain positivistic philosophical stance on education than it is about excellence, growth, or achievement.