5 Strategies for a Balanced Approach to Big Data in Education

We are in the decade of big data. During this second decade of the 21st century, many are grappling with the challenges and opportunities of massive data and the emergence of tools to mine and analyze these data. Within education, this is not new. It started long before No Child Left Behind, with the 20th-century growth of the modern educational psychology and measurement movements. From that era we saw IQ and aptitude testing, standardized and multiple-choice tests, the Bell Curve, and countless efforts to quantify almost anything about students: achievement, retention, reading proficiency, performance by demographic group, and more. While some of these ideas have a much longer history (China used proficiency exams for civil service as early as 2200 B.C.), they certainly gained a new level of attention and importance over the last 150-175 years. Consider how things have changed, as explained by David McArthur in his 1983 report, Educational Testing and Measurement: A Brief History.

In the mid-1800s, Horace Mann launched the use of written exams in the United States, and promotion to the next grade came to depend on performance on those exams. Prior to that, advancement rested on oral exams and the personal recommendation of the teacher. Testing was not a central aspect of American education before this.

Already by the end of the 19th century, because of these tests and the negative impact some perceived in them, we saw the birth of a new concept: “teaching to the test.” In places like Chicago, there was even a ban on using tests for grade promotion, the argument being that the teacher’s recommendation was the better option. The concern was that we would lose much of the “magic” in teaching and learning environments if we took a reductionist approach focused simply on students performing well on tests. Nonetheless, even today there is an entire industry around test preparation, equipping people to perform as well as they can on tests ranging from the SAT to the GRE, LSAT, and MCAT.

At this point in history, with more teaching and learning happening partly or fully through technology-enhanced means, we have even more student data to track and analyze. Every action on a device can be captured and reviewed. Similarly, external agencies are requiring the tracking of data about students: data ranging from demographics to attendance, vaccination records, and academic progress.

The advocates for big data point to many affordances. We can identify people at risk before it is too late, sometimes even proactively. We can use data to drive improvements in one or more areas. We can use data to more quickly identify and address problems. We can use data sets to personalize learning, conduct research on best and promising practices, measure progress, and prevent students from slipping through the cracks (any number of cracks: social, academic, and beyond).

Critics bring plenty of concerns to the conversation as well. Large data sets might inform policy, and while those policies help many, there are always losers under some policies too. For example, perhaps predictive analytics allow learning organizations to identify who is likely to succeed in an upper-level math course. As such, they use this to track students onto pathways that are more likely to work out for them. That might exclude a student who is passionate about a STEM field and willing to work hard enough to overcome the risk factors that discouraged such a path. Then there are concerns about data privacy, misinterpretation of data, and losing sight of the people…the faces behind the numbers. Empathy and personal connection can easily be disregarded as important parts of informing policy. Numbers matter, but so do the people represented in those numbers. There is an important difference between knowing that 80% of a given population is performing below grade level in reading and knowing the stories, challenges, and lived experiences of the people in that 80%.
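To make the critics’ scenario concrete, here is a minimal, purely hypothetical sketch of the kind of threshold rule a predictive-analytics system might apply when tracking students onto pathways. The cutoff value, the function name, and the student scores are all invented for illustration; no real model or data set is implied.

```python
# Hypothetical sketch of a threshold-based placement rule.
# All names and numbers below are invented for illustration.

ADVANCED_MATH_CUTOFF = 0.7  # assumed probability-of-success threshold

def recommend_advanced_math(predicted_success: float) -> bool:
    """Return True only if the model's predicted probability clears the cutoff."""
    return predicted_success >= ADVANCED_MATH_CUTOFF

# Two hypothetical students with model-predicted success probabilities.
students = {
    "student_a": 0.85,  # placed on the advanced pathway
    "student_b": 0.55,  # passionate and hard-working, but screened out
}

placements = {name: recommend_advanced_math(p) for name, p in students.items()}
```

Notice that the rule sees only the predicted probability; a student’s passion or willingness to work is invisible to it, which is precisely how such a policy produces its “losers.”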

How do we pursue the benefits of big data while also avoiding some of the limitations or negative elements? There is no easy answer to such a question, but I offer the following five suggestions.

  1. Persistently challenge the assumption that quantitative data are more important. Get adept at arguing for the benefits of both qualitative and quantitative measures. There are plenty of stories and examples from which we can pull to make our point.
  2. Learn about the stories of big data success and invest just as much time in learning about big data disasters. Specific cases and examples can help inform practice. Push for much higher levels of big data fluency. If we are going to be increasingly data-driven, then we need people who have higher levels of quantitative fluency. Without that, we either relegate important thought and work to a new quantitative technocracy or we risk drawing flawed, even dangerous, conclusions by misreading the data. Anyone arguing for increased use of data must also be ready to put in the hard work of becoming more literate and fluent.
  3. Beware of the drive to value that which is easier to measure. This starts by persistently bringing the group back to mission, vision, values and goals. If we do not do this, it is easy enough for missions and goals to change just because some goals are more neatly and easily measured than others. Big data is not just about numbers. You can have big quantitative and qualitative data. Be a firm voice in starting with mission. We want to be mission-driven, data-informed, not the other way around.
  4. Consider an equal treatment approach to data usage. If teachers insist on using big data to analyze students, then shouldn’t big data be used to inform policies for teachers as well? What about the same for administrators and board members? While this will never be perfect, pushing for an equal treatment approach is likely to nurture empathy and more balanced consideration by decision-makers. For example, consider how many educators insist on the value of frequent tests, quizzes, and grading practices that they would vehemently oppose if the same practices were applied to them. Take this and apply it to state agencies, federal agencies, and politicians as well. Any politician committed to arguing for big data at the state or federal level in education should be just as open and welcoming to a careful, data-driven analysis of their own success, record, and behavior in office.
  5. Champion the highest possible ethical standards when it comes to data. Sometimes it is tempting to use data, even for noble purposes, but we have to pass on doing so for security reasons or to protect various parties. We must hold to the highest possible standard in this regard, even when personal loss is involved.

Big data in education will continue to have affordances and limitations, but these five strategies are at least a good start in promoting a more balanced approach.


About Bernard Bull

Dr. Bernard Bull is an author, host of the MoonshotEdu Show, professor of education, AVP of Academics, and Chief Innovation Officer. Some of his books include Missional Moonshots: Insights and Inspiration for Educational Innovation, What Really Matters: Ten Critical Issues in Contemporary Education, The Pedagogy of Faith (editor), and Adventures in Self-Directed Learning. He is passionate about futures in education, educational innovation, alternative education, and nurturing agency and curiosity.