The Promise, Peril, and Possibility of Data, Analytics, and AI in Higher Education (3 of 7): Student Learning

This is the third installment of a seven-part exploration of data, analytics, and AI in higher education. In the first article, I set the groundwork for the series. I introduced the framework from which I intended to write, identifying what I consider to be six distinct aspects of data and AI’s influence in higher education. It simply follows the course of a learner’s relationship with a learning community. As such, the prior article looked at the pre-enrollment relationship, considering how data and AI are changing and will continue to change the way in which learners find, select, and build a relationship with a learning community (whether it be a formal college/university, informal learning, open learning resources or communities, people and experiences that foster learning, or a combination of these). Now, in this third article, I focus my attention upon how data, analytics, and AI will continue to influence the actual learning experience.

I have no intention of providing an exhaustive exploration of the topic. Of course, that would require countless volumes. Instead, what I offer at this point are a couple of main observations about the challenges and opportunities that come when we venture into questions about how data, analytics, and/or AI might influence the learning experience.

The Adaptive Learning Revolution Will Really Be a Revolution

For those who still largely conceptualize the bones of the college learning experience as a collection of classes that culminate in the issuing of a credential, the world of data and analytics gives us even more opportunity to reconsider that construct. 

“Learning analytics” is a phrase that we often use in reference to all the data that we can collect and analyze to determine the extent to which students are learning, or the extent to which they exhibit behaviors that someone deems important for student success. Related to this, “adaptive learning” is a system that uses these data points to adjust the learning or learner experience with the hope of some improved outcome. On the K-12 level, we see rudimentary examples in software like ALEKS or DreamBox. The software includes a hierarchy of levels of mastery. It introduces learners to content, challenges, and lessons; and the learner interacts with this content. All along, the learner’s performance is being assessed, and the software is adjusting accordingly. Each person is taken on a slightly (or sometimes significantly) different learning journey based upon prior knowledge and performance while using the software.
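The adaptive loop described above can be sketched in a few lines of code: content sits in a mastery hierarchy, every response updates an estimate of the learner's mastery, and the next item is chosen accordingly. All of the names, skills, and thresholds below are illustrative assumptions of mine, not any vendor's actual algorithm.

```python
MASTERY_THRESHOLD = 0.8  # assumed cutoff for "mastered"

# Hypothetical skill hierarchy: each skill lists its prerequisites.
SKILLS = {
    "counting": [],
    "addition": ["counting"],
    "multiplication": ["addition"],
    "fractions": ["multiplication"],
}

def update_estimate(current, correct, rate=0.3):
    """Nudge the mastery estimate toward 1.0 on a correct answer,
    toward 0.0 on an incorrect one (simple exponential update)."""
    target = 1.0 if correct else 0.0
    return current + rate * (target - current)

def next_skill(estimates):
    """Pick the first unmastered skill whose prerequisites are mastered."""
    for skill, prereqs in SKILLS.items():
        if estimates.get(skill, 0.0) >= MASTERY_THRESHOLD:
            continue
        if all(estimates.get(p, 0.0) >= MASTERY_THRESHOLD for p in prereqs):
            return skill
    return None  # everything mastered

# A learner who has mastered counting but not addition keeps getting
# addition items until the estimate crosses the threshold.
estimates = {"counting": 0.9, "addition": 0.5}
for correct in [True, True, True]:
    skill = next_skill(estimates)
    estimates[skill] = update_estimate(estimates[skill], correct)
# after three correct answers, next_skill moves on to "multiplication"
```

Two learners answering differently would accumulate different estimates and be routed down different branches of the hierarchy, which is the "different learning journey" the software produces.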

Most of the examples that are prominent today are early experiments with adaptive learning, but the truly interesting stuff is still on its way. While some critique such software as too mechanistic or representative of a simplistic and behaviorist approach to learning and mastery, that will change. It is already beginning to change. I commend the ingenuity and creativity of early developers of adaptive learning software for math and language acquisition, for example, but most of the work at this point lacks depth. It has yet to dip into the incredible pool of research and insight that now exists about the nature of learning. The companies and learning communities that demonstrate the forethought and wisdom to ground their adaptive learning software in this larger body of knowledge may well end up being some of the most powerful influencers of learning in the 21st and 22nd centuries.

You might recall the buzz when Thomas Frey made the prediction that the largest Internet company of 2030 will be an online school. I’ll build on his idea to say that it will be an organization that taps into the combined power of AI and the best of learning science research. It will boast of performance increases and learning outcomes that make some dominant teaching practices look like prehistoric tools.

In this new and emerging learning context, we are not talking about a collection of classes that lead to a credential. We are looking at constantly monitored and documented ebbs and flows in learning. We are considering measurements of mastery with regard to discrete skills and knowledge, as well as the nuanced changes in learner motivation, mindset, traits, and disposition. As the research develops, we will also see far more complex calculations that look at patterns of thinking and behavior that are predictive of success or failure in various life, work, and real-world contexts. Not only that, but these new contexts will make massive strides in providing greater insight into knowledge transfer challenges: the extent to which someone’s learning in a classroom or on a computer adequately transfers to various real-world or novel contexts.

Similarly, measurement of learning will not just be about a moment in time. These emerging technologies will make it possible to monitor what is retained, lost, refined, or re-purposed over years or decades. As such, the monitoring of learning for someone like a surgeon will not stop when the MD is earned (if the construct of an MD persists into the 22nd century). Monitoring of learning and performance will continue and be tracked throughout one’s career.

There is much reason for caution when it comes to such measurements. They are often crude and disregard important nuances and factors. Measurements can become values-laden clubs that beat people and communities into submission. Over time, they become so accepted and commonplace that many treat you as a social deviant for questioning their propriety.

Education and learning are rarely just about outcomes. As I so often write, education is always values-laden, which is the source of most great debates, past and present, about the what, why, and how of education in different contexts. Nor is education just about the future. This AI future that I am describing is most likely inevitable, but we are still wise not to forget the significance of the hidden curriculum. We are wise to consider how these new technology-driven learning contexts shape how we think and our sense of humanity. We are wise to discuss and consider the values-laden nature of each new iteration. The most progressive transhumanists among us might dismiss these warnings as nostalgic hogwash. I do not. I hope that many who read this do not consider it hogwash either, especially those who will help bring about the AI in education revolution.

Instructor Resistance 

In such a future, holding on to sentimentality will not be a successful form of resistance. We must think deeply about what it means to live, teach, and learn in a world of big data and artificial intelligence. What do teachers do best? Or, what do we want to be provided by other humans instead of by technology, even if empirical data might suggest that the technology achieves the same or a better result?

Clayton Christensen’s theory of disruptive innovation suggests that innovations often take root by serving a population largely overlooked, under-served, or even disregarded by others; and they do so with what might initially be an inferior product. Over time, the product improves and gradually captures a larger market until it has the potential to disrupt or displace what came before. This is already occurring when it comes to data-enhanced instruction and learning analytics software. These tools are supplements to traditional teachers. They are also being used as replacements. As the outcomes of these substitutes improve, a growing number of people will choose to skip the traditional teacher and classroom. We can lament the loss. We can grieve for what this means about the deeply human and personal aspect of education. It will still happen.

Research from the psychology of attention is being repurposed to track facial expressions, heart rate, and more. These too will eventually be part of the data set used in adaptive learning software. Smart watches and health trackers are getting more sophisticated every year. All of these lead to data points that will allow for increasingly nuanced and sophisticated learning analytics data sets. Are there problems and concerns with all of this? Absolutely. Yet this is also an indication of what is possible with AI and learning analytics. These systems will eventually capture psychological nuances and social cues that a single teacher cannot possibly notice and take into account when teaching a group of 50 in a traditional classroom.

This doesn’t mean that people will become obsolete in the learning of the future. However, it does challenge us to reconsider roles. In a recent interview, Stephen Downes noted that many of these musings about AI are much further in the future than some suspect, and that the most promising present possibilities reside with open networks of learning: new technology-enhanced connections and communities among people. It is amid these connections and among these communities that we can see some of the greatest strides in the more immediate future. We can see support for his position in the rapid adoption of social media, open learning communities, and other such networking over the past couple of decades. Perhaps it is within these connections and communities that we can make greater sense of the roles best played by people and the extent to which AI and adaptive learning might be integrated. A blend of these two is what we already see emerging, and I see no evidence of that slowing. What happens in the long term is yet to be seen.

The Future of Learning

I have no doubt that AI and adaptive learning will bring about some of the more significant changes to how people learn over the upcoming decades. The algorithms will develop and evolve to draw upon increasingly complex data sets. Incredibly consequential errors will be made along the way. Tension will grow and persist regarding the role of AI alongside the role of human interaction in learning. This is the future of learning in the rest of the 21st century.

5 Strategies for a Balanced Approach to Big Data in Education

We are in the decade of big data. During this second decade of the 21st century, many are grappling with the challenges and opportunities of massive data sets and the emergence of tools to mine and analyze these data. Within education, this is not new. It started long before No Child Left Behind, with the 20th-century growth of the modern educational psychology and measurement movements. From that era we saw IQ and aptitude testing, standardized and multiple-choice tests, the bell curve, and countless efforts at quantifying almost anything about students: achievement, retention, reading proficiency, performance by demographic data, etc. While some of these ideas have a much longer history (China used proficiency exams for civil service as early as 2200 B.C.), they certainly gained a new level of attention and importance over the last 150-175 years. Consider how things have changed, as explained by David McArthur in his 1983 report, Educational Testing and Measurement: A Brief History.

In the mid-1800s, Horace Mann launched the use of written exams in the United States, and promotion to the next grade came to be based on performance on these exams. Prior to that, promotion rested on oral exams and the personal recommendation of the teacher. Testing was not a central aspect of American education before this.

Already by the end of the 19th century, because of these tests and their perceived negative impact, we saw the birth of a new concept: “teaching to the test.” In places like Chicago, there was even a ban on using tests for grade promotion, on the argument that the teacher’s recommendation was the better option. The concern was that we would lose much of the “magic” in teaching and learning environments if we took a reductionist approach focused on students performing well on the tests. Nonetheless, even today there is an entire industry around test preparation, equipping people to perform as well as they can on tests ranging from the SAT to the GRE, LSAT, and MCAT.

At this point in history, with more teaching and learning happening partly or fully through technology-enhanced means, we have even more student data to track and analyze. Every action on a device can be captured and reviewed. Similarly, external agencies are requiring the tracking of data about students: data ranging from demographics to attendance, vaccination records, and academic progress.

The advocates for big data point to many affordances. We can identify people at risk before it is too late, sometimes even proactively. We can use data to drive improvements in one or more areas. We can use data to more quickly identify and address problems. We can use data sets to personalize learning, conduct research on best and promising practices, measure progress, and prevent students from slipping through the cracks (any number of cracks: social, academic…).

Critics bring plenty of concerns to the conversation as well. Large data sets might inform policy, but while those policies help many, there are always losers with some policies as well. For example, perhaps predictive analytics allow learning organizations to predict who is likely to succeed in an upper-level math course. As such, they use this to track students onto pathways that are more likely to work out for them. That might exclude a student who is passionate about a STEM field and is willing to work hard enough to overcome the risks and alerts that discouraged such a path. Then there are concerns about data privacy, misinterpretation of data, and losing sight of the people…the faces behind the numbers. Empathy and personal connection can easily be disregarded as important parts of informing policy. Numbers matter, but so do the people represented in those numbers. There is an important difference between knowing that 80% of a given population is performing below grade level in reading and knowing the stories, challenges, and lived experiences of the people in that 80%.
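The pathway-tracking concern above is easy to see in miniature. The toy predictor below is entirely my own illustration, with made-up weights, fields, and a made-up threshold; the point is only that whatever the model cannot see (such as motivation) cannot influence the recommendation.

```python
def predicted_success(gpa, prior_math_score):
    """Toy predictor: a weighted average of two inputs, scaled to 0..1.
    (Hypothetical weights for illustration, not a real model.)"""
    return 0.5 * (gpa / 4.0) + 0.5 * (prior_math_score / 100.0)

def recommend_pathway(student, threshold=0.7):
    """Route a student based solely on the predicted score."""
    score = predicted_success(student["gpa"], student["prior_math_score"])
    return "advanced math track" if score >= threshold else "alternative track"

# A passionate, hard-working student the model cannot see:
student = {"gpa": 2.8, "prior_math_score": 62, "motivation": "very high"}
track = recommend_pathway(student)
# "alternative track" -- the "motivation" field never enters the calculation,
# so the student is steered away regardless of willingness to work.
```

A real system would be far more sophisticated, but the structural issue is the same: the policy is only as inclusive as the features in the data set.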

How do we pursue the benefits of big data while also avoiding some of the limitations or negative elements? There is no easy answer to such a question, but I offer the following five suggestions.

  1. Persistently challenge the assumption that quantitative data are more important. Get adept at arguing for the benefits of qualitative as well as quantitative measures. There are plenty of stories and examples from which we can pull to make our point.
  2. Learn about the stories of big data success and invest just as much time in learning about big data disasters. Specific cases and examples can help inform practice. Push for much higher levels of big data fluency. If we are going to be increasingly data-driven, then we need people who have higher levels of quantitative fluency. Without that, we either relegate important thought and work to a new quantitative technocracy or we risk drawing flawed, even dangerous, conclusions by misreading the data. Anyone arguing for increased use of data must also be ready to put in the hard work of becoming more literate and fluent.
  3. Beware of the drive to value that which is easier to measure. This starts by persistently bringing the group back to mission, vision, values and goals. If we do not do this, it is easy enough for missions and goals to change just because some goals are more neatly and easily measured than others. Big data is not just about numbers. You can have big quantitative and qualitative data. Be a firm voice in starting with mission. We want to be mission-driven, data-informed, not the other way around.
  4. Consider an equal treatment approach to data usage. If teachers insist on using big data to analyze students, then shouldn’t big data be used to inform policies for teachers as well? What about the same for administrators and board members? While this will never be perfect, pushing for an equal treatment approach is likely to nurture empathy and more balanced consideration by decision-makers. For example, consider how many educators insist on the value of frequent tests, quizzes, and grading practices that they would vehemently oppose if the same practices were applied to them. Take this and apply it to state agencies, federal agencies, and politicians as well. Any politician committed to arguing for big data on the state or federal level in education should be just as open and welcoming to a careful data-driven analysis of their own success, record, and behavior in office.
  5. Champion the highest possible ethical standards when it comes to data. Sometimes it is tempting to use data, even for noble purposes, when we should decline in order to protect security or the various parties involved. We must hold to the highest possible standard in this regard, even when personal loss is involved.

Big data in education will continue to have affordances and limitations, but these five strategies are at least a good start in promoting a more balanced approach.