College is a great place to start a business. Think about it. It is a vibrant community of diverse people who want to change the world. They gather in places called classrooms to talk about amazing ideas, explore some of the world's biggest problems, experiment, and develop some of the mental tools needed to address needs in the world. Add to that a group of scholars who are there to mentor and guide you, great spaces to network and collaborate, and countless experiences to spark your creative side.

While many people go to college with the goal of getting a job upon graduation, that is starting to change. Now some go to college to create a job, and it doesn't have to wait until graduation. In fact, I expect to see this trend expand. I see a possible future of higher education where college isn't just the place you go to get an education, but also a place where you go to create, innovate, launch a business, or start a social movement. While this already happens for some, a growing number of universities have caught such a vision. We see student groups, university/business partnerships, college-initiated efforts, and even some cross-university partnerships. Below are examples of such initiatives. You will see examples from regional state schools, flagship state universities, elite universities, and even liberal arts colleges.

In this recent Washington Post article, the journalist tells the story of Clay Shirky’s decision to ban devices from his NYU class. Shirky explains that he previously left the choice up to students. He considered it a challenge to be more interesting than the devices, and thought it appropriate to leave the responsibility of managing the devices to the students. The following quote introduces part of his reason for the change.

Despite these rationales, the practical effects of my decision to allow technology use in class grew worse over time. The level of distraction in my classes seemed to grow, even though it was the same professor and largely the same set of topics, taught to a group of students selected using roughly the same criteria every year. The change seemed to correlate more with the rising ubiquity and utility of the devices themselves, rather than any change in me, the students, or the rest of the classroom encounter.

The article continues by pointing to research about the negative effects of multi-tasking and Shirky’s conclusion that there was more at play than a simple student choice. Instead, he argued that the presence of the devices forced students to struggle between paying attention in class and an “involuntary and emotional reaction.” He goes on to explain that he sees “teaching as a shared struggle…working together to help defend their precious focus against outside distractions.” 

Reading the article, there is plenty with which I can wholeheartedly agree. The growing research about multi-tasking, for example, is increasingly convincing. Multi-tasking has many downsides, including diminished attention and focus, which could certainly hamper learning as students grapple with new and complex ideas. More broadly, I welcome the wonderfully critical and reflective thinking about the nature and impact of technology in our lives. This is an important part of learning to thrive and survive in an increasingly technological age. Without taking time to consider the affordances and limitations, we easily succumb to mindless acceptance.

Yet, these very areas of agreement also lead me to think about alternatives to Shirky's decision. If he held to an educational philosophy like educational essentialism, I would get it. He is the parent/teacher, and his students/children need him to set most of the rules, given their underdeveloped frontal lobes. Daddy knows best. Yet, in the article, Shirky points out that he does not hold to such a parent/child philosophy, but instead sees teaching as more of a shared experience, a "shared struggle," an adventure (at least in part) in co-learning. Given such a philosophy, I wonder about alternative solutions. Consider the following five alternatives:

1. Provide students with access to some of the readings that led to his discovery/epiphany. Set aside some time in class for a robust discussion and exploration of the topic together. It may be that he did this, but it went beyond the scope of the article.

2. Set up a media journal assignment for students, where they log their use of the devices during class. This would invite students to become more conscious about how the technology is impacting their experience in the class, also potentially leading to self-discoveries and robust co-learning. It would also provide opportunity to compare the different methods and strategies employed by students to leverage the devices in helpful ways.

3. Establish tech-free days alongside days that are open. Or, based upon the planned activities for a given class, have students collectively vote on the "class rules," learning to think through the issues together and make informed decisions.

4. Turn this into a class experiment with four groups. Group 1 uses devices when and however they want. Group 2 uses devices, but specifically focused upon learning strategies and supporting apps (note-taking, etc.). Group 3 uses devices, but they are charged to focus upon identifying strategies that best help them learn and focus, sharing their findings with others in the group (similar to group 2 but less prescribed). Group 4 refrains from any use of the devices in class. See if you can't discover some patterns. Use the different group experiments to help students think through the issues more fully, and to discover the impact of different practices upon their learning.

5. Have students individually or collectively find existing research about the impact of devices in the classroom, using it to make informed decisions about their own usage.

Each of these five seems to align with Shirky's stated philosophy of teaching and learning as a "shared struggle," and they do so not by banning the use of devices, but by inviting students into the exploration and decision-making process. After all, these students will not have a parent to set these rules throughout their lives. Why not use this as a chance to help them learn to self-regulate? I understand that one argument against these might be that such exercises will detract from the main content and purpose of the class, but I contend that it can be done without any negative effect on overall student learning. In fact, by adding these elements of thinking about thinking and learning about learning, he is likely helping them think even more deeply about the course content.

Shirky is a brilliant thinker about new media. I value his work and learned a great deal from two of his books (Cognitive Surplus and Here Comes Everybody: The Power of Organizing Without Organizations). In fact, it would seem to me that his work around collaboration in the digital age might provide alternate insights into how to address the identified problem of attention-distracting practices in the classroom, insights like one or more of the ideas listed above. However, I do not write this as a criticism of the decision in his class. I don't have all the facts, nor do I fully understand the context in which he made this decision. Nonetheless, the article provides me with an opportunity to reflect on the broader conversation about the role of devices in the classroom, considering some of the options available to educators. In doing so, I am compelled to frame the discussion around a question that is larger than how to get students more fully attentive in a single class. Instead, I am asking, "How can we best help prepare young people to thrive as discerning consumers of devices in a digital age?"

According to Louis Soares in Post-Traditional Learners and the Transformation of Post-Secondary Education, only 15% of those pursuing a college degree today are seeking a traditional residential college experience. The other 85% are what some refer to as post-traditional. They are working people who want a post-secondary credential but do not yet have one. As Soares points out in the article, this is also the population of college students that is less likely to complete. Life challenges, family circumstances, the demands of work, and other factors combine to create barriers to achieving a college degree for some people. It is this 85% that most benefits from the many higher education innovations of the past fifty years: night school, one-day-a-week programs, weekend cohorts, blended learning programs, low-residency programs, competency-based programs, as well as the many online programs available today.

It is not that this population is unwilling to work hard. It is that the traditional full-time college structure does not align with their needs, nor is it flexible enough to allow them to meet the other significant demands in their lives. However, the "non-traditional programs" make college a possibility: allowing students to work at their own pace, to study and learn during evenings and weekends, or to do the bulk of their work after 10:00 PM each night (once all the kids are in bed).

Even with such flexibility, there is a greater chance (than with many traditional undergraduates) that these post-traditional students will not persist and graduate. That is why many top programs targeted at this population invest extra time and effort in retention plans: adding teams of success coaches, building advanced learning analytic tools that trigger automated alerts to advisors and faculty when a student is “at-risk” of dropping out (as indicated by the student having one or more at-risk behaviors: not logging into the online course often, missing due dates for assignments, not clicking on or viewing important documents or sections of the course, getting one or more failing grades, etc.).
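To make that concrete, here is a minimal sketch of what such a rule-based at-risk alert might look like. The rules, thresholds, and field names are my own illustrative assumptions, not a description of any particular vendor's analytics product.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class StudentActivity:
    """Hypothetical activity record pulled from an LMS."""
    last_login: date
    missed_deadlines: int
    failing_grades: int
    key_documents_viewed: int
    key_documents_total: int

def at_risk_flags(activity: StudentActivity, today: date) -> list[str]:
    """Return the at-risk indicators a student currently triggers.

    Thresholds are illustrative; a real system would tune them against
    historical retention data.
    """
    flags = []
    if (today - activity.last_login) > timedelta(days=7):
        flags.append("no recent logins")
    if activity.missed_deadlines >= 2:
        flags.append("missed assignment deadlines")
    if activity.key_documents_viewed < activity.key_documents_total:
        flags.append("important course materials unopened")
    if activity.failing_grades >= 1:
        flags.append("one or more failing grades")
    return flags

# Any triggered flag could prompt an automated alert to an advisor or coach.
record = StudentActivity(last_login=date(2015, 3, 1), missed_deadlines=2,
                         failing_grades=0, key_documents_viewed=3,
                         key_documents_total=5)
if flags := at_risk_flags(record, today=date(2015, 3, 15)):
    print("Alert advisor:", ", ".join(flags))
```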

Along with this, the design of programs, courses, and learning experiences plays an important role. These learners often have rich and diverse life experiences that they bring to their classes, they seek knowledge and skill that they can readily apply to life and work, and they benefit from the confidence-building and input that comes from frequent feedback on their work. Of course, this is not specific to post-traditional learners. Most of us value and benefit from such features.

What else might be distinct about educating this post-traditional population? If they are already in the workforce, whether full-time, part-time, or underemployed, they are in a place to leverage their new knowledge and skill right away. They might use it to solve problems on the job, improve the quality of their work, gain the knowledge and skill necessary for a promotion, or get what it takes to be eligible for a similar or altogether different position (if they can show what they've learned). This is where I see a potential affordance in the use of competency-based digital badges, progressive credentials, within degree programs for post-traditional learners. As a learner demonstrates new knowledge or a new skill, a micro-credential is issued to the student. The student has not even finished an entire course or program, yet has earned a credential, one that can be pushed to a backpack and displayed online. In other words, learners can benefit from progressive credentialing (a concept referenced in the Soares article mentioned at the beginning of this post). These micro-credentials can build up to progressively larger credentials (unit level, course level, and finally program level). Courses and programs are already divided in such ways at times, but adding credentialing to each new knowledge or skill acquisition provides a visible sign of progress, adds new and identifiable credentials to a learner's resume, and potentially helps the learner recognize that coursework is not just about getting passing grades or jumping through hoops. It is about developing real, valuable, and documented competencies.
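To illustrate the roll-up idea, here is a small sketch of a progressive-credential hierarchy. The structure and names are my own illustration of the concept, not an existing standard or platform.

```python
from dataclasses import dataclass, field

@dataclass
class Credential:
    """A node in a hypothetical progressive-credential hierarchy."""
    name: str
    level: str                               # "micro", "unit", "course", "program"
    children: list["Credential"] = field(default_factory=list)
    earned: bool = False                     # set directly for micro-credentials

    def is_earned(self) -> bool:
        """A micro-credential is earned directly; a higher-level credential
        is earned once every credential beneath it has been earned."""
        if not self.children:
            return self.earned
        return all(child.is_earned() for child in self.children)

# Two demonstrated skills roll up toward a unit-level credential.
citation = Credential("APA citation", "micro", earned=True)
searching = Credential("Literature search", "micro", earned=True)
unit = Credential("Research writing basics", "unit", children=[citation, searching])
print(unit.is_earned())  # True: visible progress even mid-program
```

The point of the sketch is simply that a learner who stops partway still holds every credential already earned at the lower levels.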

The goal is typically for each student to progress through an entire program and walk away with both new knowledge and a valued credential called a diploma. However, with the use of progressive micro-credentials, even when life's challenges lead one to set aside the goal of earning a degree, the learner does not walk away entirely empty-handed. That learner still has micro-credentials to show for the time and effort, not to mention the actual knowledge and skill that can be demonstrated right away. Such an approach adds a new dimension to the questions about the value of a partially completed degree, which many students may see as having nothing to show for their efforts.

This does not mean that all employers will trust or assign value to these micro-credentials. That is a larger issue of trust networks. However, this approach has potential benefits even in the absence of such trust networks or perceived value of badges by employers. By structuring programs around such competency-based badges, we are also designing the learning experience in a way that makes it easier for learners to recognize and represent the discrete knowledge and skill acquired along the way. Even without displaying badges, such knowledge and skill can be explicitly listed on a resume. This approach might also make it easier for learners to verbally communicate what has been learned and what evidence they have of that learning. From this perspective, progressive competency-based credentials give learners the vocabulary to represent themselves well to current or prospective employers.

This is not a claim that progressive competencies will solve the issue of dropping out of college, but such an approach does seem to provide some help, especially given the distinct needs and life circumstances of the post-traditional learner. What do you think?

While I was serving on a series of panel discussions about micro-credentials for a number of Australian universities, Sheryl Grant, Director of Social Networking for HASTAC and author of the recently published What Counts as Learning, brought up the topic of trust networks several times. In her text, Grant makes frequent reference to the importance of building a trust network as part of a badge design (pp. 8, 10, 17, 18, 29). The panel discussions, Grant's comments in the text, as well as other excellent resources like Carla Casilli's essay on Mozilla Open Badges: Building Trust Networks, Creating Value, prompted me to spend more thought and time on the subject. As a result, following is the first of what is likely to be a series of posts about trust and credentials.

A friend recently told me about her son coming home with a school progress report full of A's…and then one F in math. The parent was horrified. "What did you do wrong?" It turns out that the child did nothing. It was an error in the gradebook software. Another friend was listening and quickly shared a similar experience. Why do people have such reactions? It is because they want their children to succeed in school, and the letters on the report card signify that they may not be doing well. Of course, a traditional report card or progress report with nothing more than letter grades does not tell us much. Yet, parents generally accept that an F is bad and an A is excellent. What they don't realize is that there is no standard meaning for an A or F across schools in the United States, and that there are dozens of factors that might shape the grade of a student (participation, timeliness of submissions, performance on quizzes, etc.). Quite often, the criteria for earning an A, B, or C are built in such a way that the letter grade is not necessarily a straightforward sign of how the student is doing in math, science, or English. It also stands for how well the student is complying with the specific rules, expectations, and standards of a given teacher. As such, the letter has a largely shared meaning in public, while the actual meaning can be quite varied.

None of this matters to most parents (or students and teachers, for that matter). The letter grade is a trusted symbol. Family members from around the country may gather and talk about the grades of their kids in school. It usually doesn’t matter that an A in one school and class does not mean the same as an A at another school and class. An A is an A. This is because people generally trust and accept the system. They also trust and accept the value of documents like progress reports and report cards.

This trust system builds from there. Progress reports build up to report cards. Report card data is transferred to official school transcripts. Transcripts are reviewed to issue diplomas. Diplomas at one level of schooling very often become prerequisites for entry into the next level of schooling. Finally, one or more of these diplomas become required credentials for entry into the workforce. There are jobs that only accept applicants with a high school diploma or higher, a bachelor’s or higher, etc. People trust that these credentials verify some level of knowledge and/or skill that is desired for a specific job. Does everyone with a high school diploma have a similar knowledge or skill set? Regardless of the answer, most of society accepts it as having value. It is a trusted credential, and it serves as a way to narrow down the applicant pool with little thought or effort from the employer. It is not, however, a guarantee that one will get or keep the job. The diploma gets them in the door to the interview, but at some point, they must demonstrate an ability to do the job at a standard that is satisfying to the employer. This illustrates the trust network built around common credentials like high school and college diplomas.

No Universal Trust Network around Diplomas

This trust is not uniform, even amid the generally strong trust network in the United States around high school diplomas and college degrees. There are jobs that one is unlikely to get without a credential from a certain caliber of college. Unless one has a diploma from an elite higher education institution, some employers will rarely give an application serious consideration, regardless of one's actual performance at a lesser-known school. Lauren Rivera's research on Ivies, Extracurriculars, and Exclusion: Elite Employers' Use of Educational Credentials indicates as much. Similarly, religious organizations sometimes give precedence to graduates of schools with a similar religious affiliation, noting that a credential from such an establishment is a sign of potential mission fit and alignment with the institution's core values. While this is changing, a 2006 article in the New York Times referenced several surveys indicating that some employers attribute more value (and trust) to diplomas earned from face-to-face schools compared to online ones. In other words, there are multiple trust networks around diplomas, each with different standards.

Trust Networks Around Credentials in Professions

We also have some professions where entry requires both a specific degree from a school within the trust network and a license or some other credential. The health care industry is a prime example. Medical doctors, dentists, occupational and physical therapists, and other similar professionals require not only a diploma from a specially accredited program, but often also an exam and/or other application process to become licensed to practice. And while this varies from one medical profession to another, there are requirements to keep up one's license. In other words, unlike a college diploma, there is a renewal process for maintaining the license or similar credential. These credentials have expiration dates, and without renewal one cannot practice, regardless of the letters behind one's name; the license is the ultimate credential necessary to practice in many health care professions.

Healthcare is a useful example of credentials and trust networks because of the high regard placed upon the credentials from multiple stakeholders. Doctors and other medical professionals value them and routinely display their multiple credentials and endorsements on their office walls. Patients and other office employees reverently refer to those professionals with terminal degrees as doctor. And these credentials hold high status in almost all of society. In other words, there is a rather strong and expansive trust network around the dual credential of a medical degree and a medical license (which has somewhat varying requirements by state).

Continuing Education and Professional Licensure

There are extensive requirements for earning the initial credential in health care professions. Yet, to maintain the license, the standards are far more modest (as an example, see this list of license renewal requirements for jobs in the state of Wisconsin). In fact, most that I reviewed use the old continuing education unit as part of the requirement. As I reviewed these continuing education requirements, I learned that many states renew a license upon receipt of a fee and some evidence of completed continuing education units. What is interesting is that the units are not usually earned by demonstrating the maintenance of one's knowledge and skill, or by demonstrating the acquisition of new knowledge and skill. Instead, many (but not all) of them are earned and documented by the number of hours assigned to a continuing education activity that is approved by one or more entities with the power to certify CE provider training. Depending upon the medical profession, one might get CEs for anything from self-verifying completion of a learning activity, attending a conference or sitting through a training event, attending webinars, or going through an online or face-to-face training and completing a required quiz or assessment. Regardless, in all the examples that I've seen so far, the level of rigor related to renewing a credential in many of these fields is minimal, the authentication and verification processes have limited security checks, and there is a significant trust factor built into the renewal process. This matters very little because there is such a strong trust network built around the initial credentials, so there seems to be little pressure (although I am not fully informed about the trends and developments in health care continuing education) to raise the standards for credential renewal in a way that more rigorously ensures ongoing competence.

Competency-based Micro-Credentials and Digital Badges

Contrast the examples above with the emerging development of micro-credentials and digital badges. As I’ve illustrated elsewhere, leveraging competency-based micro-credentials provides a means of verifying initial or ongoing competence with detail. When it comes to high expectations for competence in a given domain or profession, a competency-based approach that leverages more granular credentials hardly requires a defense, not when compared to credential renewal processes that are often self-reported or measured by clock hours instead of evidence of learning. In addition, as the security and verification processes continue to be enhanced, competency-based badges serve as a robust way to verify continuing education while bypassing less reliable approaches.

However, there is a significant limitation. Despite these seeming advantages of leveraging micro-credentials and digital badges, they have yet to develop widespread trust networks. Where diplomas have significant trust networks even in instances where trust may not be warranted, these emerging credentials have very little trust. As such, each new badge provider must build a trust network for the badge to have any perceived value. Given this present reality, the most likely way in which micro-credentials will gain increased acceptance as valued competency-based credentials is through four primary means: profession-specific trust networks, trust networks that rely upon the brand and credibility of a specific badge provider, trust networks that rely upon the certification of certain badge issuers, and/or trust networks that rely upon the shared credibility of a badge issuer and one or more employers.

Profession-Specific Trust Networks

In health care professions in particular, there could indeed be rapid and widespread trust networks built around competency-based badges for continuing education. They are unlikely to replace the existing initial credentials, but especially in health care professions whose communities are tied to one main professional organization, there is potential for these credentials to gain acceptance in a reasonable amount of time. With that said, it is problematic that licensure for many such professions is handled at the state level in the United States, with each state having different standards. In such instances, a national effort would be necessary, one that manages to gain the adoption and support of at least an initial collection of states. Another option would be to promote adoption in a country that maintains licensure through a centralized or national entity. This is no small cultural shift within a profession, but there are strong arguments for what such a model could do for:

  • increasing public trust in professions where trust is wavering or mixed,
  • helping professions catch up with current best practices in professional development,
  • streamlining the verification of continuing education units,
  • improving patient outcomes through verification of currency in the scientific literacy of a profession, and
  • providing credentials that could serve as marketing tools and differentiators for health-care professionals.

Trust Networks That Rely Upon the Brand and Credibility of a Badge Provider

Another option for establishing more expansive trust networks around these emerging competency-based micro-credentials is through a respected and trusted organization serving as a central provider of competency-based badges. This appears to be the plan of Digital Promise, with its implementation of competency-based badges for teacher professional development. Of course, if such a trust network develops, there is concern that it would come to the detriment of other professional development providers in the discipline (including universities), moving toward monopolistic tendencies. Only time will tell whether such concerns become a reality. However, it seems relevant that the presence of previous credentials did not lead to such a monopoly. Yet, one or a few well-respected providers of education through competency-based badges could indeed help expand public, profession-specific comfort and trust around such credentials. This could happen, for example, in the field of education around popular educator development programs from Apple, Google, or Discovery Education. In essence, the trust and respect of the organization would be transferred to the new credential.

Trust Networks That Rely Upon Certification of Badge Issuers

In some ways, this option is a derivation of the previous one. Instead of the trust network being established around the brand of the badge-provider, it would be possible for it to be built upon the trust of central authorizers of badge providers. This might be a state or national government agency, a professional organization, or even a well-respected central corporate partner within a given domain or profession. This allows for more diversified training providers, but leverages the respect of one of these existing entities to communicate that the credential is valuable and trustworthy.

Trust Networks that Rely Upon the Shared Credibility of a Badge Issuer and One or More Employers

This is the model being employed by the partnership between Udacity and Salesforce around nano-degrees. One gets the project- and competency-based training through Udacity, but the program was built in close partnership with a specific corporate partner (or several), with the explicit goal of preparing people for potential jobs with that employer or similar employers. This is among the fastest ways to build a trust network around an alternative credential, but there are still questions about the transferability of that credential beyond the single or few corporate partners. So, while it may be among the fastest to build, the extent to which the trust network around the credential can expand remains uncertain. Yet, if the specific corporate partner has adequate respect in an industry, perhaps the trust and credential could be more easily transferable than one might initially expect.

Concluding Thoughts and What About The Criteria?

What about criteria? If you've followed my work around badges, you know I've often argued that the trust and credibility of a badge can be built directly into the metadata: a person can look at a micro-credential and quickly discover who issued it, what criteria needed to be met to earn the credential, and possibly even see the evidence/artifact/work provided to earn the badge. Isn't that enough to build trust? While that is my ideal, I'm increasingly convinced that it is not a likely reality, not in the realm of competency-based digital badges. For better or worse, credentials are used as shorthand for competence. We live in a world of brands and trust networks. People do not necessarily place their trust in that which is objectively most trustworthy. As such, badges will need to compete according to many of the existing social norms associated with credentials. Along the way, I still see much hope in progressing toward a growing understanding of and value for competency-based assessment and credentialing, but that is unlikely to be the reason that micro-credentials gain increased trust. Rather, I see more immediate hope and possibility in leveraging the existing social trust within professions or distinct fields.
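For readers who have not seen such metadata, here is a sketch of what a badge assertion can carry, loosely modeled on the Open Badges format. The values are invented for illustration, and the field names should be checked against the actual specification rather than taken as exact.

```python
# A sketch of badge metadata, loosely modeled on the Open Badges
# assertion/BadgeClass structure. All values are invented; consult the
# Open Badges specification for the exact required fields.
badge_assertion = {
    "recipient": "learner@example.edu",
    "issuedOn": "2015-06-01",
    "badge": {
        "name": "Competency: Data Visualization Basics",
        "description": "Demonstrated ability to build and explain a chart.",
        "issuer": {"name": "Example University", "url": "https://example.edu"},
        # The criteria URL states exactly what had to be done to earn this.
        "criteria": "https://example.edu/badges/dataviz/criteria",
    },
    # Optional pointer to the actual work submitted to earn the badge.
    "evidence": "https://example.edu/portfolios/learner/dataviz-project",
}
```

Everything a skeptical reviewer would want (issuer, criteria, evidence) is right there in the credential itself; my point above is that this transparency alone still may not manufacture trust.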

As always, what I write in this blog represents my developing thoughts amid my reading and research. As such, I especially welcome thoughts, additions, challenges, and questions in the comment area.

As learning organizations venture further into the use of learning analytics and data-driven decision-making, I find it increasingly important to consider the danger of simply collecting and analyzing the data that are available or easiest to collect. I will use the example of course evaluations in schools to illustrate my point, largely based on the insights from An Evaluation of Course Evaluations by Stark and Freishtat (which I learned about and located because of this article in the Chronicle of Higher Education). Amid their critique of evaluations, they share the following story.

Three statisticians go hunting. They spot a deer. The first statistician shoots; the shot passes a yard to the left of the deer. The second shoots; the shot passes a yard to the right of the deer. The third one yells, "We got it!" (Stark and Freishtat, p. 4)

As indicated by this story, using averages in data may lead to flawed conclusions. At some point, there is a need to put faces and stories to the data, which calls for more forms of data collection. The problem is that not all data are equally easy to collect. So, we often settle for pre-developed templates, what our analytics software can most easily collect and display, or what we (individually or collectively) can most easily understand. We may establish key performance indicators and identify measures based on what data is available or easiest to collect, analyze, and understand. In doing so, we draw flawed conclusions about how we are doing as an institution. Our numbers look good, so we are making progress. Or, our numbers are down, so we must do what we can to raise them.
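The statisticians' joke translates directly into numbers: an average can look fine while the individual observations tell a very different story. A tiny illustration with made-up ratings:

```python
from statistics import mean, stdev

# Two hypothetical courses with identical average evaluations.
course_a = [3, 3, 3, 3, 3, 3]  # everyone mildly satisfied
course_b = [5, 5, 5, 1, 1, 1]  # half delighted, half alienated
print(mean(course_a), mean(course_b))    # 3 and 3: the dashboard sees no difference
print(stdev(course_a), stdev(course_b))  # 0.0 vs. ~2.19: the stories differ sharply

# The hunters' version: two shots, a yard left and a yard right.
shots = [-1, +1]
print(mean(shots))  # 0 -- "We got it!"
```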

Note the potential flaw in that last statement: if our numbers are down, we must do something to raise them. When we hear something like this, we have signs of a subtle but important shift in an organization. There may be hundreds of ways to increase the numbers so that we seem to be making progress. Yet, not all of these options are equally valuable. Consider a course evaluation where an instructor's overall course evaluations go down one semester. The only obvious change that the instructor can identify from the last term (where ratings were much higher) was that she added the requirement of a weekly learning journal. So, she got rid of the learning journal assignment the next term, and the evaluation averages went back up. Problem solved. Look more closely, though, and find that student performance had actually increased during the term with the lower average evaluation. So, the ratings are now higher, but students are not performing as well on the assessments. The teacher sticks with that strategy, knowing that rank and promotion are partly dependent on course evaluation averages.

Most course evaluations are based upon self-reporting, because that is easy to do. In the scenario from the last paragraph, note that discovering this potential problem would only happen if we collected actual student performance data along with their evaluations. Yet, I am not aware of organizations that do that. It is a more complex task to carry out. So, we settle for the easy route, despite the fact that it may lead us down the wrong path.

Please know that I am not arguing against the benefit of quantitative data in learning organizations. These data sets can indeed open our eyes to important patterns, trends, and relationships. They are quite valuable. Instead, I'm suggesting that we put careful thought and planning into what data we collect and how we collect them, and that we do the hard work of identifying measures that will give us the most complete and accurate picture. We want the complete (or as complete as possible) story. We want to see human faces in the data. This will help us use the data to make decisions that will truly support our organizational mission, vision, values, and goals.

Self-reported data in course evaluations have any number of limitations, as pointed out by Stark and Freishtat. The ratings do not mean the same thing to all students. What one student considers "excellent" may only be "very good" to another student. What one student considers "very challenging" may be "not very challenging" to another. Given this reality, what do the averages tell us?

As Stark and Freishtat explain,

To a great extent, this is what we do with student evaluations of teaching effectiveness. We do not measure teaching effectiveness. We measure what students say, and pretend it's the same thing. We dress up the responses by taking averages to one or two decimal places, and call it a day (p. 6).

In the end, I must confess that I was favorable to Stark and Freishtat’s work because it affirms my own values and convictions. They conclude that a better way of evaluating teacher effectiveness is one that includes observations, narrative feedback, the inclusion of artifacts as evidence of teacher effectiveness, along with insights gleaned from course evaluations (p. 11). This sort of triangulation tells a story. It puts a face on the data. It provides context and something from which a teacher can more readily learn. The problem is that this takes more time and effort. Yet, if we truly want to create key performance indicators for our learning organizations, and we genuinely want to know how we are doing with regard to those indicators, then it requires this type of work. And from another perspective, what example do learning organizations set for students if the people in that organization set up an entire system of measurement based upon cutting corners and doing what is easy and available?

rigour | rigor / rig·or
I. Rigidity of action, interpretation, etc.
* Severity, harshness, and related senses.
1.
a. Harsh inflexibility (in dealing with a person or group of people); severity, sternness; cruelty.
b. An act or instance of harsh inflexibility, severity, or cruelty; a severe or injurious action or proceeding.
2. Hardness of heart; obduracy.
3.
a. Hostility, harshness, or severity (of weather, climate, etc.); extremity of cold; (also) hardship or suffering caused by this.
b. Great hardship or distress.
c. In pl. The requirements, demands, or challenges of a task, activity, etc.

-Oxford English Dictionary Online

Wander the halls of academia…at least the faculty halls, and you may well come across the phrase “academic rigor.” It is used to talk about the importance of maintaining high academic standards, and as a defense against changes or practices that a faculty member does not perceive as accomplishing this goal. If we look in the dictionary, we see a different but potentially enlightening series of definitions. Rigor is about being harsh, inflexible, severe, cruel, stern, hostile, and producing hardship or distress. Out of all the phrases we could use to talk about high academic standards, I find it intriguing that we use “academic rigor.” Nonetheless, one way or another, it has become the phrase of choice in many secondary and higher education institutions.

Interestingly, when I listen to some statements about how to maintain academic rigor in classrooms and schools, many of them strike me as aligning more closely with the definitions provided in the OED. In that spirit, following are ten practices that I've heard affirmed as ways to maintain academic rigor. As you read through them, do they seem to be more about high academic standards or about being harsh, inflexible, severe, cruel, stern, hostile, or producing hardship and distress? You decide. As you read through this list, consider whether these practices are the most effective ways to get as many students as possible to perform at the highest possible level, which I contend is a more humane and ultimately beneficial definition of academic rigor. Also consider whether each of these is really about high academic standards or instead about shaping courses around an educator's comfort, preferences, time, and pre-existing beliefs about education.

1. Celebrate Bell Curves and Tests or Assignments When Not All Students Excel

This is often seen as a good sign. It shows that the test is challenging. Yet, I can easily write a test about simple facts that still achieves such goals. What about an alternative of not settling for failing grades?

2. Provide Less Help to Students

The further up the academic ladder one goes, the more this seems to emerge as a positive practice. Plenty reject it, but I still see teachers who equate helping students with coddling them. We want students to become self-directed and independent, but that doesn't have to happen with a survival-of-the-fittest mentality. What if we instead defined academic rigor as working hard to get as many students as possible as fit as possible? As an advocate for self-directed learning, I see plenty of value in stepping back and giving students room to try things on their own, and if this is a fundamental part of the school philosophy (like at a democratic school), I support it. Yet, this is not the spirit of most schooling settings. We can both empower students to be self-directed learners and be available to help when they ask for it.

3. Be Less Personal and More Professional

Don’t let them see your human side. Don’t try to build rapport or show interest in their lives. That is left for friends and family. Yet, what does that actually have to do with academic rigor? Is seriousness more rigorous than hospitality?

4. Give Feedback When You Give the Grade

In other words, all rich feedback comes only after it is too late to learn from it and improve your performance in the class. This is a sure way to make the grades or assessments in the class about students' pre-existing knowledge and abilities, and not as much about what knowledge and skill teachers can help each student develop during the course.

5. Don’t Be Flexible

Fail them or drastically decrease grades whether they are a minute, hour, or week late. Don’t entertain student requests for alternate approaches to an assignment. Don’t adjust the lessons according to how students are progressing. Keep it strict and uniform. Yet, if we look more closely at this practice, it may well detract from every student reaching the highest possible level of performance.

6. Test on What You Don’t Cover

Give lectures, lead class activities, and assign readings from the textbook(s). Then, when it comes time for the test, don't just test on the important ideas. Tell students that anything in the text(s) and class activities is fair game. What if the tests instead focused on the stated course objectives in the syllabus and the knowledge and skill most important for the students? Shouldn't the most important concepts be weighted the highest and emphasized the most?

7. Give Graded Quizzes to Make Sure Students Read in Advance of Coming to Class

This sounds good, but it really just gives a letter-grade advantage to the students who have the best reading comprehension and can learn the content without the teacher's help or guidance in class. If one insists on quizzes, why not give them after the class, once the students have had a chance to read, learn from the teacher's instruction and mentoring, and review? This seems to promise greater academic performance, and it protects the letter grade from becoming a measure of something other than student learning as a result of the course.

8. Give the Lecture and Content First, but Save All Questions for the End (If There Is Time for Them)

Give the big lectures and present all the content. If there is time, leave room for Q&A at the end. The problem is that students didn't get a chance to ask clarifying questions as the lecture progressed. So, confusion and misunderstandings increased as the content distribution continued.

9. Give Surprise and Trick Questions on the Exams

This is supposed to keep students “on their toes”, but how does it help students learn as much as possible?

10. Do Not Give Second Chances

There are no second drafts or retakes. This ensures that students work as hard as possible the first time. It may well do that, but it also deprives students of the chance to improve their work. In other words, it prevents them from leaving the class with the greater knowledge and skill they would have gained had they kept working, rewriting, and retaking until they reached a higher standard. Would you rather have future doctors who passed medical school with "B"-level knowledge and skill, or doctors who were required to keep working at it until they reached an "A" level of understanding and skill?

I realize that there is room for differences of opinion on these matters, but I also contend that these ten practices are not the best or most certain ways to support academic rigor…at least not the good kind.

  • How many students have failed their last two math quizzes?
  • Which students have missed three or more days of school in the last month?
  • What is our four- or five-year graduation rate? How about our first-year retention rate?
  • Which students are most at risk of dropping out?
  • What percentage of our students are first-generation college students?
  • What factors most lead to student engagement and improved learning?
  • How much class time is "on task" for each student?
  • What is the average cost to recruit a student for a given program?

Ask any question about student learning, motivation, or engagement. Then find data to help answer that question. Now what? What do you do with the data? How will it inform your decisions? This is what people refer to as data-driven decision-making, and it can be wonderfully valuable. However, data alone cannot drive decisions. Decisions are not data-driven. They are driven by mission, vision, values, and goals. We want purpose-driven organizations, not data-driven ones.

Without clarifying one's goals and values, the data are of little value. Or, perhaps even worse, the data lead us to function with a set of values or goals that we do not want. I've seen many organizations embrace the data-driven movement by purchasing new software and tools to collect and analyze data, but they did not first figure out how data would help them achieve their goals and live out their values. I've seen organizations that value a flat and decentralized culture be drawn into a centralized and largely authoritarian structure…because the systems were easier to use that way or it was less expensive. I've seen organizations that value the individual and personal touch abandon those emphases when data analysis tools were purchased. I've also seen organizations spend large sums on analytics software, only for it to go largely unused. These outcomes may not all be bad, but it is wise to recognize how data will influence an organization.

An important part of any organizational plan to collect, analyze, and use data sets is to establish some ground rules, working principles, and key performance indicators. These should reflect the organization's values and mission. Yet, it is easy to set up some key performance indicators over others simply because they are easier to measure, because that is how another organization did it, because they are valued and demanded by external stakeholders, or because a small but influential core wants them. As such, data analysis can lead us away from our mission, vision, values, and goals as much as it can help us achieve or remain faithful to them. The data that we see and analyze have a way of establishing institutional priorities. The data not collected or analyzed cease to have a voice amid such outspoken data sets.

In addition to this, data analysis is not neutral. The methods and technologies associated with it are values-laden. They typically amplify values like efficiency and effectiveness. Few people will disagree that both of those have a role in learning organizations, but not at the expense of other core values. As such, I contend that, alongside key performance indicators, it is wise to establish core value indicators when implementing a data analytics plan. What indicators let us know that our values are visible, strong, and amplified?

In the end, behind any decision is a mission, vision, set of values, and list of goals. Not data. Start with the goals and values. Then ask how data can serve those values and goals…not lead them.