Measuring the Impact of Educational Products & Services

As educational leaders strive to address challenges and embrace opportunities in contemporary education, they more often than not turn to products and services from education companies: textbook publishers and curriculum providers, software and hardware products, testing services, providers of external standards, packaged educational programs, trademarked models and frameworks, etc.

How do leaders make decisions about all these things? Here are some of the ways that people do it, although there are many others, and often a combination of approaches shapes the decision.

1. Sift the products and services through a home-grown strainer, rooted in certain local realities and core values: low cost, ease of implementation, ease of understanding, resonance with common sense or an intuition about what is best, etc.

2. Trust the judgement of one or more experts. Leverage a consultant. Tap into the “experts” within the organization. Either way, this approach relies on an expert or an expert group to make the choice.

3. Get feedback from one or more stakeholders. Go with what people want, like or think would work well.

4. Follow your gut…or heart.

5. Go with the “trusted brand.” Some like and trust Apple or Google. Some feel the same about other brands.

6. Gather recommendations and references. Call around. Talk to others who have used the product or service. How did they like it? How did it work for them?

7. Search for case studies and other research on the efficacy of the product or service.

8. Follow the trends. Go with what seems to be popular and promising.

I’ve seen each of these be the driving force in decisions about educational products and services. Sometimes the process is careful and systematic. Other times it seems quick and spontaneous. And I’ve seen them all in massive learning organizations with large budgets as well as the smallest schools running on a shoestring. Much of it seems to come down to the beliefs and convictions of the educational leaders.

At the end of 2013, Nesta (“an innovation charity with a mission to help people and organizations bring great ideas to life”) published a proposed set of standards for evaluating products and services. Or rather, these are standards that companies selling innovations and interventions could be expected to use to evaluate their own products and services, providing the consumer with useful data as part of the decision-making process. Nesta set up a five-level standard for evaluating educational innovations or interventions.

Level 1 – “You can describe what you do and why it matters, logically, coherently and convincingly.” – Note that an intervention at this level does not have robust data to support it, but the provider at least has a clear explanation of the intervention and some potential benefits that it offers.

Level 2 – “You capture data that shows positive change, but you cannot confirm you caused this.” This level takes the step of collecting and analyzing data in contexts where the intervention is used, but those contexts lack the controls to claim, with certainty, that the intervention was the sole cause of the positive outcomes.

Level 3 – “You can demonstrate causality using a control or comparison group.” This adds more confidence that the intervention is indeed the cause of the positive outcome.

Level 4 – “You have one + independent replication evaluations that confirms these conclusions.” This adds a level of confidence in the results by adding one or more neutral parties that have replicated the studies and confirmed the initial findings.

Level 5 – “You have manuals, systems and procedures to ensure consistent replication and positive impact.” This assumes the previous levels but also adds guides to help ensure that consumers use the intervention in a way that is likely to produce similar positive results.

Of course, those of us in education know that context is an amazingly important part of interventions, and one that causes many troubles for educational researchers hoping for generalizable findings that hold across diverse contexts and over an extended period of time. If learner demographics were static, things might be easier, but this is not the case. The solution offered in this approach, however, is to standardize practice with the intervention: remove as many potentially confounding factors as possible that might jeopardize the study.

I see some use for a model like this, providing guidance in the decision-making process around products, but I would be cautious about using it as the sole decision-making tool. Level five, by design, rewards interventions that are more easily systematized, standardized, and controlled. There are certainly instances in education where something like this might be desirable, but it represents only one philosophy of education: a science of education that values measurable results, efficiency, and standardization. It does not, however, put out the welcome mat for alternate philosophies of education that, for example, value engagement over assessment, self-direction over direct instruction, or the intangibles of some messy learning above assembly-line educational efficiencies. Of course, as with many things, these need not be such polar opposites. There is a middle ground in these dichotomies.

This set of standards is a technology that has winners and losers, as is true of all technologies. Large companies with massive budgets for research and testing are the winners. Startups with limited funding are the losers. It positions companies like Pearson to move closer to a monopoly in certain markets, and it insulates them from the disruptions of smaller companies with promising products. It also potentially sets up a standard that could be embraced by government and private grant-providers, raising the bar for who and what can gain the money needed to scale. At the same time, consumers may be winners if it drives education companies to provide more meaningful impact data about their products and services rather than relying heavily on marketing and sales tactics.

What do you think? Would you value an education business ecosystem that rated products and services according to these five levels?


About Bernard Bull

Dr. Bernard Bull is an author, host of the MoonshotEdu Show, professor of education, AVP of Academics, and Chief Innovation Officer. Some of his books include Missional Moonshots: Insights and Inspiration for Educational Innovation; What Really Matters: Ten Critical Issues in Contemporary Education; The Pedagogy of Faith (editor); and Adventures in Self-Directed Learning. He is passionate about futures in education, educational innovation, alternative education, and nurturing agency and curiosity.

One thought on “Measuring the Impact of Educational Products & Services”

  1. Gail Holzer

    Judging from the number of times administrators are asked by other administrators who are ‘in the market’ about the texts, materials, and technology applications they are using, this would be a valuable tool, providing a systematic approach to choosing products appropriate for their schools.
