Measuring the Effectiveness and Impact of an eLearning Course


When we’re talking to a new client, we like to learn as much as we can about the training program they currently have in place. A really important part of that discussion is “How’s it working?” And even more important, “How do you know?”

That second question is a tough one to answer. At most organizations, if anyone is bothering to figure out whether a training course is actually improving performance, they're relying largely on anecdotal evidence.

We found some telling statistics on eLearning measurement in a study published by The eLearning Guild. Take a look.

  • 87.5% of organizations participating in the study tracked completions. Good start, but the numbers go down from there.
  • Only 64.7% asked assessment questions to test memory recall.
  • 65.2% tracked learner satisfaction, but…
  • Only 49.1% measured whether the learner felt the training was of value.
  • At best, 28.6% tracked whether learners applied training material in a real-world setting.
  • Only about 15% tracked whether that real-world application was successful.
  • 31.7% monitored changes in performance, while about 20% measured business impact in terms of ROI.
  • 10% tracked nothing. Nothing!

So, roughly 80% of the organizations that participated in the study couldn't answer the two questions we posed at the beginning of this post. Yikes...

Completion and satisfaction data are useful, and we aren't here to question their value, but they offer a very limited view of how successful a course actually is. A course with high satisfaction ratings isn't necessarily effective; it just means the audience liked it.

Strong assessment scores suggest audience members learned a lot from the course, but they don't capture whether people are applying that knowledge on the job. They don't tell us whether there has been sustained behavioral change in the weeks and months following completion.

Prove To The Higher-Ups That Training Works

We feel quite strongly about the importance of robust eLearning measurement, so we set out to build measurement tools into our eLearning software platform, ExpandShare.

We look at measuring course effectiveness as a pyramid that builds on Kirkpatrick's Four Levels of Training Evaluation. Satisfaction and Learning form the base, and we've added three additional areas we feel are important to determining the true success of an eLearning program.

Measuring Confidence in eLearning

Take a step beyond simply measuring Satisfaction. Satisfaction tells us whether an audience enjoyed the course; Confidence digs a little deeper. By measuring Confidence, we uncover whether audience members feel better equipped to do their job (or complete a specific task) as a result of taking the course.

Measuring Behavioral Changes in eLearning

We offer training because we want people to do things a certain way, right? They need to be taught both the steps and the reason behind them. Considering that changing behavior is why we have training in the first place, it's striking to look at the study mentioned above and realize how few organizations actually measure behavioral change.

Simulations and assessments are great for determining whether your audience is getting the idea, but they aren't a surefire indicator that audience members will continue to apply course knowledge in the real world. We need to keep monitoring and testing their application of the course material in the weeks and months following course completion.

Measuring Results in eLearning

Now comes the hard part. The most valuable measure organizations can track is overall impact: is the business seeing results from training? Results are defined differently by different organizations. They could mean sales growth, improved ROI, increased unit sales, decreased equipment downtime, or less employee overtime. It depends on the organization's unique definition of success. But ultimately, if we can't definitively say training efforts are making a difference, we run the risk of wasting resources, and we're not serving our ultimate purpose, either.