4 Ways to Better Measure Corporate Training Results

I think results come out in lots of different ways, and some of them you measure, and some of them you feel.

In the January issue of TD magazine, SAP CEO Bill McDermott makes the point that training results aren’t always numbers-driven. I’ve seen this first-hand.

An India-based colleague has spent the last several years holding monthly training sessions focused on our company values and discussing “soft” topics such as teamwork and collaboration. When I dropped by one of these training sessions last month, one of her trainees commented: “In other organizations people try to pull other people down. Our organization is unique in that everybody tries to help each other and boost each other’s performance.”

Sometimes you can feel the results of a training program. But as I mentioned in Monday’s post, companies around the world spend over $75 billion (with a b!) annually on training and have no idea whether their efforts have produced any results. This isn’t good.

If you want to show other people (your boss, for example) that your training efforts don’t just feel good but have made a measurable difference, here are four ways to do that:

1. Make sure you ask what should be different as a result of the training. This one may sound like a no-brainer, but you’d be surprised at how many times training is planned and executed without specifically identifying what should be done differently or better as a result.

2. Pay some attention to Kirkpatrick’s Four Levels of Evaluation… About 60 years ago, Donald Kirkpatrick proposed four “levels” of evaluation to help training practitioners begin to quantify their results. First come post-training evaluation scores (“smile sheets”), then learning (most often through pre/post testing), then skill transfer on the job (maybe a self-reported survey, or a survey from a trainee’s supervisor) and finally impact (did sales increase? did on-the-job safety accidents decrease?). Levels 1 and 2 are most common, but trainers and organizations can certainly strengthen their Level 3 and 4 efforts.

3. …and then go beyond Kirkpatrick. According to a research paper entitled The Science of Training and Development in Organizations, Kirkpatrick’s Four Levels can be a helpful model, but there is evidence to suggest it is not the be-all and end-all that training professionals have pinned their evaluation hopes on. The authors of that paper offer the following example as a specific way to tailor the measurement of a training program’s success or failure:

“If, as an example, the training is related to product features of cell phones for call center representatives, the intended outcome and hence the actual measure should look different depending on whether the goal of the training is to have trainees list features by phone or have a ‘mental model’ that allows them to generate recommendations for phones given customers’ statements of what they need in a new phone. It is likely that a generic evaluation (e.g., a multiple-choice test) will not show change due to training, whereas a more precise evaluation measure, tailored to the training content, might.”

4. Continue to boost retention while collecting knowledge and performance data. Cognitive scientist Art Kohn offers a model he calls 2/2/2. This is a strategy to boost learner retention of content following a training program. Two days after a training program, send a few questions about the content to the learners (this can give data on how much they still remember days after having left your training program). Two weeks later, send a few short answer questions (again, this helps keep your content fresh in their minds and it gives you a data point on how much they’ve been able to retain). Finally, two months after the training program, ask a few questions about how your content has been applied on the job (which offers data on the training’s impact).
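If it helps to make the 2/2/2 cadence concrete, here is a minimal sketch (in Python, with hypothetical function and field names of my own) of how the three follow-up dates could be generated from a session date:

```python
from datetime import date, timedelta

def two_two_two_schedule(training_date: date) -> dict:
    """Return follow-up dates based on Art Kohn's 2/2/2 model.

    Hypothetical helper: the intervals are two days, two weeks,
    and (approximately) two months after the training session.
    """
    return {
        "day_2_recall_questions": training_date + timedelta(days=2),      # quick memory check
        "week_2_short_answer": training_date + timedelta(weeks=2),        # keeps content fresh, adds a data point
        "month_2_application_check": training_date + timedelta(days=60),  # ~2 months: how has it been applied on the job?
    }

# Example: a session delivered on March 3, 2025
for touchpoint, due_date in two_two_two_schedule(date(2025, 3, 3)).items():
    print(touchpoint, due_date.isoformat())
```

Whatever tool you use, the point is that the three touchpoints get scheduled the moment the training ends, rather than remembered (or forgotten) later.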

If companies are spending billions of dollars on training without ever knowing whether those efforts were effective, there’s a problem. Spending a few hours thinking through your evaluation strategy before deploying your next training program can make your efforts worth your time.

 

Learning and Development Thought Leader: Will Thalheimer

In this age of social media, where anyone with a computer and an Internet connection can post something online and proclaim themselves a “thought leader,” it can be difficult to find the true leaders in the industry.

This is the first in a new, periodic series from the Train Like A Champion blog that will highlight L&D professionals who have proven effective in moving the industry to better results and higher performance.

Thought Leader #1: Dr. Will Thalheimer

Will Thalheimer leads Work-Learning Research and is simply on a quest to cut through all the noise and questionable research that’s out there in order to help L&D professionals be aware of evidence-based practices and well-conducted research.

He has the gall to question the effectiveness of Kirkpatrick’s 4 levels of evaluation (and the research to back it up). And don’t even get him started on learning styles.

If you have a sliver of interest in the research behind what truly works in training and presentations, you should be reading his blog, Will At Work Learning.

Two Resources from Dr. Thalheimer that You Should Check Out ASAP:

Research Study: While there’s a lot of good stuff on there, one blog post I found particularly helpful revolved around a study titled The Science of Training & Development: What Matters in Practice. In my day job, I work with a lot of medical professionals who insist on the science behind things. While facilitation is indeed an art form, having research-based best practices lends necessary credibility to conversations about why lecture and didactic delivery of content isn’t effective.

Slide Design: I’ve never liked the idea of slide templates. I never had a very good argument against them until I watched this 10-minute video:

In Sum:

I could write a lot more about Dr. Thalheimer and why he’s someone you should be listening to. But then those would just be my words. And the Train Like A Champion blog hasn’t (yet) declared me a thought leader in the L&D field. So, check out some of these resources and discover for yourself why you should be paying attention to his work.

 

Start Worrying (A Lot) More About Level 1

I generally consider Level 1 evaluation forms to be a waste of time and energy, so when I read this week’s post on Todd Hudon’s The Lean CLO Blog, Stop Worrying About Level 1, I cheered and said YES! And…

Todd’s point is right on. The most valuable learning experiences are generally uncomfortable moments, and they often don’t even happen in the training room. Even in the training room, trainers can often tell by observing their audience’s behavior (not by using an evaluation form) when participants are engaged.

The best argument I can think of for Level 1 feedback is that it provides institutional memory. What happens if you – the rock star training facilitator of the organization – win the lottery and retire to your own private island in the Caribbean tomorrow? Or perhaps something more likely happens – you need to deliver the same presentation a year from now. Will you be able to remember the highlights (and the sections of your lesson that need to be changed)?

This point was brought home to me earlier this week when a co-worker was asked to facilitate a lesson someone else had presented back in the spring. I shared the lesson plan with my co-worker and his first question was: do we have any feedback on this session?

Searching through my files, I realized that my disdain for Level 1 feedback had led me to create a quick, too-general post-training evaluation form for that meeting, and it didn’t yield any useful feedback for this particular session.

In addition to questions about the overall meeting, I should have asked specific questions (both Likert-scale and open-ended) about each session within the meeting. Yes, this makes for a longer evaluation form, but if we’re going to ask learners to take the time to fill out the forms anyway, we may as well get some useful information from them!

I absolutely agree with the idea that the best, most powerful learning experiences happen on the job. And in a world where formal training is still part of our annual professional development, we training professionals need to keep building better and better learning experiences for our audiences, both by noting our own observations of each session and by crafting more effective ways of capturing our learners’ reactions.

What are some questions you’ve found particularly helpful on post-training evaluation forms?

Let me know in the comments section below (and perhaps it will be the subject of a future blog post!).

 

3 Training Lessons from Donald Kirkpatrick (Rest in Peace, Mr. Kirkpatrick)

Earlier in the week, I learned that Donald Kirkpatrick passed away.


Like many others in the learning and development space, he made a life-changing impact on how I view my work. Here are three simple yet profound ways in which his work influenced mine:

1. Smile sheets don’t justify a giant ego (if the feedback is good), nor are they the end of the world (if it’s bad). I first landed a job with “training” in its title about 8 years ago, and the way I measured my work was through the end-of-training evaluation forms. I viewed them as my report card. Great evaluation scores would have me on top of the world for the rest of the afternoon. Poor scores were absolutely devastating.

I don’t remember where I first heard of Kirkpatrick’s 4 levels of training evaluation – perhaps on an ASTD discussion board, perhaps in T+D magazine. When I learned that post-training evaluation scores were the bottom rung of training evaluation, I felt liberated… until I realized that I had to actually be intentional about how to ensure people learned something, that they could use new skills on the job, and that they could improve their performance in a measurable way. It was a completely different outlook on how to approach my work. A new and exciting challenge that wasn’t limited to the whims of post-training smile sheets.

2. Training should actually be transferred. Kirkpatrick’s 3rd level, “transfer,” had perhaps the most profound impact on my work. It’s a level that I, and I’m sure many of my colleagues (yes, even you, dear reader), continue to struggle with and do poorly. After all, once people leave the training room, what control do we have over whether or how they choose to apply our brilliant lessons and content on the job?

It’s the simple act of being consciously aware that transfer of learning to the job is the Holy Grail of training design and evaluation that transforms training sessions from being presenter-centered to learner-centered. And while it’s extremely difficult to measure transfer, top-notch trainers will always strive for efficacy at this level.

3. Bottom line: it’s a process, not an event. The first two items above naturally lead to this point: if training is about more than high evaluation scores, if training is about transfer to the job and the subsequent results that transfer can yield, then training must be a process, not a 1-hour or 1-day or 1-week event.

Aiming for the highest levels of Kirkpatrick’s evaluation model has inspired me to figure out what kinds of job aids might live beyond the training room, what kinds of communication with supervisors are necessary to forge partnerships with those who have more influence over whether learning is transferred to the job, and what kinds of longer-term evaluation instruments need to be integrated into the training’s design.

We spend more waking hours at work than we do with our families. When we learn how to be better at work, it can improve our quality of life. These three lessons made me a better training professional, and in turn improved my quality of life.

Though he’s no longer with us physically, I believe his legacy will continue to transform training programs for generations to come. Thank you, Donald Kirkpatrick, for sharing your talents and your work.

Evaluating Training Efforts

Moments ago, I cleared security at London’s Heathrow Airport. As I was re-packing my bag with my laptop and iPad, I noticed this little machine.

Security Feedback

I tapped the super-smiley button.

I wonder what they do with the feedback. I’m guessing they use it the way many training professionals use similar feedback: if the percentage of super-smiley feedback is high, they probably advertise it internally, and perhaps even externally, to demonstrate a job well done.

Level 1 Feedback Limitations

The limitations of the “smile sheet” evaluation form are many. All it can really tell us is whether someone enjoyed their experience or not. Low scores should be a concern, but high scores don’t necessarily mean value was added. This sort of quantitative feedback can’t tell us why someone might give a low score. In the example at Heathrow Airport, I could hit the grumpy face, but it doesn’t help any of their supervisory or training staff improve anything. Did I hit the grumpy face because I had a terrible interaction with the security staff? Did I hit the grumpy face because I was pulled aside for random, extra screening? Did I hit the grumpy face because a group of other passengers claimed to be rushing to a flight departing in 10 minutes and were allowed to cut the lengthy security line while the rest of us waited patiently?

The Question Matters

I know many organizations – large and small – that measure training success by post-training evaluation scores. I understand the reason: as training professionals we want some type of metric that can demonstrate our value. But the minute someone starts asking “tough” questions like “What value is a 4.3 out of 5 adding to our company’s bottom line?”, the smile sheet metric can quickly lose its luster.

I wonder if the Heathrow staff would get more useful data if they changed their question.  Some ideas that came to mind include:

  • How did today’s security experience compare to previous experiences?
  • Will your flight be safer because of today’s security check?
  • Were you treated with respect and dignity during today’s security screening?

The list could go on. I understand that in order to have people participate, they’re limited to one question, and it needs to be simple. But “How was your security experience today?” depends on so many variables.

When it comes to post-training evaluation forms, I try to limit the number of questions I ask to three per module/topic:

  • This session will help me do my job better
  • The facilitator was an expert in this topic
  • The facilitator presented the topic in a way that kept me curious and interested

Depending on what I want to get out of the feedback, I may also ask a question about whether or not each learning objective was accomplished.  At the end of the evaluation form, I’ll also include these two questions:

  • I felt engaged and participated throughout the day
  • I felt my fellow attendees were engaged and participated throughout the day

Again, these types of “Level 1” evaluation forms just take a snapshot of participants’ subjective feelings about how things went. Including blank text boxes for attendees to write additional comments can add some clues as to why they gave certain scores, but ultimately the value of training initiatives should be measured by specific behavior changes or performance improvements. Those types of measurements require additional feedback down the road – from attendees and ideally their supervisors.

Nonetheless, evaluation forms like this can begin to offer a hint of the value that training adds… if the questions are crafted well.
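To show how lightweight this kind of analysis can be, here is a minimal sketch (in Python, with hypothetical question keys and an arbitrary flag threshold of my own choosing) for averaging Likert-scale responses per question and flagging the questions whose open-ended comments deserve a closer read:

```python
from statistics import mean

# Hypothetical Level 1 responses: one dict per attendee, question -> 1-5 Likert score
responses = [
    {"helps_me_do_my_job": 5, "facilitator_expertise": 4, "kept_me_curious": 4},
    {"helps_me_do_my_job": 3, "facilitator_expertise": 5, "kept_me_curious": 2},
    {"helps_me_do_my_job": 4, "facilitator_expertise": 4, "kept_me_curious": 3},
]

def summarize(responses, flag_below=3.5):
    """Average each Likert-scale question and flag low-scoring questions for follow-up."""
    questions = responses[0].keys()
    averages = {q: round(mean(r[q] for r in responses), 2) for q in questions}
    flagged = [q for q, avg in averages.items() if avg < flag_below]  # threshold is arbitrary
    return averages, flagged

averages, flagged = summarize(responses)
print("Average score per question:", averages)
print("Read the open-ended comments for:", flagged)
```

The numbers alone still only tell you how people felt; the value comes from pairing a flagged question with the comments that explain it.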

What questions have I missed that you’re asking your attendees? If you create elearning, would you ask anything different in the post-module evaluation?


Training Tips: Giving Feedback to an Aspiring Facilitator

Recently I was asked to give feedback to someone who wasn’t quite ready to represent the organization as a facilitator in front of a live audience.  To prepare for this conversation, I developed the following form to observe this person, take notes and organize my thoughts.

Facilitator Feedback Form

Click here for a PDF version of this observation form.

Feedback, especially when it’s critical of someone’s performance, is tough to give. Being prepared with specific observations can help lead to a more constructive conversation.

What would you add or change about this observation form?

The Train Like A Champion blog is published on Mondays, Wednesdays and Fridays.  These brief “Training Tip” posts are a series of quick reference tips that are published while your beloved Train Like A Champion blogger is currently enjoying a little vacation.  The more in-depth posts will resume again in August.

How Do You Know If Your Training Is Effective?

“How do you know if your training actually has an impact?”  It’s a question I hear often, especially regarding soft skills training.  It all starts with a needs assessment.  When I’ve led teams, the easiest way I’ve found to assess needs, recommend training and then measure results is through a professional development plan (PDP).  If training isn’t tied to a need, if it’s not written down and if an employee isn’t held accountable for improved performance, the impact of the training will not be fully realized.

Here is a generic version of a professional development plan based upon one that I’ve found to be quite effective.


I like this PDP format because it illustrates how metrics and key performance indicators should be directly tied to soft skills.  Metrics and numbers tell a story, but what is that story?  Are performance numbers down because of team dynamics and dysfunction?  Then perhaps a focus on teambuilding skills would be appropriate.  Are quarterly results suffering because team members haven’t established the correct priorities?  Perhaps time management is an area that needs to be improved.

Identifying baseline performance metrics, selecting learning opportunities (training on hard or soft skills) that should move those metrics, and then monitoring the results to see whether those metrics actually improve is how I can feel confident that training is effective.
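As a back-of-the-envelope illustration (the metric names and numbers below are invented, not taken from any actual PDP), here is how a baseline-versus-follow-up comparison might look once the metrics are written down:

```python
# Hypothetical example: baseline metrics captured in the PDP before training,
# compared against the same metrics a quarter after the training.
baseline = {"on_time_delivery_pct": 78.0, "rework_hours_per_week": 12.5}
follow_up = {"on_time_delivery_pct": 86.0, "rework_hours_per_week": 9.0}

# For each metric, note whether a higher number is the desired direction.
higher_is_better = {"on_time_delivery_pct": True, "rework_hours_per_week": False}

for metric, before in baseline.items():
    after = follow_up[metric]
    improved = (after > before) if higher_is_better[metric] else (after < before)
    print(f"{metric}: {before} -> {after} ({'improved' if improved else 'no improvement'})")
```

Improvement in a metric doesn’t prove the training caused it, but writing the baseline down before the training is what makes the conversation about impact possible at all.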

I’ll write it again: if training isn’t formally tied to a need, its full effectiveness will not be felt.
