Tired of presenters who miss the mark? Show them what they’ll be evaluated on!

Last week I sat with a colleague, walking through her line-up of speakers for an upcoming conference. She asked if I had any suggestions to help the presenters deliver more effective presentations.

It’s an age-old, intractable question. Do conference speakers (or consultants who may come into your organization to train your staff on one specific topic) really care?

Post-training evaluation data is nice… but what should we do with it?

A while back I wrote about 8 transferable lessons from my Fitbit that I’ve applied to my L&D practice. As part of that post, I complained that the Fitbit sometimes gave me data, but I couldn’t do anything with it. Specifically, I was talking about my sleep pattern.

A typical night could look like this:

[Image: Fitbit sleep log]

FORTY-ONE TIMES RESTLESS! That’s a lot of restlessness. It’s not good. But what am I supposed to do about it? It reminded me of my post-training evaluation scores.

Sometimes learners would give my sessions an average of 4.2. And sometimes those same learners would give a colleague’s presentation an average of 4.1 or 4.3 (even though I knew in my heart of hearts that my presentation was more engaging!!). But what could I do with these post-training evaluation scores? I’ll come back to this point in a minute.

As for my restlessness, my wife suggested something and suddenly my Fitbit sleep tracker looked a lot different.

Want to improve your organization’s training? Some people may be suspicious of your intent.


Readers of the Train Like a Champion blog will not be surprised that I am smitten with Will Thalheimer’s new book, Performance-focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form.

You can read a review of why every training professional should read this book here, and you can see several examples of how I integrated concepts from the book by having my own post-training evaluation forms undergo an extreme makeover here.

It just makes sense. Better post-evaluation questions lead to better analysis of the value of a training program, right? So it was with some surprise that I was pulled aside recently and asked to explain all the changes I’d made to our evaluation forms.

Extreme Make-over: Smile Sheet Edition


A few weeks ago I finished reading Will Thalheimer’s book, Performance-focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form (here’s my brief review of the book).

A colleague recently made fun of me, suggesting that I read “geeky books” in my spare time. That would be true if I just read books about smile sheets for fun. And while I did have fun reading this book (so I guess I am kind of geeky), I’ve been attempting to integrate lessons learned from the book into my work.

Following are two examples of improvements I’ve made to existing smile sheets, and the logic behind the changes (based upon my interpretation of the book):

Book Review: Will Thalheimer’s Performance-focused Smile Sheets

[Image: Performance-focused Smile Sheets book cover]

102-word Summary: “The ideas in this book are freakin’ revolutionary.” So Will Thalheimer begins chapter 9 of his book. It’s hard to argue against the statement. In a world where the vast majority of training is evaluated on a 1-5 Likert-style post-training evaluation form, Will Thalheimer proposes a different way to perform a basic-level assessment of a training program. His thesis: while “smile sheets” aren’t the be-all and end-all of training evaluation, they’re the most common type of evaluation, so if we’re going to have our learners fill them out, we may as well get some good, useful, actionable information from them.

The Case for Net Promoter Score as a Measure of Presentation Effectiveness

When it comes to post-training evaluation forms, the rule of thumb to which I’ve adhered is: high scores may not guarantee learning happened, but low scores often guarantee learning didn’t happen.

For years I’ve tabulated and delivered feedback for countless sessions that have received Likert-scale scores well above 4 (on a 5-point scale), yet I knew deep down that some of these presentations weren’t as effective as they could be. How can we tell if the presentation was engaging and effective if the post-training evaluation scores are always high?

Several weeks ago I attended a Training Magazine-sponsored webinar on training metrics (view the recording here) and I was introduced to the idea of Net Promoter Score as a way to evaluate presentations. After some deliberation, my colleagues and I decided to test this concept out during a recent 2-day meeting. We added one question to our evaluation forms: Would you recommend this session to a colleague?
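If you haven’t worked with Net Promoter Score before, here’s a minimal sketch of how the standard calculation turns a single recommendation question into a score with a ceiling of 100. It assumes the textbook 0-10 “likelihood to recommend” scale, with 9-10 counted as promoters and 0-6 as detractors; our own form wording and cut-offs may have differed, so treat this as illustrative rather than a record of exactly what we did.

# A minimal sketch of the standard Net Promoter Score calculation (Python),
# assuming the usual 0-10 "How likely are you to recommend...?" scale.
# The thresholds below are the textbook defaults, not necessarily the exact
# rules behind our evaluation forms.

def net_promoter_score(ratings):
    """Return an NPS between -100 and 100 for a list of 0-10 ratings."""
    promoters = sum(1 for r in ratings if r >= 9)   # 9-10: would enthusiastically recommend
    detractors = sum(1 for r in ratings if r <= 6)  # 0-6: would not recommend
    return round(100 * (promoters - detractors) / len(ratings))

# A room full of "pretty good" ratings (7s and 8s) produces a modest NPS,
# which is how a session can average above 4 on a Likert scale and still
# post a Net Promoter Score in the single digits.
sample_ratings = [9, 8, 7, 8, 10, 7, 6, 8, 9, 7]
print(net_promoter_score(sample_ratings))  # prints 20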

Following are the scores from our traditional question about whether people were exposed to new insights, information or ideas on a scale of 1-5 (5 representing “Strongly Agree”):

[Image: Likert-scale averages by session]

Not too shabby. Apparently we were exposing people to new insights, information and ideas! So that’s good, right? Who cares whether presenters were engaging or boring, stuck to their lesson plans or went off script? All the scores averaged between 4 and 5. Yay for us!

Then we took a look at the same sessions through the lens of Net Promoter Score, and this is what we found:

[Image: Net Promoter Scores for the same sessions]

These scores showed some variance, but they didn’t tell much of a story until we put the two sets of scores side by side:

[Image: Likert averages and Net Promoter Scores side by side]

People may have been exposed to some new ideas or insights in each session, but would they put their own reputation on the line and recommend the session to any of their colleagues? It depends. There’s a huge difference in Net Promoter Score between presentations whose Likert averages topped 4.5 and presentations that drew an average of 4.2.

Here are three reasons why I think this matters:

1. A Wake-up Call. In the past, someone could walk away from a meeting with a score of 4.13 and think to himself: “Well, I wasn’t as sharp as I could have been, but people still liked that session, so I don’t really need to work to improve my delivery… and furthermore, who cares about all these adult learning principles that people keep telling me I need to include?!”

However, if that same presenter sees a Net Promoter Score of 6 or 19 or 31 (with a high score potential of 100), the reaction is very different. People suddenly seem a little more interested in tightening up their next presentation – rehearsing a little more seriously, having instructions for activities down cold, sticking more closely to their lesson plans.

2. Before On-the-Job Application, People Need To Remember Your Content. Some L&D practitioners care less about whether a presentation was engaging, instead being wholly focused on whether or not someone actually does something new or differently or better on the job. To this, I say: “Yes, and…”

Yes, better performance is the ultimate goal for most training programs.

And, you can’t do something new or differently or better if you weren’t paying attention to the presentation. You can’t do something better if you’ve forgotten what you learned by the time you go to bed that evening. While better job performance matters, the presenter plays a big role in whether or not people remember the content and are excited to use it when they return to their offices.

3. Marketing Matters. The principal objection to Net Promoter Score as a training evaluation tool, articulated very well in this post from Dr. Will Thalheimer, is that it is designed for marketing, not for training effectiveness. I would argue that L&D professionals must have some sort of marketing chops in order to generate interest in their programs. After all, Dr. Thalheimer also cited a significant body of research that “found that one misdirected comment by a team leader can wipe out the full effects of a training program.” If influential people wouldn’t recommend your presentation, research shows that you have a problem.

What do you think? Is Net Promoter Score something you’ve used (or are thinking about using)? Or is it a misguided metric, not suitable for L&D efforts?

Too much interaction, not enough lecture? Impossible! Or is it?


A little introduction to the topic. Here are a few discussion prompts. Break into small groups with table facilitators to guide the conversation. Large group de-brief. No bullet-pointed PowerPoint slides. Heck, no slides at all! This is a textbook example of well-designed training built upon a strong foundation of adult learning, right?

Not so fast.

Earlier this week I had an opportunity to attend a 60-minute session on the topic of measuring training impact. Training that has a measurable impact – it’s the holy grail of the learning and development profession, right? Sign me up. In fact, sign my colleagues up too! I dragged a colleague to this workshop as well. We need to learn as much as we can on this topic because we certainly haven’t found a consistent way to crack this nut.

During the session, a facilitator framed the topic, then turned us loose in small groups to discuss it. In my own small group, I felt I was able to offer brilliant insights into the challenges we face when trying to isolate training as a reason for improved business results. I took a look around the room and everyone was engaged. The room was abuzz.

Toward the end, each small group reported their insights. Time expired, a little end-of-session networking took place, and then we all went our separate ways. It was fun.

Later, I reached out to my colleague who attended and asked about her take-aways. She said: “I don’t know that I took away any new/better way to measure training. How about you?”

The truth was, I didn’t have any concrete take-aways either. I was kind of hoping my colleague was going to mention something that I somehow missed.

Last week, during a #chat2lrn Twitter chat, Patti Shank took a lot of flak (including from me) when she wrote this:

When I reflected on the training experience I had this week, Patti’s words suddenly resonated with me. This training was ultra-engaging. And yet my colleague and I left without being able to do something new or differently or better.

Perhaps there should have been a more vigorous de-brief. Perhaps there should have been more instructor-led content, maybe even <gasp> lecture – either before or after the small group discussions.

I may not have new ways to measure the impact of my training initiatives, but I did come away from this experience with three concrete take-aways:

  1. Sometimes, lecture isn’t completely evil.
  2. Sometimes, too many discussion-based activities can be counter-productive.
  3. Reflection is an essential habit following a learning experience. Even when concrete take-aways from the topic at hand prove to be elusive, learning can still happen.

And you? What kinds of things have you learned unexpectedly, even when the actual topic of a training session didn’t quite deliver for you? Leave your thoughts in the comments section.