Level 1 evaluation: gauging the learner’s reaction to training.
Why is this such a tough thing to get right?
My world used to come crashing down when participants would score me anything under a 4 on a 5-point scale.
Then I started reading up on post-training evaluation forms and learned that high scores show little to no correlation with on-the-job performance, so I stopped doing them altogether.
Then my boss insisted I use evaluation forms, so I tried to make them more meaningful. I added a Net Promoter Score component. I read Will Thalheimer’s excellent book on the subject and tried to use every sample question he offered. Participants revolted at an evaluation form that took 20 minutes to complete.
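For readers unfamiliar with Net Promoter Score: it reduces a 0–10 “How likely are you to recommend…?” question to a single number, the percentage of promoters (9–10) minus the percentage of detractors (0–6). A minimal sketch of that calculation (the function name and sample scores are my own illustration, not from any survey tool):

```python
def net_promoter_score(scores):
    """Compute NPS from 0-10 likelihood-to-recommend ratings:
    % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Made-up example: 5 promoters, 3 passives (7-8), 2 detractors
# out of 10 responses -> 50% - 20% = an NPS of 30.
nps = net_promoter_score([10, 9, 10, 9, 9, 8, 7, 8, 5, 6])
```

Note that passives (7–8) count in the denominator but cancel out of the numerator, which is why a room full of “pretty satisfied” 7s and 8s yields an NPS of zero.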
A few weeks ago I reached deep into my bag of tricks and tried something new.
I asked myself: what data do I really want (or need) from the evaluation form?
It was a train the trainer session, so I wanted to know if people felt confident delivering the material after the session. I wanted to know what questions they still had. I wanted to know what they found most valuable. And I wanted to know if there’s anything else I could do in the future to help other learners grow in their confidence to deliver this training.
That’s all I wanted to know. So I created an evaluation form with four questions:
- After completing this program, I feel more confident in my ability to deliver a high-quality training experience. (5-point scale, with 1 being Strongly Disagree and 5 being Strongly Agree)
- What questions do you still have about this training program? (Open-ended)
- The most valuable part about this training program was… (Open-ended)
- One thing that could be included in this training program to help me feel more confident in my ability to train would be… (Open-ended)
The feedback from question 1 was overwhelmingly positive: almost everyone gave 4s and 5s. That felt good, but again, high scores don’t necessarily correlate with on-the-job transfer of skills.
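If you want to report a result like this as more than “mostly 4s and 5s,” a common summary for agreement scales is the top-two-box share: the percentage of responses at 4 or 5. A small sketch, using made-up responses rather than my actual data:

```python
from collections import Counter

def summarize_likert(responses):
    """Tally a 5-point Likert question and return the counts plus the
    top-two-box share (percentage of responses that are 4 or 5)."""
    counts = Counter(responses)
    top_two = 100 * (counts[4] + counts[5]) / len(responses)
    return counts, top_two

# Hypothetical class of 10 participants:
counts, top_two = summarize_likert([5, 4, 5, 4, 4, 3, 5, 4, 5, 2])
```

The counts preserve the full distribution (so one unhappy 1 doesn’t vanish into an average), while the top-two-box number gives you a single figure to track from session to session.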
Most of the questions people still had were about product knowledge: to feel adequately confident training others, the trainers wanted deeper product knowledge themselves. We had already gauged this during the session, but the feedback affirmed our observations. We’ve since increased the amount of time people will spend on product knowledge going forward.
What was the most valuable part? The answers to this question actually surprised us. Although participants everywhere complain about role plays, the overwhelming response was that the role plays were the most valuable part of this training program. This affirmed the design decisions we made in crafting the course.
What else could be included to help participants feel more confident? Again, the answer was a heavier focus on product knowledge. The theme surfaced in two separate open-ended questions, so it clearly weighed on our participants’ minds. As mentioned above, we’ve since revised the training, trimming time from certain areas to allocate more to product knowledge.
Four questions. Actionable data. I think I may have found something that’ll work for me going forward.
How about you? What have you found to be the most effective way to collect actionable feedback after a training session?