
How organizations evaluate training programs – thesis at Chalmers University of Technology in collaboration with Knowly

By:
Carl-Adam Hellqvist
Co-founder, Knowly

How do organizations evaluate their training?

In 2022, Jack Ahlkvist and David Larsson set out to answer that question for their master’s thesis in Learning and Leadership at Chalmers University of Technology, in collaboration with Knowly.

In the study, 17 organizations (all of them Knowly clients) were asked to describe how they currently evaluate their training programs. Their responses were then compared to the more detailed data available for the organizations using Knowly as their primary evaluation tool.

The study was based on an established evaluation framework, Kirkpatrick’s Four Levels of Training Evaluation. More on that in a moment.

The results in short

  • Participant satisfaction (Kirkpatrick level 1) was the most commonly evaluated factor (87%) among the organizations polled.
  • Learning/knowledge transfer (level 2) and new behaviors connected to the training programs (level 3) were evaluated by 50% and 63% of the organizations, respectively.
  • Only 6% of the organizations that participated in the study said they measure the link between training and business results (level 4).

Jack Ahlkvist has this to say about the results:

“The results were expected, but modern studies on evaluation of training are in short supply. Therefore, our results, along with other studies in the same field, offer a unique basis on which to discuss how corporate evaluation practices impact what kind of training is being conducted.”
Jack Ahlkvist (left) says that while the results of the study were expected, they still form an important basis for future studies.

On Kirkpatrick’s Four Levels of Training Evaluation

The best way to get a standardized measure of how the participating organizations evaluate their training was to compare their evaluation methods to an established framework.

Among the frameworks available, the Kirkpatrick model has become somewhat of an industry standard, which made it the best fit. In short, the model describes four levels of training evaluation, where the lower levels are more closely linked to the training itself, while the higher levels focus more on the results of the training.

Level 1: Reaction

This level evaluates how the participants felt during the training program and what they thought about it. Classic questions along the lines of “On a scale of 1–10, how happy are you with the training?” belong in this category.

Level 2: Learning

The second level evaluates what knowledge the participants have actually gained. Tests and quizzes at the end of training or shortly after fall under this category.

Level 3: Behavior/transfer

Level three measures whether the training has actually caused participants to change their behavior. At this point, questionnaires become less reliable as the sole basis of evaluation, but self-scoring can still give some idea of how much pre-defined behaviors are being implemented.

Level 4: Results

The fourth and highest level aims to measure what impact the training has had on business-critical results.

Examples of measures: Sales increase, various measures of wellbeing in co-worker surveys, customer feedback.

The problem of correlation vs. causation often comes into play here: was the increase in sales a result of the summer heat or of our sales training? Evaluation can still be attempted, but with the caveat that even when the results appear to stem from the training, causality can never be guaranteed with complete certainty.

How the study was performed: Questionnaire based on the four levels

A questionnaire was sent to Knowly client organizations that had agreed to participate in the study. In the questionnaire, each organization was asked to explain how they currently evaluate their training programs.

Level one: Reaction

Explanation:

Evaluation based on the training participants’ experiences. Questions to the participants are usually presented in a survey and focus on aspects like content, format, and general impression.

For courses and programs where you use Knowly, do you make assessments based on Kirkpatrick’s level one, Reaction?

  • Yes
  • No
  • I don’t know

Here is a field for additional comments. Please share your thoughts and explain in what way you make assessments based on experiences.

Level two: Learning

Explanation:

Evaluation based on the training participant’s knowledge. This can be evaluated in different ways, usually through a knowledge exam. Written or oral presentations are another common method.

For courses and programs where you use Knowly, do you make assessments based on Kirkpatrick’s level two, Learning?

Introduction to the questionnaire that the study participants filled out. The questions examined to what degree the respondents assessed their programs according to the four Kirkpatrick levels.

Level 1 was the most common, but it’s not that simple

The study found that the most common form of evaluation was Kirkpatrick’s level 1 – Reaction, which is in line with previous larger-scale studies on this subject.

The study doesn’t answer why experience is still the most common form of assessment, or why only a fraction of organizations make assessments at level 4.

Jack Ahlkvist explains that while you might conclude that the dominance of level 1 evaluation is a failure, the reality is not that simple.

“Previous studies showing similar trends have criticized assessments that only measure Kirkpatrick’s first level, as it has also turned out that you can’t prove a link between what participants experience and what they learn. I don’t think it’s as simple as re-shifting the focus of trainers and L&D departments.”

Incentive structures encourage measuring at level 1

Ahlkvist claims that incentive structures in organizations likely explain the tendency to measure level 1 more often than anything else. In most organizations, what affects trainers most is how well they perform in the classroom, not the results that the program produces for the organization.

“In order to understand these assessments we have to ask ourselves which numbers trainers and L&D departments are being measured on. Are any real demands being made in terms of learning and transfer?
As the participants’ experiences are the primary factor affecting trainers in the classroom, it stands to reason that it’s also the aspect that’s most interesting to assess. The effect of learning and transfer shows in the workplace, after the training program is over, and the demand for this kind of assessment should be coming from management.”

In other words: Measuring at the higher levels is almost always something you want to do, but the responsibility should not fall solely on the trainer or the training department.

Measuring transfer requires that the organization makes demands

In order to actually get around to measuring at the higher levels, you need a symbiosis where management makes demands and clarifies what the expectations are: Business results and transfer are what will be followed up – not the classroom experience. Only then can individual trainers be expected to take more responsibility in actually making assessments at the higher levels.

“Don’t ask how you can motivate others; instead, consider how you can create an environment in which they motivate themselves.”

Edward Deci
