

Using the attached article, which discusses Donald Kirkpatrick's four-level evaluation model, devised in the 1950s and a standard tool in training circles, evaluate the value and limitations of the model in practice. How would applying such a model improve training in your organization? In what ways could you evaluate results if you were devising a training program?

Great ideas revisited: Revisiting Kirkpatrick's four-level model

Kirkpatrick, Donald. Training & Development 50.1 (Jan 1996): 54.

Abstract

A present-day look at Kirkpatrick's (1958, 1959) four-level model of training evaluation is presented. Over the years, much has happened in the literature and teaching of evaluation, but the content has remained basically the same. A few modifications in the guidelines for each of the four levels of the model are presented, and different forms and examples have been developed. The original four levels are: 1. reaction, 2. learning, 3. behavior, and 4. results. Whether or not management seems to care about or demands proof of training's value, professionals should evaluate their training programs at as many of the four levels as possible.

Full Text

Beginning with the November 1959 issue of Training & Development (then called the Journal of the American Society of Training Directors), I published a series of four articles, “Techniques for Evaluating Training Programs.” Since then, I’ve written many articles and book chapters on evaluation and compiled 20 years’ worth of evaluation material in Evaluating Training Programs (American Society for Training and Development, 1975) and More Evaluating Training Programs (ASTD, 1986).

Over the years, a lot of things have happened in writing about and teaching evaluation. But the content has remained basically the same. I've made a few modifications in the guidelines for each of the four levels, as well as provided more and different forms and examples in my books. But the levels (reaction, learning, behavior, and results) have remained constant.

It all started in 1952, when I decided to write my dissertation on “evaluating a supervisory training program.” In analyzing my goals for the paper, I considered measuring participants’ reaction to the program, the amount of learning that took place, the extent of their change in behavior after they returned to their jobs, and any final results that were achieved by participants after they returned to work. I realized that the scope of the research should be restricted to reaction and learning and that behavior and results would have to wait. Thus, the concept of four levels was born.

In the November 1959 article, I used the term “four steps.” But someone, I don’t know who, referred to the steps as “levels.” The next thing I knew, articles and books were referring to the four levels as the Kirkpatrick model.

Defining the four levels

In 1993, my friend and colleague Jane Halcomb urged me to write a book describing the model. She said that many people were interested in it but had trouble finding details. The book, Evaluating Training Programs: The Four Levels (Berrett-Koehler, San Francisco, California, 1994), uses case studies from such companies as Motorola, Arthur Andersen, and Intel to show how the four levels can be implemented.

Some articles have said that the four-level model is too simple. “The Flawed Four-Level Evaluation Model,” written by Elwood F. Holton of Louisiana State University, will be published in Human Resource Development Quarterly this spring. Holton says that the model isn’t a model at all but a taxonomy, a classification. Perhaps he is correct. I don’t care whether it’s a model or taxonomy as long as training professionals find it useful in evaluating training programs.

People have asked me why the model is widely used. My answer: It’s simple and practical. Many trainers aren’t much interested in a scholarly, complex approach. They want something they can understand and use. The model doesn’t provide details on how to implement all four levels. Its chief purpose is to clarify the meaning of evaluation and offer guidelines on how to get started and proceed.

For those of you who are unfamiliar with the four levels, it’s time to describe them.

Level 1: Reaction. This is a measure of how participants feel about the various aspects of a training program, including the topic, speaker, schedule, and so forth. Reaction is basically a measure of customer satisfaction. It's important because management often makes decisions about training based on participants' comments. Asking for participants' reactions tells them, "We're trying to help you become more effective, so we need to know whether we're helping you."

Another reason for measuring reaction is to ensure that participants are motivated and interested in learning. If they don’t like a program, there’s little chance that they’ll put forth an effort to learn.

Level 2: Learning. This is a measure of the knowledge acquired, skills improved, or attitudes changed due to training. Generally, a training course accomplishes one or more of those three things. Some programs aim to improve trainees’ knowledge of concepts, principles, or techniques. Others aim to teach new skills or improve old ones. And some programs, such as those on diversity, try to change attitudes.

Level 3: Behavior. This is a measure of the extent to which participants change their on-the-job behavior because of training. It’s commonly referred to as transfer of training.

Level 4: Results. This is a measure of the final results that occur due to training, including increased sales, higher productivity, bigger profits, reduced costs, less employee turnover, and improved quality.

Evaluation becomes more difficult, complicated, and expensive as it progresses from level 1 to level 4, and also more important and meaningful. Some trainers bypass levels 1, 2, and 3 and go directly to level 4. Recently, I was asked by trainers in a consulting organization to skip a discussion of the first three levels and tell them how to do level 4 because that's what their customers want to know. I replied that understanding all four levels is necessary and that there are no easy answers for knowing how to measure results.

The guidelines (see box) were never intended to describe exactly what to do and how to do it. But they do provide an overview of the four levels and how to proceed. (Box omitted)

Whether it’s called “Techniques for Evaluating Training Programs” or Evaluating Training Programs: The Four Levels, it’s essentially the same story. Each source describes the following reasons for evaluating training programs:

* to decide whether to continue offering a particular training program

* to improve future programs

* to validate your existence and job as a training professional.

If the time, money, and expertise are available, it’s important to proceed through all four levels without skipping any. In some organizations, senior managers pay little attention to the training function. As long as they don’t get negative vibes, they tend not to interfere or ask questions. But during times of downsizing, management must terminate people. Sometimes, the trainers are deemed expendable. The benefits from training may outweigh the costs, but unfortunately, proof can be difficult, if not impossible, to get.

Whether or not management seems to care about or demands proof of training's value, training professionals should evaluate their programs at as many of the four levels as possible. In order to do that, they must learn as much as they can about evaluation. Understanding the four levels is a good start.

Implementation Guidelines

Here are guidelines for measuring each of the levels.

Reaction

* Determine what you want to find out.

* Design a form that will quantify reactions.

* Encourage written comments and suggestions.

* Attain an immediate response rate of 100 percent.

* Seek honest reactions.

* Develop acceptable standards.

* Measure reactions against the standards and take appropriate action.

* Communicate the reactions as appropriate.
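The reaction guidelines above reduce to a small computation: quantify the ratings from the form and measure them against an acceptable standard. A minimal sketch, assuming a 1-to-5 rating scale and a hypothetical standard of 4.0 (both numbers are illustrative, not from the article):

```python
from statistics import mean

# Hypothetical "acceptable standard": an average rating of 4.0 on a 1-5 scale.
STANDARD = 4.0

def evaluate_reactions(ratings):
    """Return the average rating and whether it meets the standard."""
    avg = mean(ratings)
    return avg, avg >= STANDARD

# Ratings collected from every participant (a 100 percent response rate).
avg, meets_standard = evaluate_reactions([5, 4, 4, 3, 5, 4])
print(f"Average reaction: {avg:.2f}; meets standard: {meets_standard}")
```

Written comments and suggestions would still be read by hand; the quantified scores are what get compared against the standard and communicated.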

Learning

* Use a control group, if feasible.

* Evaluate knowledge, skills, or attitudes both before and after the training. For example, use a paper-and-pencil test to measure knowledge and attitudes and a performance test to measure skills.

* Attain a response rate of 100 percent.

* Use the results of the evaluation to take appropriate action.
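The learning guidelines (measure before and after, with a control group if feasible) can be sketched as a gain-score comparison. The test scores below are hypothetical; the point is that learning attributable to training is the trained group's gain beyond whatever the control group gained on its own:

```python
from statistics import mean

def average_gain(pre_scores, post_scores):
    """Average improvement from pretest to posttest."""
    return mean(post - pre for pre, post in zip(pre_scores, post_scores))

# Hypothetical pre/post test scores (percent correct).
trained_gain = average_gain([60, 55, 70], [85, 80, 90])
control_gain = average_gain([62, 58, 69], [65, 60, 70])

# Learning attributable to training: the trained group's gain beyond
# the control group's gain over the same period.
net_gain = trained_gain - control_gain
print(f"Net learning gain: {net_gain:.1f} points")
```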

Behavior

* Use a control group, if feasible.

* Allow enough time for a change in behavior to take place.

* Survey or interview one or more of the following groups: trainees, their bosses, their subordinates, and others who often observe trainees' behavior on the job.

* Choose 100 percent of trainees or an appropriate sampling.

* Repeat the evaluation at appropriate times.

* Consider the cost of evaluation versus the potential benefits.
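The sampling step in the behavior guidelines, surveying all trainees or an appropriate sampling, can be sketched as a reproducible random draw. The roster size and sample size here are hypothetical:

```python
import random

def sample_trainees(trainees, sample_size, seed=0):
    """Draw a reproducible random sample for follow-up surveys or interviews."""
    if len(trainees) <= sample_size:
        return list(trainees)  # small group: survey everyone
    return random.Random(seed).sample(trainees, sample_size)

# Hypothetical roster of 250 trainees, sampled for follow-up interviews.
roster = [f"trainee_{i}" for i in range(250)]
sample = sample_trainees(roster, 40)
print(len(sample))  # 40
```

A fixed seed makes the draw repeatable, which helps when the evaluation is repeated at appropriate times with the same sample.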

Results

* Use a control group, if feasible.

* Allow enough time for results to be achieved.

* Measure both before and after training, if feasible.

* Repeat the measurement at appropriate times.

* Consider the cost of evaluation versus the potential benefits.

* Be satisfied with the evidence if absolute proof isn’t possible to attain.
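The before/after and control-group steps for results work the same way as for learning, applied to a business metric. The sales figures and evaluation cost below are hypothetical:

```python
def results_improvement(before, after, control_before, control_after):
    """Change in the trained group's metric beyond the control group's change."""
    return (after - before) - (control_after - control_before)

# Hypothetical monthly sales (thousands of dollars) before and after training.
improvement = results_improvement(before=100, after=130,
                                  control_before=98, control_after=105)
evaluation_cost = 5  # hypothetical cost of running the evaluation itself
print(f"Improvement attributable to training: {improvement}")
print(f"Evaluation worth its cost: {improvement > evaluation_cost}")
```

This yields evidence rather than absolute proof: other factors can move the metric, which is why the control group and repeated measurement matter.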

Donald Kirkpatrick is professor emeritus of the University of Wisconsin. You can reach him at 190 Hawthorne Drive, Elm Grove, WI 53122. Phone: 414/784-8348.

To purchase reprints of this article, call ASTD Customer Service, 703/683-8100. Use priority code KEA.

Copyright American Society for Training and Development Jan 1996
