Here is a question. What is the difference between a rat looking for food in an experimenter’s maze and me when I attend a training course? Answer: the rat does not have to fill in a happy sheet.
Oh, happy rat. Happy sheets have become so much a part of training that we probably think an event is not training if we don't have to evaluate it. The happy sheet is of course a key part of an evaluation based on Kirkpatrick's framework … and in many cases it is the only time we try to assess the value of the training at all.
Kirkpatrick's framework has been around since the 1950s, and is one of those strands of training design theory which emerged in the immediate post-WW2 period, a time when training changed from being a supervisor-led, 'sitting with Nellie' activity into a more specialised and sophisticated corporate activity. Those early strands also drew heavily on a behaviourist approach to learning, which is where the rat comes in. Success is when the rat is able to: "locate the food provided within 60 seconds". Compare that to training on a computer system where the objective is that the learner is able to: "locate customer details within 60 seconds".
But surely the value of training is more than being able to negotiate some aspect of the corporate maze? It seems not: in an otherwise very useful paper written in 2012, a team including the respected academic Eduardo Salas wrote “… our final recommendation with respect to training evaluation is to use precise affective, cognitive, and/or behavioral measures that reflect the intended learning outcomes” (Salas et al., 2012, p. 91).
Now, there is nothing wrong with doing this; it is just so limiting a view of what training is or can be. Even if we assume that these objectives (or outcomes) are based on a thorough needs analysis which has identified where affective, cognitive or behavioural changes will make things better (always done, of course), and that these needs have been captured in well-written, focused objectives (again, a certainty), using them as the basis for assessing the value of the training is likely to be inadequate.
Why? Well, one reason is that these two assumptions may not be true (being honest, just how often is training actually based on a thorough needs analysis?). Another is that the objectives will be based on a situation from the past, and so may be less relevant by the time the training is delivered or made available. And by focusing our investigation on the changes specified in the objectives, we will quite possibly overlook other changes which have emerged from the training experience.
Which is where this gets interesting. Practitioner textbooks and academic papers alike extol the potential benefits of training, but do we consider those benefits in an evaluation built around behavioural objectives?
So what might these be? Salas and his team note that modern organisations must succeed in three domains if they are to thrive: finance, products or markets, and human capital. ‘Human capital’ is a term which has many definitions, but broadly it refers to the capacity of the workforce to engage with its operational environment, and this is clearly a domain where training has a part to play. Indeed, a great advantage of training is that it creates change which is to some degree invisible to competitors, unlike finance and products, which are clear for all to see.
Human capital has different elements: intellectual capital, the knowledge held within the organisation, and social capital, which emerges from the strength of relationships between its people. Of course, neither form of capital is particularly easy to measure, although the emerging field of Social Network Analysis is helping us develop a better understanding of interpersonal connections and information flows. But as we trainers have shown little interest in exploring these measures of value, it is not surprising that there is limited experience of using the tools which are available.
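To make the idea concrete, here is a toy sketch of the kind of measure Social Network Analysis starts from. The names and edge list are invented for illustration: imagine each pair records two colleagues who exchanged information in the month after a training course, and we compute degree centrality (a simple first indicator of how connected each person is) in plain Python.

```python
from collections import Counter

# Hypothetical data: each pair records an information-sharing
# contact between two colleagues after a training course.
edges = [("Ana", "Ben"), ("Ben", "Cara"), ("Ana", "Cara"), ("Cara", "Dev")]

# Count each person's connections (their degree in the network).
degrees = Counter()
for a, b in edges:
    degrees[a] += 1
    degrees[b] += 1

# Degree centrality: degree divided by the maximum possible
# degree (n - 1), so values fall between 0 and 1.
n = len(degrees)
centrality = {person: d / (n - 1) for person, d in degrees.items()}
```

In this invented network, Cara is connected to everyone (centrality 1.0) while Dev has a single contact: a crude but usable signal of where relationships, and hence social capital, are concentrated. Real SNA work would go further, with measures such as betweenness and analysis of information flow, but the point stands that these things can be measured.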
Then again, at the individual level, training has the potential to improve organisational and career commitment, and to strengthen job involvement and job satisfaction. These may be easier to measure in an evaluation project, but we first need to remember that they are important factors to consider, and that may not happen if we follow an objective-based evaluation process.
It is not as if existing paradigms were easy to implement. Returning to the Kirkpatrick framework, it is still very difficult, and perhaps impossible, to work out with any confidence just how a training programme contributes to impact at an organisational level; and yet, because the framework asks for it, we try.
The objectives-based approach, with its roots in behaviourism, also seems something of an anachronism in an age where technology makes a constructivist approach to learning so much easier, and where the possibilities for externalising tacit knowledge held within the workforce, integrating it through informal learning into the known reserves of intellectual capital, are greater than ever before.
So what does this mean for training evaluation? It surely means that we have to look much more broadly at what value a training programme can and does bring to individuals and organisations. The Kirkpatrick framework still has relevance but we need to move beyond the usual behaviourist focus on objectives and think about the real value that learning can offer.
Sir Geoffrey Vickers argued that people are not really interested in achieving a goal or an objective per se: what really matters is strengthening or improving in some way their relationship with their environment. Training evaluation should reflect that wisdom; otherwise we reduce learners to rats in a corporate maze.
Salas, E., Tannenbaum, S. I., Kraiger, K. and Smith-Jentsch, K. A. (2012) 'The science of training and development in organizations: What matters in practice', Psychological Science in the Public Interest, vol. 13, no. 2, pp. 74–101.