Evaluation - The systems thinking and training blog - Bryan Hopkins


What is the value when we evaluate training?

Published in Evaluation ·
 
Here is a question. What is the difference between a rat looking for food in an experimenter’s maze and me when I attend a training course? Answer: the rat does not have to fill in a happy sheet.

Oh, happy rat. Happy sheets have become so much a part of training that we probably think that an event is not training if we don’t have to evaluate it. The happy sheet is of course a key part of an evaluation based on Kirkpatrick’s framework … and in many cases is the only time we do try and assess the value of the training.

Kirkpatrick’s framework has been around since the 1950s, and is one of those strands of training design theory which emerged in the immediate post-WW2 period, a time when training changed from being a supervisor-led, ‘sitting with Nellie’ activity into a more specialised and sophisticated corporate activity. Those early strands also drew heavily on a behaviourist approach to learning, which is where the rat comes in. Success is when the rat is able to: “locate the food provided within 60 seconds”. Compare that to training on a computer system where the objective is that the learner is able to: “locate customer details within 60 seconds”.

But surely the value of training is more than being able to negotiate some aspect of the corporate maze? It seems not: in an otherwise very useful paper written in 2012, a team including the respected academic Eduardo Salas wrote “… our final recommendation with respect to training evaluation is to use precise affective, cognitive, and/or behavioral measures that reflect the intended learning outcomes” (Salas et al., 2012, p. 91).

Now, there is nothing wrong with doing this; it is just such a limiting view of what training is or can be. Even if we assume that these objectives (or outcomes) are based on a thorough needs analysis which has identified where affective, cognitive or behavioural changes will make things better (always done, of course) and that these needs have been captured within well-written, focused objectives (again, a certainty), using them as a basis for assessing the value of the training is likely to be inadequate.
 
Why? Well, one reason is that these two assumptions may not be true (being honest, just how frequently is training based on a thorough needs analysis process?). Then, the objectives will be based on a situation from the past, and so may be less relevant by the time the training is delivered or made available. Also, by focusing our investigation on the changes specified in the objectives we will quite possibly overlook other changes which have emerged out of the training experience.

Which is where this gets interesting. Practitioner textbooks and academic papers alike extol the potential benefits of training, but do we consider those benefits in an evaluation based around behavioural objectives?

So what might these be? Salas and his team note that modern organisations must succeed in three domains if they are to thrive: finance, products or markets, and human capital. ‘Human capital’ is a term which has many definitions, but broadly it refers to the capacity of the workforce to engage with its operational environment, and this is clearly a domain where training has a part to play. Indeed, a great advantage of training is that it creates change which is to some degree invisible to competitors, unlike finance and products, which are clear for all to see.

Human capital has different elements: intellectual capital, which refers to the knowledge held within the organisation, and social capital, which comes from the strength of relationships between its people. Of course, neither of these forms of capital is particularly easy to measure, although the emerging field of Social Network Analysis is helping us to develop a better understanding of interpersonal connections and information flows. But as we trainers have shown little interest in exploring these measures of value, it is not surprising that there is limited experience in using the tools which are available.
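To give a flavour of the kind of measure Social Network Analysis makes possible, here is a minimal sketch using the Python networkx library. The who-asks-whom-for-advice networks, the names and the idea of comparing snapshots taken before and after a programme are all my own assumptions for illustration, not a prescribed method.

```python
# A minimal sketch of using Social Network Analysis measures as a rough proxy for social capital.
# The advice-seeking networks and names below are invented purely for illustration.
import networkx as nx

# Who asked whom for advice before the training programme
before = nx.Graph([("Asha", "Ben"), ("Ben", "Carla"), ("Carla", "Dev")])

# The same question asked some months after the programme
after = nx.Graph([("Asha", "Ben"), ("Ben", "Carla"), ("Carla", "Dev"),
                  ("Asha", "Dev"), ("Ben", "Dev"), ("Asha", "Carla")])

for label, network in [("before", before), ("after", after)]:
    density = nx.density(network)                      # how connected the group is overall
    betweenness = nx.betweenness_centrality(network)   # who sits on the information flows
    print(label, round(density, 2),
          {person: round(score, 2) for person, score in betweenness.items()})
```

A denser network, or a less concentrated pattern of betweenness centrality, might cautiously be read as a sign of growing social capital within the group.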

Then again, at the individual level, training has the potential to improve organisational and career commitment, and to strengthen job involvement and job satisfaction. These may be easier to measure in an evaluation project, but we first need to remember that they are important factors to consider, and this may not happen if we follow an objectives-based evaluation process.

It is not as if existing paradigms were easy to implement. Referring back to the Kirkpatrick framework, it is still very difficult, and perhaps impossible, to work out confidently just how a training programme contributes to impact at an organisational level, and yet because it is an objective, we try.

The objectives-based approach with its roots in behaviourism also seems something of an anachronism in an age where technology makes a constructivist approach to learning so much easier, and where the possibilities for externalising tacit knowledge held within the workforce, in order to integrate it into the known reserves of intellectual capital through informal learning, are so much greater than ever before.

So what does this mean for training evaluation? It surely means that we have to look much more broadly at what value a training programme can and does bring to individuals and organisations. The Kirkpatrick framework still has relevance but we need to move beyond the usual behaviourist focus on objectives and think about the real value that learning can offer.

Sir Geoffrey Vickers argued that people are not really interested in achieving a goal or an objective per se: what really matters is strengthening or improving in some way their relationship with their environment. Training evaluation should reflect on that wisdom; otherwise we reduce learners to rats in a corporate maze.

 
 
Reference
 
Salas, E., Tannenbaum, S. I., Kraiger, K. and Smith-Jentsch, K. A. (2012) ‘The science of training and development in organizations: What matters in practice’, Psychological Science in the Public Interest, vol. 13, no. 2, pp. 74–101.
 



How systems thinking can help us evaluate training

Published in Evaluation ·
 
I am pretty sure that almost all of the terms of reference I have seen for training evaluation projects have said that the training needs to be evaluated against the four levels of reaction, learning, behaviour and results; in other words, our old friend Kirkpatrick’s framework. The great man himself passed away several years ago, but I am sure he would have been delighted to have created a structure which, more than 60 years after he developed it for his Ph.D. thesis, remains the standard for training evaluation.

And yet… although it is the standard, there is an ongoing discussion within the training world about how difficult it is to evaluate training and how, in practice, the four-level framework does not really tell us how to do it. Maybe that is asking too much of what is, in essence, a taxonomy, a classification of what we can evaluate. Kirkpatrick’s framework is like a trainspotter’s catalogue of engine numbers: essential if you want to know what you are looking at, but of little value in making sure the trains run on time.
 
Of course, a lot of the problem is that we treat Kirkpatrick’s framework as if it were a model of how training works rather than just a framework. The linear theory of change below summarises what this implied model proposes.

[Figure: a linear theory of change, running from an enjoyable training experience to learning, to changed behaviour, to improved individual performance and, finally, to organisational results]
Granted, it does look very plausible: if we enjoy a training experience we will learn, and then apply the new knowledge and skills; our work performance will improve and our organisation will benefit. The problem, and this is borne out by extensive research, is that this is just not necessarily true. We learn from unpleasant experiences as well as pleasant ones, perhaps even more so. Just because we learn does not mean that we apply new knowledge and skills: our workplace environment may stop this from happening. Our performance may improve not because of what we have learnt but because we enjoyed some time away from the routine of everyday working life. And, of course, a myriad of factors other than improved knowledge, skills and attitudes may have improved organisational performance. The model is too simplistic, and ignores external factors which affect learning, application and organisational performance, all of which can be much more significant influences than training. Kirkpatrick’s own writings [1] do acknowledge the existence of these external factors but do not address them in any comprehensive way.

So the training evaluator, setting forth to satisfy the terms of reference armed only with Kirkpatrick’s framework, is not actually very well equipped. Now, there are many guides to training evaluation available, but few approach training from a systems thinking perspective, even though this is a methodology which I have found in my own professional practice to be extremely powerful. But what do we actually mean by a systems thinking perspective?

We can look at three main principles underpinning a systems thinking approach: questioning boundaries, exploring different perspectives, and investigating relationships within and outside the system of interest. Let us explore each of these in more detail, defining the training intervention as ‘a system to improve organisational performance by increasing levels of knowledge and skill’.

 
First, we question the boundaries. A boundary can be many things, but in essence it is something which includes and excludes. In a training system this covers a great deal. For example, in our theory of change above we have included the training activity as contributing to individual and organisational performance, but we have excluded consideration of other relevant factors, such as the climate for transferring learning and external factors constraining performance. The boundary can be a decision about who receives training and who does not; who makes the decision about who is trained and what they learn, and who does not; what is included in the training materials and what is left out. These are all extremely important questions which should have been determined by a training needs analysis, but such analyses are often not conducted in any rigorous or systematic way, which effectively means that the boundaries embodied in the terms of reference can be quite arbitrary.

 
Secondly, what different perspectives are there about the training? In most cases organisational training is implemented in order to improve a complex situation. For example, improving the level of sales is not just simply a matter of improving sales skills: what can be sold depends on what customers want to buy, what competitors are offering, on local and national economic factors, and so on. Should we consider the perspective of people outside the organisation? Different people within the organisation may have very different ideas about what is ‘the real problem’: is it a lack of sales skills, is it not understanding what competitors are offering, is it not understanding how the marketplace is evolving, and so on. Adopting a systems thinking approach helps the evaluator to question whether all valid perspectives about the situation are being considered.

 
Thirdly, relationships. These may be relationships between individuals and groups within the organisation or between these individuals and groups and entities outside the organisation, within its environment. The relationship may be between the training and time: does the situation which stimulated the demand for training still exist in the same form, or has it changed to make things better or worse? How has the training changed the relationships between participants and non-participants? Have unexpected relationships developed which could have a positive or negative impact? Finally, does the training ignore relationships? Most work that people do is as part of a team, and yet often training is aimed at individuals, running the risk of ignoring the importance of team interactions.
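To make these three principles a little more concrete, here is a minimal sketch in Python of how an evaluator might capture scoping questions under the headings of boundaries, perspectives and relationships. The structure and the wording of the questions are my own illustrative assumptions, not a formal method.

```python
# A simple scoping checklist for a systems-thinking training evaluation.
# The headings follow the three principles above; the question wording is illustrative only.
from dataclasses import dataclass, field

@dataclass
class EvaluationScoping:
    boundaries: list[str] = field(default_factory=lambda: [
        "Who receives the training, and who does not?",
        "Who decided what the training covers, and what was left out?",
        "Which external factors (transfer climate, performance constraints) sit outside the terms of reference?",
    ])
    perspectives: list[str] = field(default_factory=lambda: [
        "Who inside and outside the organisation has a stake in the situation?",
        "What does each group think 'the real problem' is?",
    ])
    relationships: list[str] = field(default_factory=lambda: [
        "Has the situation that prompted the training changed since it was designed?",
        "How has the training affected relationships between participants and non-participants?",
        "Does training individuals ignore how the team actually works together?",
    ])

scoping = EvaluationScoping()
for heading, questions in vars(scoping).items():
    print(heading.upper())
    for question in questions:
        print(" -", question)
```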
 
Of course, an experienced training evaluator may be asking these questions already. But for someone starting out in training evaluation, following a systems thinking approach can provide an extremely valuable structure within which to work. There are a number of systems thinking methodologies which are appropriate to training evaluation, and although it is beyond the scope of this article to look at them in detail, it would be useful to describe some of them briefly so that, if you are interested, you can research further.

Soft Systems Methodology (SSM) is a tool which is very useful for exploring different perspectives about a situation. It makes it possible to consider the implications of different world views about an organisational problem, and what actions need to be taken in order to improve matters.

Critical Systems Heuristics (CSH) is a tool for exploring boundaries, and considering how different power relationships lead to decisions being made about what or who to include or exclude.

The Viable Systems Model (VSM) is a cybernetic approach which looks at the relationships that are needed between different functions within an organisation if things are to work smoothly. By comparing and contrasting an ideal model with what the training is doing, it is possible to identify its strengths and weaknesses.

This article has tried to explain how systems thinking can contribute to improved training evaluation. It is not a replacement for Kirkpatrick’s framework or for the enhancements which have been made over the years, such as Phillips’ Return on Investment or Brinkerhoff’s Success Case Method. What systems thinking approaches can do is to provide a structured way within which these ideas can be explored, and sense made of what research shows is happening. It also ensures that we see training as just one of the factors influencing what people do and how well they do it, helping us to design training which works with the complex dynamics in any professional arena.

[This post was also published as a LinkedIn article]
 

 
   
 
[1] For example: Kirkpatrick, D. L. (1994) Evaluating Training Programs: The Four Levels, Berrett-Koehler.
 
 



Using theory-based approaches to evaluate training

Published in Evaluation ·
I was recently invited by the British Institute for Learning & Development to contribute something to their blog. I decided to write a piece about using theory-based evaluation methodologies to evaluate training, as an improvement over existing Kirkpatrick-based approaches.

Rather than repeat myself here or try to edit what I have already written, here is the link to the relevant BILD blog.



Using the OECD-DAC criteria for training evaluation

Published in Evaluation ·
I was recently reading through the specifications for a training evaluation project and was somewhat surprised by the absence of any reference to Kirkpatrick and his famous framework. Instead, the criteria for the training were framed around those used in the development and humanitarian sectors, commonly known as the OECD-DAC criteria (from the Development Assistance Committee of the Organisation for Economic Cooperation and Development). These criteria are relevance, effectiveness, efficiency, impact and sustainability.

Although I am familiar with these criteria from my work in those sectors, it is interesting to reflect on their absence from the training world, where criteria for training effectiveness come from objectives specified at the beginning of the training design process and are often structured around the SMART concept. Although training of trainers courses often recommend the use of SMART objectives when specifying training, I have always found the structure somewhat unsuited to training. According to Wikipedia, the first use of SMART is attributed to George T. Doran in 1981, when it was put forward as a way of setting goals for management activities. This isn't the place to get into the pros and cons of management by objectives, but while SMART objectives may have been suitable for that purpose, in my opinion they just don't work for training.

So where I have been able to, I have always tried to use Robert Mager's three-part objectives structure: performance (the observable verb), conditions (where the performance is carried out) and criteria (measure of success for the performance). This is much more usable for designing training, but it is still very difficult to use this structure for defining some overall success criteria for a training programme. In my time I've seen many training programmes designed around objectives (both SMART and Mager-type) where it would be impossible to draw any conclusions about success, because of the complexity of real-world performance and measures of success. Often this is down to training objectives being written in such a way that suggests the aim of the training is to cause major behavioural or organisational changes. The writer of the objectives may realise that other strategies need to be implemented to support training, but this may not be explicitly stated in a comprehensive training needs analysis. But if the wording of the training objective is not realistic as far as real-world behaviour is concerned, the objective cannot be used as the basis for an evaluation.
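As a concrete illustration of the three-part structure, here is a minimal sketch in Python. The example objective reuses the 'locate customer details within 60 seconds' scenario from an earlier post, and all of the wording is invented for illustration rather than taken from a real programme.

```python
# A Mager-style three-part objective captured as a simple structure.
# The example content is invented for illustration.
from dataclasses import dataclass

@dataclass
class MagerObjective:
    performance: str   # the observable verb: what the learner will do
    conditions: str    # where / with what the performance is carried out
    criteria: str      # the measure of success for the performance

objective = MagerObjective(
    performance="locate a customer's details",
    conditions="access to the live customer database during a normal shift",
    criteria="within 60 seconds, without assistance",
)

print(f"Given {objective.conditions}, the learner will {objective.performance} "
      f"{objective.criteria}.")
```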

Which brings us back to the criterion problem. I think the five OECD-DAC criteria have much to offer. Here are a few questions relevant to each criterion which would be useful in a training evaluation; the short sketch after the list shows one way such a question bank might be organised.

Relevance. How good is the match between what the training programme contains and what the target audience needs, in terms of content and process? Is the programme relevant to the overall vision of the organisation?

Effectiveness. Have the people learnt what they were supposed to have learnt? How well have they been able to transfer this learning to the workplace? What may have obstructed implementation of their learning?

Efficiency. What alternatives would there be for achieving the desired change in behaviour or impact? Could the training have been delivered in a more cost-effective way?

Impact. What changes have there been in behaviour and performance? How might training have contributed to this change?

Sustainability. How sustainable is the training programme (cost, logistics, etc.)? How sustainable is the change in behaviour or performance? Will things go back to normal when people forget the novelty of the training programme?
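Here is the minimal sketch mentioned above (Python; the grouping and any paraphrasing of the questions are mine, purely for illustration, and not part of the OECD-DAC guidance).

```python
# The OECD-DAC criteria as a simple question bank for planning a training evaluation.
# The questions paraphrase those listed above; the structure is illustrative only.
DAC_QUESTIONS = {
    "relevance": [
        "How well do the programme's content and process match what the target audience needs?",
        "Is the programme relevant to the organisation's overall vision?",
    ],
    "effectiveness": [
        "Have people learnt what they were supposed to learn?",
        "How well has that learning transferred to the workplace, and what obstructed it?",
    ],
    "efficiency": [
        "Could the same change have been achieved another way, or more cost-effectively?",
    ],
    "impact": [
        "What changes in behaviour and performance have there been?",
        "How might the training have contributed to those changes?",
    ],
    "sustainability": [
        "Is the programme itself sustainable (cost, logistics)?",
        "Will the change in behaviour or performance last once the novelty has worn off?",
    ],
}

for criterion, questions in DAC_QUESTIONS.items():
    print(criterion.upper())
    for question in questions:
        print(" -", question)
```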

There are of course many more questions which could be asked under each heading. For me, one of the interesting things about using the OECD-DAC criteria is how much more systemic the evaluation process becomes. They encourage us to think about the history and process of the training, and not just to focus on impact or behaviour or the narrow requirements of a SMART objective. The criteria may have been designed for a different sector, but they have a lot to offer the training world.



Designing better smile sheets: essential reading

Published in Evaluation ·
I have just been reading a new book by Will Thalheimer called "Smile Sheets", and an excellent read it is too.

If you don't know Will, he specialises in reading academic research about learning and thinking about how it can contribute to training. Training is, of course, an area with various strange practices based on mythical facts. One of my favourites is the cone of experience, the claim that we remember 10% of what we read, 20% of what we hear and so on. Claims such as this are presented in training and, because they seem to make some sense, get repeated and slowly become fact. This particular topic is one that Will has discussed in the past, and I can recommend a visit to his blog to learn more (www.willatworklearning.com/).

Anyway, in his latest book he focuses on the smile sheet or, to give it its polite name, the reaction questionnaire (a la Kirkpatrick). Although this is the bedrock of most training evaluation activities, the book discusses in some detail the lack of research evidence that it is meaningful in any way. Several factors contribute to this. One is that the questions typically included in reaction questionnaires are poorly constructed from a statistical point of view and push the learner into giving a positive response. Another is that surveys conducted in the training environment, while the training is still under way, are heavily coloured by the experience of being there and leave no time for reflection on what the training has been about. Finally, there is very little evidence that reacting positively to a training activity means learning will follow, even though this assumption is fundamental to the Kirkpatrick framework, which underlies much thinking in training evaluation.

Will then goes on to talk about what learning actually means and provides a practical guide to designing 'smile sheets' which can produce meaningful and useful data. It is a most entertaining and illuminating read, and I certainly wish that I had read it before sending my own manuscripts to my publisher!

If you do get involved in any way with training evaluation, buy yourself a copy. At $25, it's well worth it.



Update on my forthcoming book

Published in Evaluation ·
This is a slightly different kind of entry: an update on my new book, "Learning and Performance: A Systemic Model for Analysing Needs and Evaluating Training".

This is a practical guide for using systems thinking concepts such as boundary definitions, multiple perspectives and relationships in carrying out training needs analyses and programme evaluation.

It explains how to use techniques such as the Viable Systems Model and Soft Systems Methodology to explore areas of concern in organisational performance, in order to identify a holistic set of solutions which can improve performance. In the case of evaluating training, it uses these tools to provide a practical approach to evaluating both the learning and impact of training.

The book will be published by Routledge in late 2016 with an expected cover price of about £45. However, after some discussions with my publisher, we have decided to take pre-publication orders at the heavily discounted price of £20.00. That looks like a very good deal.

If you are interested, all you have to do to register a pre-publication order is contact Jonathan at Routledge: jonathan.norman@tandf.co.uk.




Evaluating the unintended or unexpected

Published in Evaluation ·
Last week I was asked for some advice about questions to ask when carrying out a Level 3 evaluation (in Kirkpatrick-speak). Conventional wisdom is that this is about behaviour change, whether or not people change their behaviour in line with the objectives of the training. And of course, when we are trying to do a Level 3 evaluation we are also interested in what impact any changes of behaviour have at a team or organisational level (the fabled Level 4).

All well and good, but a systemic perspective reveals one possible weakness in traditional approaches to training evaluation: because they start from the reference point of the learning objectives, evaluation against those objectives can end up being the only thing that is done, when, of course, training can have other consequences.

There are basically two sorts of other consequences. The first is at Level 3: what other behaviours have changed as a result of the training? For example, has there been any change in the way people interact with each other, perhaps talking about the training course, questioning how useful it has been and what implications it has for everyday practice? Has this improved what people do, in the subject of the training or in other areas? This is all about different aspects of knowledge management, about the impact of formal training on informal learning practices.

The second is at Level 4: what are the outcomes at the team or organisational level, and what impact do they have? If we don't think about the changes the training makes to a team, it becomes harder to look at the impact those changes might then enable. For example, people who attend a training course may get to know each other better, may develop trust and may therefore become better team players in the future: this is an example of improving 'social capital', a somewhat nebulous but nevertheless very important factor in organisational effectiveness, and one which may only become apparent when it is lacking. Of course, the implications of this may be much longer term than a straightforward observable output, such as the level of sales or of widgets manufactured, and it may also be harder to measure, but this does not mean that it is any less important. If we do not recognise it as important, it is unlikely that we will even try to measure it.

For example, if training does lead to an increase in levels of widgets produced over a six-month period but leads to increased workforce dissatisfaction with potential longer-term implications, what is its true value?




