The systems thinking and training blog - Bryan Hopkins


Using theory-based approaches to evaluate training

Published in Evaluation
I was recently invited by the British Institute for Learning & Development to contribute something to their blog. I decided to write a piece about using theory-based evaluation methodologies to evaluate training, as an improvement over existing Kirkpatrick-based approaches.

Rather than repeat myself here or try to edit what I have already written, here is the link to the relevant BILD blog.



Using the OECD-DAC criteria for training evaluation

Published in Evaluation
I was recently reading through the specifications for a training evaluation project and was somewhat surprised by the absence of any reference to Kirkpatrick and his famous framework. Instead, the evaluation criteria were framed around those used in the development and humanitarian sectors, commonly known as the OECD-DAC criteria (from the Development Assistance Committee of the Organisation for Economic Co-operation and Development). These criteria are relevance, effectiveness, efficiency, impact and sustainability.

Although I am familiar with these criteria from my work in those sectors, it is interesting to reflect on their absence from the training world, where criteria for training effectiveness usually come from objectives specified at the beginning of the training design process, often structured around the SMART concept. Training of trainers courses frequently recommend SMART objectives when specifying training, but I have always found the structure somewhat unsuited to it. According to Wikipedia, the first use of SMART is attributed to George T. Doran in 1981, who put it forward as a way of setting goals for management activities. This isn't the place to get into the pros and cons of management by objectives, but while SMART objectives may have been suitable for that purpose, in my opinion they just don't work for training.

So where I have been able to, I have always tried to use Robert Mager's three-part objective structure: performance (the observable verb), conditions (where the performance is carried out) and criteria (the measure of success for the performance). This is much more usable for designing training, but it is still very difficult to use this structure for defining overall success criteria for a training programme. In my time I've seen many training programmes designed around objectives (both SMART and Mager-type) where it would be impossible to draw any conclusions about success, because of the complexity of real-world performance and of how success might be measured. Often this is because the training objectives are written in a way that suggests the aim of the training is to cause major behavioural or organisational changes. The writer of the objectives may realise that other strategies are needed to support the training, but without a comprehensive training needs analysis this may never be stated explicitly. And if the wording of a training objective is not realistic as far as real-world behaviour is concerned, the objective cannot be used as the basis for an evaluation.

Which brings us back to the criterion problem. I think the five OECD-DAC criteria have much to offer. Here are a few questions relevant to each of the criteria which would be useful in a training evaluation.

Relevance. How good is the match between what the training programme contains and what the target audience needs, in terms of both content and process? Is the programme relevant to the overall vision of the organisation?

Effectiveness. Have the people learnt what they were supposed to have learnt? How well have they been able to transfer this learning to the workplace? What may have obstructed implementation of their learning?

Efficiency. What alternatives would there be for achieving the desired change in behaviour or impact? Could the training have been delivered in a more cost-effective way?

Impact. What changes have there been in behaviour and performance? How might training have contributed to this change?

Sustainability. How sustainable is the training programme itself (cost, logistics, etc.)? How sustainable is the change in behaviour or performance? Will things go back to normal once the novelty of the training programme has worn off?

There are of course many more questions which could be asked under each heading. For me, one of the interesting things about using the OECD-DAC criteria is how much more systemic the evaluation process becomes. They encourage us to think about the history and process of the training, not just about impact or behaviour or the narrow requirements of a SMART objective. The criteria may have been designed for a different sector, but they have a lot to offer the training world.
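To make this concrete, here is a minimal sketch (in Python, purely illustrative; the structure and question wording are my own, not part of the OECD-DAC materials) of how the five criteria and some guiding questions could be held in a simple data structure and printed out as a skeleton evaluation plan:

# Illustrative only: map each OECD-DAC criterion to some sample guiding questions.
OECD_DAC_CRITERIA = {
    "Relevance": [
        "How well does the programme content match the target audience's needs?",
        "Is the programme relevant to the overall vision of the organisation?",
    ],
    "Effectiveness": [
        "Have people learnt what they were supposed to learn?",
        "How well has the learning been transferred to the workplace?",
    ],
    "Efficiency": [
        "Could the desired change have been achieved in a more cost-effective way?",
    ],
    "Impact": [
        "What changes have there been in behaviour and performance?",
        "How might the training have contributed to these changes?",
    ],
    "Sustainability": [
        "How sustainable is the programme itself (cost, logistics)?",
        "Will the change in behaviour persist once the novelty has worn off?",
    ],
}

def evaluation_plan(criteria=OECD_DAC_CRITERIA):
    """Return a plain-text skeleton evaluation plan, one section per criterion."""
    lines = []
    for criterion, questions in criteria.items():
        lines.append(criterion.upper())
        lines.extend("  - " + q for q in questions)
        lines.append("")  # blank line between sections
    return "\n".join(lines)

print(evaluation_plan())

Nothing clever is going on here; the point is simply that the criteria give a ready-made top-level structure for an evaluation plan, to which organisation-specific questions can then be added.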



Designing better smile sheets: essential reading

Published in Evaluation
I have just been reading a new book by Will Thalheimer called "Smile Sheets", and an excellent read it is too.

If you don't know Will, he specialises in reading academic research about learning and thinking about how it can contribute to training. Training is, of course, an area with plenty of strange practices based on mythical facts. One of my favourites is the 'cone of experience', the claim that we remember 10% of what we read, 20% of what we hear and so on. Claims like this are presented in training and, because they seem to make some sense, get repeated and slowly become accepted as fact. This is a topic Will has discussed in the past, and I can recommend a visit to his blog to learn more (www.willatworklearning.com/).

Anyway, in his latest book he focuses on the smile sheet or, to give it its polite name, the reaction questionnaire (à la Kirkpatrick). Although this is the bedrock of most training evaluation activities, the book discusses in some detail the lack of research evidence that it is meaningful in any way. There are several reasons for this. One is that the questions typically included in reaction questionnaires are poorly constructed from a statistical point of view and push the learner towards giving a positive response. Another is that surveys completed in the training room, while the training is still under way, are heavily influenced by the setting itself and leave no time for reflection on what the training has been about. Finally, there is very little evidence that reacting positively to a training activity means that learning will follow, even though this assumption is fundamental to the Kirkpatrick framework, which of course underlies much thinking in training evaluation.

Will then goes on to talk about what learning actually means and provides a practical guide to designing 'smile sheets' which can produce meaningful and useful data. It is a most entertaining and illuminating read, and I certainly wish that I had read it before sending my own manuscripts to my publisher!

If you do get involved in any way with training evaluation, buy yourself a copy. At $25, it's well worth it.



Update on my forthcoming book

Published in Evaluation
This is a slightly different kind of entry: an update on my new book, "Learning and Performance: A Systemic Model for Analysing Needs and Evaluating Training".

The book is a practical guide to using systems thinking concepts such as boundary definitions, multiple perspectives and relationships when carrying out training needs analyses and programme evaluations.

It explains how to use techniques such as the Viable System Model and Soft Systems Methodology to explore areas of concern in organisational performance, in order to identify a holistic set of solutions which can improve performance. For evaluating training, it uses these tools to provide a practical approach to assessing both the learning and the impact of training.

The book will be published by Routledge in late 2016 with an expected cover price of about £45. However, after some discussions with my publisher, we have decided to take pre-publication orders at the heavily discounted price of £20.00. That looks like a very good deal.

If you are interested, all you have to do to register a pre-publication order is to contact Jonathan at Routledge: jonathan.norman@tandf.co.uk.




Why I like 70:20:10

Published in Informal learning
I have just finished reading the report "70+20+10=100: The Evidence Behind The Numbers", produced by Charles Jennings and Towards Maturity. A most interesting and worthwhile read.

For clarity, the 70:20:10 model refers to the observation that roughly 70% of what people learn comes from real-life and on-the-job experience, 20% from working with other people and 10% from formal training. These figures came from observations of leadership in a largely male target group and, as the report acknowledges, different figures have been derived for female groups. Other research, not referred to in the report, suggests that around 80% of what people learn comes through 'informal' means.

So there is some disagreement about the numbers, but this is largely because of the difficulty of actually defining what these categories of learning mean: what is the difference between 'on-the-job experience' and 'working with others'? What exactly is 'informal' learning?

But really, the numbers are not important. The reason 70:20:10 is such a useful concept is that it provides a simple model around which people can think about the importance of integrating formal and informal learning, something the training industry has struggled with for many years. As the report says, its value is in helping people to realise that learning is a complex, multi-faceted activity, but that taking steps to facilitate non-formal learning opportunities and integrate them with formal training can bring rich rewards to organisations.

As I read the report I felt a strong sense of vindication for my own systems-based approach to analysing performance issues and developing training strategies. Using a systems approach automatically means that we develop an understanding of each part of the 70:20:10 triad: the dynamics of the workplace, how people work with each other and share information, what barriers and enablers there may be to implementing new knowledge and skills, and so on. This can help us to design training which explicitly helps people to integrate their informal learning opportunities with formal training. Definitely a good thing!




The importance of boundary spanners

Published in Reflections
I have not been able to write any blog posts recently because all of my time has been taken up with finalising the manuscript for my new book, provisionally entitled "Performance and evaluation: a systems thinking approach". I eventually managed to get everything packaged up and sent off to Routledge, my publisher, by the end of last week, so it should now be published towards the end of 2016.

It has taken about two years to put the book together, pulling together experiences from work I have done and from my ongoing studies with the Open University. Taking that amount of time has some advantages, but it also means that what you learn along the way makes sections written early on seem somewhat inadequate. There is also the problem of writing style changing as time goes by: what seemed a good way to put things in 2014 no longer did by 2016. These inconsistencies were exposed by the review process: I asked a number of trusted associates to look at the draft manuscript, and their comments were invaluable, pointing out how the logic of the structure could be improved and identifying weaknesses in words, sentences and paragraphs.

It made me realise that a fundamental challenge for the book is that it brings together two academic disciplines, the training world and the systems thinking world. A core theme within the book is learning as a networked activity, of people connected in groups. One of the topics I discuss is Social Network Analysis, a way of quantifying how people work together. Some early research in this field was by Mark Granovetter, whose article "The strength of weak ties" looked at how people relied on social networks to find information about job opportunities. He saw that people have both strong ties, with friends and family, and weak ties, with friends of friends and infrequent contacts. His research showed that the people who were most useful for identifying work opportunities were actually the weak ties, because these people were connected to other networks. They functioned as 'boundary spanners', a term coined by Michael Tushman to describe people who form connections between networks and play an important part in helping information and ideas move from one network to another.
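As an aside, for anyone curious about what Social Network Analysis looks like in practice, here is a minimal sketch using Python and the networkx library (the people and ties are entirely invented for illustration). Betweenness centrality is one common measure for flagging people who sit on the paths between otherwise separate groups, in other words potential boundary spanners:

# Illustrative sketch: find potential boundary spanners in a small, invented
# collaboration network using betweenness centrality.
import networkx as nx

G = nx.Graph()
# Two fairly dense teams...
G.add_edges_from([("Ann", "Bob"), ("Bob", "Cara"), ("Ann", "Cara")])
G.add_edges_from([("Dev", "Eli"), ("Eli", "Fay"), ("Dev", "Fay")])
# ...joined only through Gia, a weak tie connecting the two groups.
G.add_edges_from([("Cara", "Gia"), ("Gia", "Dev")])

# Betweenness centrality measures how often a person lies on the shortest
# paths between other people; high scores suggest boundary-spanning positions.
for person, score in sorted(nx.betweenness_centrality(G).items(),
                            key=lambda item: item[1], reverse=True):
    print(person, round(score, 2))

Run this and Gia comes out with the highest score, which is the quantitative echo of Granovetter's point: the infrequent contact bridging two groups is often the most valuable route for new information.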

Receiving feedback from both training professionals and systems people made me realise that my book will itself be operating as a boundary spanner, trying to communicate training ideas to systems professionals and vice versa. Really, though, the main target for the book is training professionals, so the priority is to help them develop an understanding of how systems thinking ideas can be useful.

It remains to be seen how useful as a boundary spanner my book will become!



Evaluating the unintended or unexpected

Published in Evaluation
Last week I was asked for some advice about questions to ask when carrying out a Level 3 evaluation (in Kirkpatrick-speak). Conventional wisdom is that this is about behaviour change, whether or not people change their behaviour in line with the objectives of the training. And of course, when we are trying to do a Level 3 evaluation we are also interested in what impact any changes of behaviour have at a team or organisational level (the fabled Level 4).

All well and good, but a systemic perspective exposes a possible weakness in traditional approaches to training evaluation: because they take the learning objectives as their reference point, evaluation against those objectives can end up being the only thing that is done, when, of course, training can have other consequences.

There are basically two sorts of other consequences. The first is at Level 3: what other behaviours have changed as a result of the training? For example, has there been any change in the way people interact with each other, perhaps talking about the training course, questioning how useful it has been, or discussing what implications it has for everyday practice? Has this improved what people do, whether in the subject of the training or in other areas? This is all about different aspects of knowledge management, about the impact of formal training on informal learning practices.

The second is at Level 4: what are the outcomes at the team or organisational level, and what impact do they have? If we don't think about what changes the training makes to a team, it becomes harder to look at the impact it might then enable. For example, people who attend a training course may get to know each other better, develop trust and therefore become better team players in the future. This is an example of improving 'social capital', a somewhat nebulous but nevertheless very important factor in organisational effectiveness, one which may only become apparent when it is lacking! Of course, the implications of this may be much longer-term than a straightforward observable output, such as the level of sales or of widgets manufactured, and it may also be harder to measure, but this does not make it any less important. If we do not recognise it as important, it is unlikely that we will even try to measure it.

For example, if training does lead to an increase in levels of widgets produced over a six-month period but leads to increased workforce dissatisfaction with potential longer-term implications, what is its true value?





Learning styles: serious tool or parlour game?

Published in Training design
I have recently been involved in looking at several different training of trainers events. Although the events have been for different target groups and in different sectors, in all cases some time in each course was spent on analysing (and subsequently referring back to) learning styles of different types, in particular those based around Kolb's experiential learning cycle.

Now, I've often done similar activities in my own training, and know that participants seem to find this kind of self-analysis quite fun and interesting ...  but is it just a bit of fun or is it really of significance?

I've started to ask this question more since I have been looking at training and learning from a systems thinking perspective. Every system has to have its own environment with which it has some sort of relationship, and this relationship influences the functioning of the system in some way. What does this mean if 'learning' is the system?

Thinking particularly about Kolb, his work comes from a humanistic psychology perspective, which means that he considers how a whole person behaves rather than how that behaviour has come to be. This contrasts with more psychoanalytical approaches, which seek to understand how a person's history (i.e. their environment) has affected their behaviour. So his cycle of experiential learning describes how a free-floating individual makes sense of new information, which is fine so long as we, when using the idea, also consider how what is going on around the individual might influence how the cycle works.

However, quite a few writers have suggested that when we take Kolb's ideas further, claiming that individuals have a preference for one or two of the stages in the learning cycle, the humanistic approach creates a problem by ignoring the effects of the environment. Learning styles questionnaires work by asking people to reflect on how they learn in different situations, and respondents then receive some sort of summary identifying them as having one or two 'preferred' learning styles. The contention is that this analysis is only valid for the situations considered in the questionnaire and at that moment in time, so in different situations or at another time the individual might respond quite differently. Which means that there may be no such thing as a person's permanently preferred learning style, only a preference at a given moment. Which makes the questionnaire a bit pointless ...

For me I know that I approach new learning situations differently depending on various factors, such as what the situation is, how familiar I am with it as a general class, how much time I have, how well I need to be able to respond, and so on.

So I'm left feeling that learning styles might be a bit of fun to talk about, but that pinning "Activist" or "Reflector" badges on people might be at best a bit of a waste of time, or worse, misleading and perpetuating one of training's great myths.



Training needs analyses: do they exist?

Published in Reflections
When I was doing the research for my upcoming book on training needs analysis and systems thinking I came across an article in the Journal of Applied Psychology (see reference below) which summarised a meta-analysis looking at what factors seem to influence the success of training programmes. One statistic which caught my eye was that their data suggested that only 6% of training programmes were based on a training needs analysis. 6%, not many!

The authors of the study did point out that it was often not clear what a 'training needs analysis' constituted, and that their research looked at published studies, so it was possible that in the 'real world' organisations were indeed carrying out needs analysis activities. So, to try and get some different perspectives on this I asked the question in one of my LinkedIn groups: "Training needs analyses: do they exist?".

Very quickly the question attracted over 100 comments from many different people, and they are still coming, so the question was clearly of interest. In general, the comments showed a lot of frustration with how organisations currently go about conducting needs analyses.

With so many comments it is difficult to draw specific conclusions, but a number of common threads appeared during the course of the conversation.

Essential but not happening. Many people agreed with my initial proposition that while TNAs are universally said to be essential, they are often not carried out in any significant way.

TNAs take too much time. Organisations want quick results, and running a training course is a quick solution (although, as many people pointed out, it does not guarantee quick results).

What is a TNA? Quite a few people discussed the difference between a training analysis and a performance analysis, seeing the performance analysis as something which came first, to identify what factors are affecting performance, followed up by the training analysis to decide how training can contribute. Interestingly, several of these comments mirrored what I have seen in the standard TNA literature, that these are sequential events, which, from a systems perspective, runs the risk of creating stand-alone solutions which do not necessarily integrate with each other.

The lack of clarity about what a TNA actually is seems to mean that all kinds of activities can fall within the definition of a TNA, ranging from gut reactions to systematic organisation-wide surveys.

Adult learning. Another thread was the common lack of understanding amongst non-training professionals as to how adults learn, leading to inappropriate solutions.

Developing baselines. The intimate relationship between a training needs analysis and an evaluation was also pointed out: how can you evaluate the effectiveness of training if you have no idea what the original problem was?

An interesting exercise in eliciting views, and one which highlights how far the training profession has to go in making organisations realise how important it is to really think about the reasons for embarking on training programmes.

For more information see: Arthur, W. Jr., Bennett, W. Jr., Edens, P. S. & Bell, S. T. (2003). 'Effectiveness of training in organizations: a meta-analysis of design and evaluation features', Journal of Applied Psychology, 88(2), 234–245.



Training - event or part of a system?

Published in Reflections
In a few days' time I will click a Send button and my Master's dissertation will go off to meet its marker. That will be the culmination of 15 months of alternating periods of reflection, inaction and obsession, and I will not be sorry that it is all over. I have lost count of the number of nights where I have woken in the dark to start worrying about some area of research I have not yet explored but definitely must. And then woken blearily to the rising sun with little clear memory of those nocturnal moments of complete clarity.

Research such as this makes you focus on increasingly narrow subjects, and it becomes very difficult to see the bigger picture of what you are looking at. So when I came to a question in the dissertation template asking me for some observations about the wider implications of the research for my professional practice, I was somewhat taken aback.

It took me a few days to adjust my focus and think of an appropriate answer. I eventually realised that one way in which I can now look at my professional practice differently is to stop treating a workshop or an e-learning course as a single event, and instead always to see it as part of what might be called 'a learning system'.

In an earlier post I talked about comments made by several people at OEB 2015 about the 'training as pizza' model: how long would you like the workshop to be? Too often, training is seen as the only solution needed to solve performance problems, a view which overlooks the operational context of how people actually do their work: by experimenting and reflecting, by asking other people for help, by discussing things they don't understand, and so on.

By thinking systemically about how learning contributes to improving performance, we become able to see much more clearly the small part that single events such as a workshop play in the whole learning system: supporting the development of social learning networks, strengthening social capital and producing other intangible outcomes. Formal training should never be the only answer; it should always be designed as just one part of a systemic change process which strengthens learning and the ability to apply new skills.


