The systems thinking and training blog - Bryan Hopkins


Limits to “Limits to growth”

Published in Reflections
Those of us of a certain age with an interest in environmental issues will remember “The limits to growth”, the 1972 Pan paperback authored by Donella Meadows and others at MIT. It will also be familiar to students of systems thinking, as it was perhaps the first major use of computer technology to support a large system dynamics model.

The LTG model that the MIT team developed looked at the relationships between population, industrial output, food production, pollution, availability of non-renewable resources, birth rates, death rates and the level of services that could be provided (health and education), and made projections about how these might interact through to the year 2100. The predictions were not good. Although the team ran the model under different conditions, the scenario seen as most likely, given the way the world economy ran in the 1970s, was that most things would progress steadily until about 2030, at which point there would be a collapse as a shortage of natural resources made it harder to deliver services. In 1972, 2030 was literally a lifetime away (mine, certainly), but now it looms large.
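The stock-and-flow reasoning behind a system dynamics model like LTG can be sketched in a few lines of code. To be clear, the toy simulation below is not World3: the variables, rates and the crude 'collapse' rule are all invented for illustration. It simply shows how exponential growth running against a finite resource base produces steady progress followed by sudden decline.

```python
# A toy stock-and-flow simulation in the spirit of system dynamics.
# NOT the World3 model: all names and coefficients are invented.
# Industrial output grows exponentially while consuming a finite,
# non-renewable resource stock; once the stock is exhausted, output
# collapses.

def simulate(years=130, resources=1000.0, output=1.0,
             growth_rate=0.04, resource_per_unit=0.5):
    """Run one trajectory, returning (year, resources, output) tuples."""
    history = []
    for year in range(1972, 1972 + years):
        history.append((year, resources, output))
        # Output grows exponentially while resources last...
        output *= (1 + growth_rate)
        # ...but each unit of output consumes non-renewable resources.
        resources -= output * resource_per_unit
        if resources <= 0:       # depletion reached
            resources = 0.0
            output *= 0.5        # crude stand-in for the collapse dynamic
    return history

if __name__ == "__main__":
    for year, res, out in simulate()[::10]:
        print(f"{year}: resources={res:8.1f}  output={out:6.2f}")
```

The real model tracked many more interacting stocks (population, pollution, food, services), but the essential dynamic is the same: it is the feedback between growth and depletion, not any single variable, that produces the collapse.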

As George Box said, all models are wrong but some are useful. So how useful has the LTG model been? Although it was roundly criticised throughout the 1970s by neoclassical economists, several studies conducted over the last 20 years have shown its predictions to be alarmingly accurate, so if nothing changes, we may not be far from catastrophe.

But one of the more interesting aspects of LTG is little known. The report was commissioned by the Club of Rome, a group of intellectuals and professionals, on the basis of a paper written by one of its members, Hasan Ozbekhan, entitled "The predicament of mankind". In it Ozbekhan listed 49 'continuous critical problems' that he thought the world faced at that time. The paper called for the then newly emerging computer technology to be used to explore the relationships between these problems, to try to develop an understanding of how they were connected and what the implications might be. LTG was the result.

However, Ozbekhan was not happy with the result because he felt that the MIT team had focused very much on hard information that was easily quantifiable and amenable to system dynamics technology. One of the continuous critical problems that they did not take into consideration was CCP-18, “Growing irrelevance of traditional values and continuing failure to involve new value systems”. Subsequent research into the 49 CCPs using different computer analysis techniques has shown that CCP-18 is a root contributor to all of the other problems, suggesting that a failure to deal with value systems dooms everything else.

A cursory reflection on late 20th and early 21st-century value systems seems to confirm this. By the 1980s the world was dominated by neoliberal thinking and the importance of the individual and individual choice: consumption, lifestyles, the need for new things, the drive to better oneself.

I am not sure that we have learnt much about the possibly fatal nature of our Western value systems. During 2020 at the height of the COVID-19 pandemic there was talk about ‘building back better’, and movement restrictions might have encouraged people to reflect on what was actually important in their lives and what they wanted for the future.

Will this have triggered a new enlightenment?

Using theory-based approaches to evaluate training

Published in Evaluation
I was recently invited by the British Institute for Learning & Development to contribute something to their blog. I decided to write a piece about using theory-based evaluation methodologies to evaluate training, as an improvement over existing Kirkpatrick-based approaches.

Rather than repeat myself here or try to edit what I have already written, here is the link to the relevant BILD blog.

Using the OECD-DAC criteria for training evaluation

Published in Evaluation
I was recently reading through the specifications for a training evaluation project and was somewhat surprised by the absence of any reference to Kirkpatrick and his famous framework. Instead, the criteria for the training were framed around those used in the development and humanitarian sectors, commonly known as the OECD-DAC criteria (from the Development Assistance Committee of the Organisation for Economic Co-operation and Development). These criteria are relevance, effectiveness, efficiency, impact and sustainability.

Although I am familiar with these criteria from my work in these sectors, it is interesting to reflect on their absence from the training world, where criteria for training effectiveness come from objectives specified at the beginning of the training design process, often structured around the SMART concept. Although training of trainers courses often recommend the use of SMART objectives when specifying training, I have always found the structure somewhat unsuited to training. According to Wikipedia, the first use of SMART is attributed to George T. Doran in 1981, who put the criteria forward as a way to set goals for management activities. This isn't the place to get into the pros and cons of management by objectives, but while SMART criteria may have been suitable for that purpose, in my opinion they just don't work for training.

So where I have been able to, I have always tried to use Robert Mager's three-part objectives structure: performance (the observable verb), conditions (where the performance is carried out) and criteria (the measure of success for the performance). This is much more usable for designing training, but it is still very difficult to use this structure to define overall success criteria for a training programme. In my time I've seen many training programmes designed around objectives (both SMART and Mager-type) where it would be impossible to draw any conclusions about success, because of the complexity of real-world performance and its measures. Often this is down to training objectives being written in a way that suggests the aim of the training is to cause major behavioural or organisational changes. The writer of the objectives may realise that other strategies need to be implemented to support the training, but without a comprehensive training needs analysis this may never be explicitly stated. And if the wording of a training objective is not realistic as far as real-world behaviour is concerned, the objective cannot be used as the basis for an evaluation.

Which brings us back to the criterion problem. I think the five OECD-DAC criteria have much to offer. Here are a few questions relevant to each of the criteria which would be useful in a training evaluation.

Relevance. How good is the match between what the training programme contains and what the target audience needs, in terms of content and process? Is the programme relevant to the overall vision of the organisation?

Effectiveness. Have the people learnt what they were supposed to have learnt? How well have they been able to transfer this learning to the workplace? What may have obstructed implementation of their learning?

Efficiency. What alternatives would there be for achieving the desired change in behaviour or impact? Could the training have been delivered in a more cost-effective way?

Impact. What changes have there been in behaviour and performance? How might training have contributed to this change?

Sustainability. How sustainable is the training programme (cost, logistics, etc.)? How sustainable is the change in behaviour or performance? Will things go back to normal when people forget the novelty of the training programme?
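For anyone planning an evaluation along these lines, the five criteria and their questions can be organised as a simple checklist to work through. The sketch below is my own arrangement of the questions above, not an official OECD-DAC instrument; names and structure are illustrative only.

```python
# A minimal sketch of the five OECD-DAC criteria as an evaluation
# checklist. The question wording follows the post above; nothing here
# is an official OECD-DAC artefact.

DAC_CRITERIA = {
    "relevance": [
        "How good is the match between programme content and audience need?",
        "Is the programme relevant to the organisation's overall vision?",
    ],
    "effectiveness": [
        "Have people learnt what they were supposed to learn?",
        "How well has learning transferred to the workplace?",
    ],
    "efficiency": [
        "What alternatives were there for achieving the desired change?",
        "Could the training have been delivered more cost-effectively?",
    ],
    "impact": [
        "What changes in behaviour and performance have occurred?",
        "How might the training have contributed to those changes?",
    ],
    "sustainability": [
        "Is the programme itself sustainable (cost, logistics)?",
        "Will the change in behaviour persist over time?",
    ],
}

def evaluation_plan(criteria=DAC_CRITERIA):
    """Flatten the checklist into (criterion, question) pairs for a plan."""
    return [(c, q) for c, qs in criteria.items() for q in qs]
```

Structuring the questions this way makes it easy to add organisation-specific questions under each criterion without losing the overall frame.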

There are of course a lot of questions which could be asked under each heading. For me, one of the interesting things about using the OECD-DAC criteria is how much more systemically the evaluation process becomes. It encourages us to think about the history and process of the training and not just to focus on impact or behaviour or the narrow requirements of a SMART objective. The criteria may have been designed for a different sector, but they have a lot to offer the training world.

Designing better smile sheets: essential reading

Published in Evaluation
I have just been reading a new book by Will Thalheimer called "Smile Sheets", and an excellent read it is too.

If you don't know Will, he specialises in reading academic research about learning and thinking about how it can contribute to training. Training is, of course, an area with various strange practices based on mythical facts. One of my favourites is the 'cone of experience', the claim that we remember 10% of what we read, 20% of what we hear and so on. Claims such as this are presented in training and, because they seem to make some sense, get repeated, and slowly become fact. This particular topic is one that Will has discussed in the past, and I can recommend a visit to his blog to learn more.

Anyway, in his latest book he focuses on the smile sheet or, to give it its polite name, the reaction questionnaire (à la Kirkpatrick). Although this is the bedrock of most training evaluation activities, the book discusses in some detail the lack of research evidence that it is meaningful in any way. There are several reasons for this. One is that the questions included in reaction questionnaires are often poorly constructed from a statistical point of view, and force the learner into giving a positive response. Another is that surveys conducted in the training environment, while the training is still under way, are heavily influenced by the fact of being there, and allow no time for reflection on what the training has been about. Finally, there is very little evidence that merely reacting positively to a training activity means that there will be learning, even though this is a fundamental assumption of the Kirkpatrick framework, which underlies much thinking in training evaluation.

Will then goes on to talk about what learning actually means and provides a practical guide to how to design 'smile sheets' which can actually produce meaningful and useful data. It is a most entertaining and illuminating read, and I certainly wish that I had read it before sending my own manuscripts to my publisher!

If you do get involved in any way with training evaluation, buy yourself a copy. At $25, it's well worth it.

Update on my forthcoming book

Published in Evaluation
This is a slightly different entry: an update with information on my new book, "Learning and Performance: A Systemic Model for Analysing Needs and Evaluating Training".

This is a practical guide for using systems thinking concepts such as boundary definitions, multiple perspectives and relationships in carrying out training needs analyses and programme evaluation.

It explains how to use techniques such as the Viable Systems Model and Soft Systems Methodology to explore areas of concern in organisational performance, in order to identify a holistic set of solutions which can improve performance. In the case of evaluating training, it uses these tools to provide a practical approach to evaluating both the learning and impact of training.

The book will be published by Routledge in late 2016 with an expected cover price of about £45. However, after some discussions with my publisher, we have decided to take pre-publication orders at the heavily discounted price of £20.00. That looks like a very good deal.

If you are interested, all you have to do to register a pre-publication order is contact Jonathan at Routledge.

Why I like 70:20:10

Published in Informal learning
I have just finished reading the report "70+20+10=100: The Evidence Behind The Numbers", produced by Charles Jennings and Towards Maturity. A most interesting and worthwhile read.

For clarity, the 70:20:10 model refers to an observation that 70% of what people learn comes from real life and on-the-job experience, 20% from working with other people and 10% from formal training. These figures came from observations on leadership in a largely male target group, and, as the report acknowledges, different figures have been derived for female groups. Other research, not referred to in the report, discusses how 80% of what people learn comes from 'informal' means.

So there is some disagreement about the numbers, but this is largely because of the difficulty of actually defining what these categories of learning mean: what is the difference between 'on-the-job experience' and 'working with others'? What exactly is 'informal' learning?

But really, the numbers are not important. The reason 70:20:10 is such a useful concept is that it provides a simple model around which people can conceptualise the importance of integrating formal and informal learning, something the training industry has struggled with for many years. As the report says, its value is in helping people to realise that learning is a complex, multi-faceted activity, and that taking steps to facilitate non-formal learning opportunities and integrate them with formal training can bring rich rewards to organisations.

As I read the report I felt a strong sense of vindication for my own systems-based approach to analysing performance issues and developing training strategies. Using a systems approach automatically means that we develop an understanding of each part of the 70:20:10 triad: the dynamics of the workplace, how people work with each other and share information, what barriers and enablers may exist to implementing new knowledge and skills, and so on. This can help us to design training which explicitly helps people to integrate their informal learning opportunities with training. Definitely a good thing!

The importance of boundary spanners

Published in Reflections
I have not been able to write any blog posts recently because all of my time has been taken up with finalising the manuscript for my new book, provisionally entitled "Performance and evaluation: a systems thinking approach". I eventually managed to get everything packaged up and sent off to Routledge, my publisher, by the end of last week, so it should now be published towards the end of 2016.

It has taken about two years to put the book together, pulling together experiences from work that I have done and from my ongoing studies with the Open University. Taking that amount of time has some advantages, but it also means that what you learn along the way makes sections written early on seem somewhat inadequate. There is also the problem of a writing style that changes as time goes by: what seemed a good way to put things in 2014 did not by 2016. These inconsistencies were exposed by the review process. I asked a number of trusted associates to look at the draft manuscript, and their comments were invaluable, pointing out how the logic of the structure could be improved and identifying weaknesses in words, sentences and paragraphs.

It made me realise that a fundamental challenge for the book was bringing together two academic disciplines, the training world and the systems thinking world. A core theme within the book is learning as a networked activity, of connections between people in groups. One of the topics I discuss is Social Network Analysis, a way of quantifying how people work together. Some early research in this field was by Mark Granovetter, whose article "The strength of weak ties" looked at how working-class people relied on social networks to find information about job opportunities. He saw that people have both strong ties, with friends and family, and weak ties, with friends of friends and infrequent contacts. His research showed that the people who were most useful for identifying work opportunities were actually the weak ties, because these people were connected to other networks. They functioned as 'boundary spanners', a term coined by Michael Tushman to describe people who form connections between networks and play an important part in helping information and ideas to move from one network to another.
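Granovetter's weak ties and Tushman's boundary spanners can be made concrete with a small Social Network Analysis calculation. A standard measure of 'bridging' positions is betweenness centrality, which counts how often a person sits on the shortest paths between others; the sketch below computes it with Brandes' algorithm over an invented seven-person network in which one person links two otherwise separate clusters. The names and ties are purely illustrative.

```python
# Illustrative sketch: spotting a potential 'boundary spanner' in a
# small network via betweenness centrality (Brandes' algorithm).
# The people and ties below are invented for illustration.

from collections import deque

# Two tight clusters (ann/bob/carol and dan/fay/gus) joined only via eve.
TIES = {
    "ann":   {"bob", "carol"},
    "bob":   {"ann", "carol"},
    "carol": {"ann", "bob", "eve"},
    "eve":   {"carol", "dan"},
    "dan":   {"eve", "fay", "gus"},
    "fay":   {"dan", "gus"},
    "gus":   {"dan", "fay"},
}

def betweenness(ties):
    """Brandes' betweenness centrality for an unweighted, undirected graph."""
    bc = {v: 0.0 for v in ties}
    for s in ties:
        stack, queue = [], deque([s])
        pred = {v: [] for v in ties}    # predecessors on shortest paths
        sigma = {v: 0 for v in ties}    # number of shortest paths from s
        dist = {v: -1 for v in ties}
        sigma[s], dist[s] = 1, 0
        while queue:                    # BFS, recording shortest paths
            v = queue.popleft()
            stack.append(v)
            for w in ties[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = {v: 0.0 for v in ties}
        while stack:                    # accumulate path dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return {v: c / 2 for v, c in bc.items()}  # undirected: halve the counts

if __name__ == "__main__":
    for name, score in sorted(betweenness(TIES).items(),
                              key=lambda kv: -kv[1]):
        print(f"{name}: {score:.1f}")
```

Run on this network, 'eve' comes out with the highest score: every shortest path between the two clusters runs through her, which is exactly the boundary-spanning role described above.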

Receiving feedback from both training professionals and systems people made me realise that my book will itself be operating as a boundary spanner, in that it tries to communicate training ideas to systems professionals and vice versa. Really, the target audience is training professionals, so the priority will be for them to develop an understanding of how systems thinking ideas can be useful.

It remains to be seen how useful as a boundary spanner my book will become!

Evaluating the unintended or unexpected

Published in Evaluation
Last week I was asked for some advice about questions to ask when carrying out a Level 3 evaluation (in Kirkpatrick-speak). Conventional wisdom is that this is about behaviour change, whether or not people change their behaviour in line with the objectives of the training. And of course, when we are trying to do a Level 3 evaluation we are also interested in what impact any changes of behaviour have at a team or organisational level (the fabled Level 4).

All well and good, but one of the weaknesses in traditional approaches to training evaluation, which becomes apparent when using a systemic perspective, is that because they start from the reference point of the learning objectives, evaluation against those objectives can end up being the only thing that is done, when, of course, training can have other consequences.

There are basically two sorts of other consequences. The first is at Level 3: what other behaviours have changed as a result of the training? For example, has there been any change in the way people interact with each other, perhaps talking about the training course, questioning how useful it has been and what implications it has for everyday practice? Has this improved what people do, whether in the subject of the training or in other areas? This is all about knowledge management, about the impact of formal training on informal learning practices.

The second is at Level 4: what are the outcomes at the team or organisational level, and what impact do they have? If we don't think about the changes the training makes to a team, it becomes harder to look at the impact those changes might then enable. For example, people who attend a training course may get to know each other better, develop trust and so become better team players in the future. This is an example of improving 'social capital', a somewhat nebulous but nevertheless very important factor in organisational effectiveness, one which may only become apparent when it is lacking! The implications of this may be much longer term than a straightforward observable output, such as the level of sales or of widgets manufactured, and may also be harder to measure, but that does not make them any less important. If we do not recognise something as important, it is unlikely that we will even try to measure it.

For example, if training does lead to an increase in levels of widgets produced over a six-month period but leads to increased workforce dissatisfaction with potential longer-term implications, what is its true value?

Learning styles: serious tool or parlour game?

Published in Training design
I have recently been involved in looking at several different training of trainers events. Although the events have been for different target groups and in different sectors, in all cases some time in each course was spent on analysing (and subsequently referring back to) learning styles of different types, in particular those based around Kolb's experiential learning cycle.

Now, I've often done similar activities in my own training, and know that participants seem to find this kind of self-analysis quite fun and interesting ...  but is it just a bit of fun or is it really of significance?

I've started to ask this question more since I have been looking at training and learning from a systems thinking perspective. Every system has to have its own environment with which it has some sort of relationship, and this relationship influences the functioning of the system in some way. What does this mean if 'learning' is the system?

Thinking particularly about Kolb, his work comes from a humanistic psychology perspective, which means that he considers how a whole being behaves rather than how that behaviour has come to be. This contrasts with more psychoanalytical approaches, which seek to understand how a person's history (i.e. their environment) has affected their behaviour. So his cycle of experiential learning describes how a free-floating individual makes sense of new information, which is fine so long as, when using the idea, we consider how what is going on around the individual might influence how the cycle works.

However, quite a few writers have suggested that when we take Kolb's ideas further, by saying that individuals have a preference for one or two of the stages in the learning cycle, the humanistic approach creates a problem by ignoring the effects of the environment. Learning styles questionnaires work by asking people to reflect on how they learn in different situations, and respondents then receive some sort of summary identifying one or two 'preferred' learning styles. The contention is that this analysis is only valid for the situations considered in the questionnaire and at that moment in time, so for different situations, or at another time, the individual might respond quite differently. Which means that there may be no such thing as a person's permanently preferred learning style, only a preference at a given moment. Which makes the questionnaire a bit pointless ...

For my own part, I know that I approach new learning situations differently depending on various factors: what the situation is, how familiar I am with it as a general class, how much time I have, how well I need to be able to respond, and so on.

So I'm left feeling that learning styles might be a bit of fun to talk about, but that pinning "Activist" or "Reflector" badges on people might be at best a bit of a waste of time, or worse, misleading and perpetuating one of training's great myths.

Training needs analyses: do they exist?

Published in Reflections
When I was doing the research for my upcoming book on training needs analysis and systems thinking I came across an article in the Journal of Applied Psychology (see reference below) which summarised a meta-analysis looking at what factors seem to influence the success of training programmes. One statistic which caught my eye was that their data suggested that only 6% of training programmes were based on a training needs analysis. 6%, not many!

The authors of the study did point out that it was often not clear what a 'training needs analysis' constituted, and that their research looked at published studies, so it was possible that in the 'real world' organisations were indeed carrying out needs analysis activities. So, to try and get some different perspectives on this I asked the question in one of my LinkedIn groups: "Training needs analyses: do they exist?".

Very quickly the question attracted over 100 comments from many different people, and they are still coming, so clearly the question was of interest. In general the comments showed a lot of frustration with the current situation within organisations as far as conducting needs analyses is concerned.

With so many comments it is difficult to draw specific conclusions, but a number of common threads appeared during the course of the conversation.

Essential but not happening. Many people agreed with my initial proposition that while TNAs are universally said to be essential, they are often not carried out in any significant way.

TNAs take too much time. Organisations want quick results and running a training course is a quick solution (although of course it does not guarantee quick results, which many people pointed out).

What is a TNA? Quite a few people discussed the difference between a training analysis and a performance analysis, seeing the performance analysis as something which came first, to identify what factors are affecting performance, followed up by the training analysis to decide how training can contribute. Interestingly, several of these comments mirrored what I have seen in the standard TNA literature, that these are sequential events, which, from a systems perspective, runs the risk of creating stand-alone solutions which do not necessarily integrate with each other.

The lack of clarity about what a TNA actually is seems to mean that all kinds of activities can fall within the definition of a TNA, ranging from gut reactions to systematic organisation-wide surveys.

Adult learning. Another thread was the common lack of understanding amongst non-training professionals as to how adults learn, leading to inappropriate solutions.

Developing baselines. The intimate relationship between a training needs analysis and an evaluation was also pointed out: how can you evaluate the effectiveness of training if you have no idea what the original problem was?

An interesting exercise in eliciting views, and one which highlights how far the training profession has to go in making organisations realise how important it is to really think about the reasons for embarking on training programmes.

For more information see: Arthur, W., Jr, Bennett, W., Jr, Edens, P. S. & Bell, S. T. (2003). "Effectiveness of training in organizations: a meta-analysis of design and evaluation features", Journal of Applied Psychology, 88, 234.

Copyright 2015. All rights reserved.