The systems thinking and training blog - Bryan Hopkins


Why training is never a solution to a workplace performance issue (but then, nothing else is either)

Published in Training design
This blog item was originally published as a LinkedIn article.

Early in my schooling I was presented with the problem “2 + 2 = ?”. With the aid of various fingers, I solved that one, and in due course went on to complete an engineering degree, where I solved some much more complicated problems than that. Had I continued in engineering, I might have contributed to the mathematics which lands spaceships on Mars: even more complicated, but given equations, speeds and trajectories we can confidently work out how to get this job done. It is just rocket science, after all, with clear processes to follow … and solutions.

It is a bit different on the days when I am planning what to do when I look after my two-year-old grandson. I have an idea about what we will do and know what he is allowed to eat and not eat. But he has his own ideas, and what actually happens on those days emerges out of the interactions of these different perspectives. Our relationship is beyond complicated: it is complex, a heady, undefinable and unpredictable mix of human behaviours. Quadratic equations and Laplace transforms do not help, and there are no solutions giving a plan for a perfect day.

This will not be new to many readers. But actually, what we do when we design training programmes is to pretend that human behaviour is predictable and treat the whole issue of performance improvement as if it were rocket science. We do this because we have been seduced by the charms of the Enlightenment, that period in history when rational thought started to replace mysticism. It was thought that we could understand anything by breaking it down into its constituent parts, seeing what each part did and adding it all back together. This does work well for rockets, but not for my grandson and me, nor for people working in organisations.

The starting point for training design is to work out what we would like people to be doing, define performance objectives and then, explicitly or implicitly, deconstruct these to identify the specific aspects of knowledge, skills and attitudes that are needed. We then have the bones of the training programme. This might be a good way to start the process of designing something to improve performance, but it has serious weaknesses if we start to use these same objectives to make judgements about how effective the programme is after it has been implemented. After all, as soon as we start training people, these simple pieces of knowledge, skill and attitude interact with human behaviour issues and start to take everyone involved in directions we may not expect.

Let’s think about these behaviour issues more closely. People interact with each other, interactions have consequences and create feedback loops, information comes in from outside the group, and there is a history which has moulded the group into what it is at any particular moment. As such, workplace groups can be regarded as complex adaptive systems: systems which are constantly changing in response to internal and external dynamics. Of particular importance is the reality that human interactions are non-linear: there is no direct, consistent connection between cause and effect. This means that when we train someone to do something better they may not actually do it better, or doing it better may cause negative feedback within the system (resentments, jealousies, infringing implicit performance norms, leaving the organisation and so on).
We also know that when we look at problems in the workplace we can find it very difficult to describe exactly what the problem is: everyone will describe it in different ways, depending on their own view of what is happening. Because explanations of the problem are different, definitions of success will be different. Anything we do to change things within a problem situation changes the conditions, so the nature of the problem changes. We also find that the problems we are exploring are actually to some degree caused by other problems. So, as we saw before, because everything is connected we have a network of complex adaptive systems, all constantly evolving to generate situations which we cannot possibly predict in advance.

Given this complete mess, how do we start to make things better? The key is to stop thinking in terms of finding ‘solutions’. Complex, wicked problems[1] never come to an end; they just keep changing, and all we can do is try to make things better: we will never be able to ‘solve’ them. This has big implications for training design.

Firstly, training programmes are usually based around sets of static performance objectives or learning outcomes, defined at a specific point in time. But by the time a programme has been designed the problem is different, so the objectives may have become irrelevant. We should therefore think more about trends: is the situation developing in a desirable direction? This also means that instead of an evaluation carried out some time after the event we need to do more ongoing monitoring. This helps to get around the problem of deciding when to carry out an evaluation, which is always difficult: too soon, and any initial enthusiasm colours the results; too late, and causality becomes far too indistinct to give the evaluation any meaning.

Secondly, objectives are usually expressed in the form “The learner will be able to: …”. This focuses training on individuals and overlooks the fact that everyone works within a complex adaptive system. It means that the content of training tends to focus on individual knowledge and skills rather than on collaborative or cooperative activities. Training initiatives should be more team-oriented, involving staff and supervisors, along with other teams with which they interact. Objectives should focus on positive change rather than on achieving an end state.

Thirdly, the constantly changing landscape within complex adaptive systems means that top-down didactic training can never hope to give people the knowledge and skill they need to be able to deal with all the evolving operational variety they face. So performance improvement strategies must create structures and space where people can exchange information and learn from each other.

So training can be a sort of solution, as long as we do not see it as providing a definitive result. Solution-oriented thinking also tends to create responses which are structured as projects, i.e. with a beginning, middle and an end. If we escape from that particular thinking box, we can conceive more easily of learning interventions which are ongoing strategies, constantly adapting and being adapted to help people continue to move in a desired direction.

[1] The term ‘wicked problem’ was coined in Rittel, H.W. & Webber, M.M., 1973. “Dilemmas in a General Theory of Planning”. Policy Sciences, 4(2), pp. 155–169.



How cybernetics can help improve the quality of training programmes

Published in Training design
 
This posting was originally published as a LinkedIn article (https://bit.ly/2kFr8W1)

Cybernetics. A word which evokes thoughts of robots, of Doctor Who's Cybermen, or of ‘transhumanists’, people who are looking to improve human performance by integrating technology into their bodies. But that is only one aspect of cybernetics, and one which does not readily suggest how cybernetics can contribute to learning.

The Oxford English Dictionary defines cybernetics as “the science of communications and automatic control systems in both machines and living things”. Thinking about the ‘living things’ part of this definition, cybernetics therefore looks at how organisms interact with their environment, exchanging information and materials in feedback mechanisms which, if functioning correctly, ensure the organism’s survival.

Of course, organisations are organisms, being composed of living human beings. Cybernetic principles have therefore been used to analyse organisational behaviour, and one thread of thinking, sometimes called organisational cybernetics, is of interest to us here.
 
Within this perspective, each individual worker interacts with their operational environment, exchanging information and other resources. By extrapolation, so does the overall organisation (of course, in a one-person organisation, the individual is the organisation!), and it is therefore reasonable to assume that we can apply principles of cybernetics to how individuals and their parent organisations operate. Each person’s ‘environment’ includes both external entities (clients, suppliers and so on) and internal entities (colleagues, other departments and so on). We therefore have potentially a complex set of interacting feedback loops, which can make it somewhat difficult to understand what is happening.

However, there exists a very powerful tool called the Viable System Model (or VSM) which can help us to make sense of things. The VSM is based around the interrelationship of five distinct but interconnected systems of information and resource exchange. Within the VSM literature, these are typically shown in a diagram like the one below.

[Diagram: the Viable System Model, showing the five systems and the environment they interact with]
The key concept in VSM is viability: being able to survive successfully in the face of whatever variety exists in the environment. Essentially, the organisation must be able to show enough variety in its own behaviour to match the variety it has to deal with. To explain this with an example, if we are looking at a healthcare organisation working with an environment of people who are old, young or have disabilities, its internal organisation must be structured so that it can look after people who are old, young or who have a disability. This may seem blindingly obvious, but it is all too common for training programmes to be limited in scope and inflexible in their messages, making it harder for people to learn how to work flexibly and function as a viable system. It is also very important to remember that environments are constantly changing, so each worker's capacity for dealing with variety (and the training required to enable this) must also be changing.
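For readers who like to see an idea in code, here is a toy sketch in Python (entirely my own illustration, not something from the VSM literature; the categories are invented) of this variety-matching requirement: viability fails wherever a category of environmental demand has no matching organisational capability.

    # A toy sketch of variety matching: the organisation is only viable if
    # every category of demand in its environment is covered by some
    # capability in its repertoire.
    environment_variety = {"older patients", "children", "patients with disabilities"}
    organisational_repertoire = {"older patients", "children"}

    uncovered = environment_variety - organisational_repertoire
    if uncovered:
        # Training is one of the ways the repertoire gets extended.
        print("Not viable: no capability for", ", ".join(sorted(uncovered)))
    else:
        print("Requisite variety achieved, for now: environments keep changing.")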

This VSM diagram showing how an organisation operates looks completely different to the classic organisation chart, structured by function. But it has a major advantage in that it shows how the organisation works (or should work), whereas the organisation chart simply shows a structure and says nothing about interactions or operation. The organisation chart is derived from a hierarchical, bureaucratic mindset, which goes a long way to explaining why people often complain about “working in silos”: if that is how we think about an organisation's structure, then that is the way we behave.

So briefly, how does this VSM diagram work?
  • The various System 1s are the operational (or implementation) activities: what delivers value to customers or clients, such as sales, procurement, fulfilment and so on. Every individual System 1 must be viable, in that it can respond appropriately to changes in its environment.
  • System 2 is coordination between the operational activities, making sure that, for example, increased sales activity is matched by an increase in procurement of raw materials or other resources.
  • System 3 is the delivery (or control) function, making sure that the different System 1s and System 2 all have the resources that they need. It actually works in two directions: what is often called System 3* is a monitoring (or audit) function, through which each System 1 and 2 reports back so that System 3 can provide what is needed.
  • System 4 takes information from both the internal and external environment and makes sure that the organisation remains in tune with what its customers and clients want, passing this information on to Systems 3 and 5.
  • System 5 sets the policy for the whole organisation, making sure that organisational activity remains in line with its vision and goals and is appropriate for the environment.

Crucially, this structure is recursive: we should be able to see it within each different System 1 throughout the organisation. So we could look at the sales function and break it down into a number of separate System 1s and corresponding Systems 2 to 5. We see then that, for example, at every level of analysis the organisation should be taking appropriate information from its environment and feeding this into what it does.
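For the programmers among us, the recursion can be made concrete with a short sketch. This is purely my own illustration in Python (the class and field names are invented, not part of the VSM vocabulary): each System 1 is itself a complete viable system, so the model refers to itself.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ViableSystem:
        # The five systems, held here as simple descriptive fields.
        name: str
        coordination: str    # System 2: harmonising the operations
        control: str         # System 3, with System 3* as its monitoring channel
        intelligence: str    # System 4: scanning the environment
        policy: str          # System 5: identity, vision and goals
        # System 1s: each operational unit is itself a viable system (recursion).
        operations: List["ViableSystem"] = field(default_factory=list)

    def walk(vs: ViableSystem, depth: int = 0) -> None:
        # At every level of recursion all five systems should be present.
        print("  " * depth + vs.name)
        for s1 in vs.operations:
            walk(s1, depth + 1)

    company = ViableSystem(
        "Company", "production planning", "budgeting and audit",
        "market research", "board strategy",
        operations=[
            ViableSystem("Sales", "territory scheduling", "sales management",
                         "customer feedback", "sales strategy"),
            ViableSystem("Procurement", "order scheduling", "supplier management",
                         "supply market scanning", "sourcing strategy"),
        ],
    )
    walk(company)

Running walk(company) prints the units in their nesting, and the same five questions (who coordinates, who controls, who scans the environment, who sets policy?) can be asked at every level.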

If we use a VSM approach to look at how training is designed and delivered, we can identify principles which will make sure that training promotes viability.

Firstly, there is a major distinction between System 1, the operational activities, and the other four systems, which broadly represent what we would call ‘management’. Training for Systems 2 to 5 is often subsumed in what we call ‘management development’, so it is interesting to think about how traditional management development deals with these cybernetically desirable activities. A key observation here is that traditional approaches to management development are often based around the hierarchical, bureaucratic model of organisations, with an emphasis on up-and-down relationships: leadership, delegation, accountability and so on. Less importance may be attached to coordination and collaboration, monitoring or environmental awareness.

Operational training (System 1) needs to make sure that people can deal with all of the variety that they experience in everyday working life (being viable). This means that training should be learner-centred, practical and problem-based. This is well known empirically, being a core part of andragogical, adult learning principles, but here we can see how it is a requirement from cybernetic first principles.

Training designers should also recognise the relationships between different primary functions and make sure that these are incorporated into the training (System 2). Training programmes which focus on strengthening a System 1 without taking into account its dynamic relationship with other operational systems can cause more problems than they solve. This may mean that the scope of training needs to be widened, with related training or information being provided for people in other functions. Existing protocols and standard operating procedures may need to be revised to reflect different patterns in primary functions. There is a particular role here for informal learning, with people being encouraged to exchange information within and across teams so that coordination improves. ‘Training’ often ignores the need to promote informal learning, but it is crucial if the overall organisation is to be viable.

Training itself is an example of a System 3 activity (provision of necessary knowledge and skills). However, the VSM shows that this provision needs to be based on information provided by System 3* (internal) and System 4 (external), which is, of course, what a training needs analysis (TNA) should do. This process may show that there are weaknesses in other System 3 or 3* activities. If there are System 3* weaknesses, reporting systems may need to be strengthened (while not becoming disproportionate or onerous): these would subsequently form an important source of information for training evaluations.

Training should make sure that people have the skills and tools needed to gather information from relevant parts of their environment, about what the environment needs and how it is changing (System 4). They should also be able to use this information appropriately. Training management should also keep monitoring the environment to make sure that training remains appropriate to its constantly changing patterns of variety: TNAs should be ongoing.

Finally, training should always be related to the broader aims of the organisation or department (System 5). This means that people working in a System 5 role should make sure that TNAs are taking place and that what they recommend is consistent with strengthening overall viability.

Too often, training carried out in organisations is not planned from a systemic perspective. Training needs analyses may be perfunctory, with little thought given to the complex web of decisions and interactions which contribute to effective performance. Training programmes are often reductionist, focusing on one small area of knowledge, skills and attitudes which seems appropriate to that particular silo of activity. Thinking about training from a cybernetic perspective can help to avoid this, making sure that the training being delivered is closely integrated with all aspects of organisational activity, so that the organisation continues to be viable in relation to its environment.



Management development in the humanitarian and development sectors: a postscript

Published in Reflections
I recently wrote a blog post (and article on LinkedIn) asking whether conventional forms of management education are relevant for the humanitarian and development sector.

Coincidentally, a few days later I read an article by Martin Parker, published as part of The Guardian's Long Read series, arguing that business schools should be bulldozed! The article is well worth a read, but I think it is worth summarising some of Martin's comments that are relevant to my own writing.

Firstly, he points out that there is an overall assumption that "market managerial forms of social order are desirable", and that "capitalism is assumed to be the end of history".

Secondly, management education is based on an assumption that humans behave as rational egoists, so techniques for managing them are built on that assumption.

Thirdly, business schools are "places that teach people how to get money out of the pockets of ordinary people and keep it for themselves".

So I think you can see that there are parallels between Martin's (much more informed) argument and my own.



On unicorns and training needs analyses

Published in Training design
 
(This post was originally published as a LinkedIn article)

For centuries people were fascinated by the thought of finding a unicorn. Unicorns were believed to have many qualities: they could make poisoned water drinkable and they could heal sickness. Unfortunately, they do not exist.

I think the same about training needs analyses. They are also wondrous things: they identify effective learning interventions, they explain how new knowledge and skills will overcome great obstacles, they provide a baseline for post-delivery evaluations, and so on. The problem is that, like unicorns, they also do not seem to exist.

Well, maybe that’s being a bit dramatic. Some training needs analyses have been carried out here and there. But, really, not that many. On what evidence do I make this assertion?

First, anecdotal. People I talk to in the training world say, almost without exception, that proper training needs analyses just do not take place that often. I also asked in the Learning, Education and Training Professionals group on LinkedIn whether training needs analyses happen, and most replies said that they were the exception rather than the norm.

Secondly, more empirical. Whenever I start a training evaluation project, my first question is about the needs analysis on which the training is based: to date, I have never seen a training needs analysis report. Instead, people explain that it was based on a decision made by someone a while back (who has often left the company) or that the decision-making involved was not documented.

Thirdly, and more rigorously: a 2003 meta-study of training design activities by Arthur et al. noted that, based on the information they were able to review, fewer than 10% of training projects were based on a thorough training needs analysis.

So given their wondrous qualities, why do we see so few proper analyses of workplace situations, leading logically and justifiably to effective training interventions? There are a number of possible reasons.

A thorough training needs analysis will be a fairly complex undertaking, requiring the analyst to explore and develop an understanding of the various wicked problems which are contributing to a perceived performance problem. This will therefore take time, and training managers are often under considerable pressure to deliver quickly.

 
Training professionals may simply not have the breadth of skills needed to be able to understand reasons for performance problems. These problems may be due to poor organisational design, group psychology issues in the workplace, ergonomic design weaknesses or just a lack of understanding of the environmental variety that staff have to deal with when they are carrying out their jobs.
 
It may even be unclear what the actual problem is: one person may say that it is due to inadequate knowledge, another to weaknesses in operating procedures, and so on.

 
There is a significant lack of integrated and comprehensive tools for training needs analysis. Ideally such an analysis should take place at three separate levels: organisational, to understand the wider picture affecting performance; operational, to look at what is happening at the functional level; and personal, to understand who the affected staff are and how training can best be integrated into their working lives (Goldstein and Ford, 2001). There are, it should be noted, various tools available for helping with these levels of analysis, but they are probably not as widely known or used as they should be. For example, there are Gilbert's Behaviour Engineering Model and Mager and Pipe's performance flowchart for functional analysis, and Holton's Learning Transfer System Inventory for the personal level of analysis.

 
Finally, there is the assumption that, whatever the problem is, training will provide a solution. Paradoxically, this is seen to be a valid assumption at the inception of a project, leading to a decision to implement training without thoroughly analysing the problem, but possibly not valid after delivery, when there is a demand for an evaluation and perhaps an assessment of return on investment.
 
Detailed guidance on how to carry out a thorough training needs analysis is beyond the scope of a short article like this, but I have two suggestions.

 
Firstly, involve in the analysis process the people who will receive any training. One of the less well-implemented aspects of Malcolm Knowles' work on andragogy (1977) is that the planning of any adult learning activity should be done in partnership with the learner. The learner knows what operational variety they have to deal with and what gets in the way of satisfactory performance. They will also understand how informal learning channels operate, so that formal training can be designed to integrate with them. All too often the structure and content of training is decided by senior managers who feel that they know what people must know, leading to training which is content-based and trainer-centred.

Secondly, systems thinking provides a set of tools which offer an integrated approach to training needs analysis. Techniques such as the Viable System Model and Soft Systems Methodology make it possible to identify and engage with all the stakeholders in a performance problem, in a way which can lead to a more holistic solution.

To carry out a training needs analysis properly, the training professional has to overcome quite a few hurdles. But if it is done properly, it can have many benefits. There is greater confidence that any training solution will have positive benefits, because it will have been designed with the right content and using appropriate delivery modalities. There should be a better return on investment (if that can actually be measured), because the right tool is being used for the right job. And other non-training interventions should have been identified, which remove obstacles to improved performance and support the successful implementation of new knowledge and skills.

So let’s put an end to the mythical nature of training needs analyses, and try to make them a reality of organisational life.

 
References
 
Arthur Jr., W. et al., 2003. Effectiveness of Training in Organizations: A Meta-Analysis of Design and Evaluation Features. Journal of Applied Psychology, 88(2), p. 242.
 
Goldstein, I.L. & Ford, J.K., 2001. Training in Organizations (4th edition). Cengage Learning.
 
Knowles, M., 1977. Adult Learning Processes: Pedagogy and Andragogy. Religious Education, 72(2), pp.202–211.




Management development in the humanitarian and development sectors: a cause for concern?

Published in Reflections
(This post was originally published as a LinkedIn article)

I came across a sentence the other day which said something like: an idea becomes an ideology when we forget that it is an idea. It struck me that this is a bit like a fish swimming in water, having no idea what water is and not realising that there is any alternative way of living.

This idea of there being no alternative to how things are came to me a while ago when I was doing an evaluation of a management development programme for a humanitarian organisation. My evaluation methodology draws heavily on various critical systems thinking tools, and one of the questions which comes out of this is about where the knowledge for a training programme comes from and what credentials these sources have.

So while the terms of reference just asked me to investigate what impact the programme was having within the organisation, my systems thinking sensibilities made me want to probe a little more deeply. After all, if people undertaking a training programme are receiving what they consider to be inappropriate knowledge and skills, then their enthusiasm for applying these new ideas will probably be somewhat diminished.

The programme I was evaluating contained modules looking at what are familiar subjects in management training courses: delegation, time management, leadership and the rest. The technical content had been provided by an American business school. I was to do some benchmarking, and so talked to people involved in management training in other humanitarian organisations and discovered a similar picture: content was being provided by institutions such as Harvard, Stanford and the like.

Prestigious institutions indeed, and certainly with good credentials for management training, but where does this thinking come from? As a humble British training consultant I have very little insight into how those ivory towers operate, but I am going to posit that their ideas about effective management come from studies of private sector corporate culture: organisations working in a profit-driven, competitive world, where effective management is geared towards efficient operation of the organisation rather than towards solving the complex political, social and economic issues of dysfunctional or struggling societies, as are typically found in humanitarian and development contexts.

Max Weber, writing in the early twentieth century, provided a useful idea for thinking about management in organisations. He talked about bureaucracy as being such an efficient way of running an organisation that it created an ‘iron cage’ which led to an irreversible momentum for bureaucratisation. Bureaucracies work by creating clearly-defined structures and roles which determine in a top-down manner what is done. As a result, most organisations tend to work in similar ways. This idea was taken up and developed further by DiMaggio and Powell (1983), who explored the idea of organisational isomorphism, where entities operating within the same environment become increasingly similar in the way they work.

This process works in three ways. Coercive isomorphism results from pressures to conform due to dependency relationships. Mimetic isomorphism happens because when an organisation works in an unstable environment or its goals are ambiguous, it seeks some degree of clarity by mimicking other organisations’ ways of working. Normative isomorphism happens when professionalisation of the workforce leads to a narrowing of ideas within the workforce: people study MBAs at institutions teaching essentially the same ideas and then disperse to work in private or public sector organisations, and some of course, in the humanitarian and development sectors. Anyone who works in these sectors will surely recognise these three processes at work.

As DiMaggio and Powell (p. 153) comment, isomorphism means that people "…view problems in a similar fashion, see the same policies, procedures and structures as normatively sanctioned and legitimated, and approach decisions in much the same way."

There are justifications for this isomorphism: "It helps to have common procedures and all to be thinking in the same way", "We need to be more business-like". But we need to examine these rationalisations more carefully. Common procedures may be of some benefit, but if people are all thinking in the same way, how innovative can they be? How can they expect to deal with the infinite variety of the operational environment? And what does being more 'business-like' really mean: making a greater profit, squeezing out other operational agencies?

So what lies behind these commonly-accepted management practices? The great systems thinker Stafford Beer said that the purpose of a system is what it does. So what do management systems do? Standard management practices have come largely out of seeking ways to help profit-seeking enterprises operate more effectively; hence their purpose is to help the organisation survive. What it makes or sells is, in this analysis, irrelevant: survival is the primary goal. And in fact, as Joseph Stiglitz has shown (2016), successful profit-seeking tends to lead to market domination and becoming a monopoly provider.

But is this what humanitarian and development agencies should be seeking to do? From my perspective, they should actually be trying to promote (or stabilise) positive social and economic environments so that they work themselves out of a job: in other words, become irrelevant and disappear. Instead, behaving like profit-seekers means that they prioritise survival, as shown by the recent cover-ups of exploitation and abuse stories in the sector.

So, I would suggest, slavishly following orthodox management training programmes designed for the corporate sector carries many risks. This need to conform, to fit in with how things are done in the monopoly-power-seeking private sector, makes it extremely difficult to really embrace initiatives such as Accountability to Affected Populations, which rely on an inversion of power relationships.

In biological communities a small gene pool creates the risk of inbreeding: generations which lack the genetic diversity to evolve and respond effectively to environmental changes. And yet this is what we seem to be doing with management ideas, creating new generations of managers who do not have the intellectual diversity to respond to the increasing complexity of humanitarian and development realities.

I am not sure that I personally have the imagination or wisdom to come up with new paradigms for management practice. However, what I think we who work in these sectors should at least be doing is to encourage managers to critically reflect on the ideas they read in management texts and learn about in management development courses. Are these ideas really relevant for me? Do they really help me to cope with the variety of my everyday, operational life, trying to manage a refugee flow or establish an educational system in a low income country? Are there better ways we can do things?

At least to make sure we realise that they are just ideas, not established fact.

References
DiMaggio, P.J. & Powell, W.W., 1983. The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields. American Sociological Review, 48(2), pp. 147–160.
Stiglitz, J., 2016. Are markets efficient, or do they tend towards monopoly? The verdict is in. World Economic Forum, https://www.weforum.org/agenda/2016/05/joseph-stiglitz-are-markets-efficient-or-do-they-tend-towards-monopoly-the-verdict-is-in/, accessed 20 April 2018.



Drawing boundaries around training content

Published in Training design
The only face-to-face training I do these days is in so-called 'train the trainer' workshops, where I look to improve participants' skills in designing and delivering training.

The people in these workshops are always experts in their own particular subjects (not training), and this expertise can range from security in high-risk environments to knowledge about drought-resistant seed varieties. The common denominator, as far as training is concerned, is that when asked to deliver a training programme they usually proceed to try and transfer all of their knowledge to the learner.

In my courses I always cover some of the key theories about cognition and learning: Kolb, Vygotsky and, of course, Malcolm Knowles. Knowles introduced Western thinking to the concept of andragogy: adult learning. His initial 1977 paper on adult learning compared pedagogical and andragogical approaches, in the process outlining a number of key principles to follow in adult learning. The one which is often of interest to my participants is about planning: Knowles says that in a pedagogical approach planning is done "primarily by teacher", whereas in an andragogical approach planning is participative.

This always comes as something of a shock to subject matter experts. How can people who know nothing about a subject participate in planning? My answer is to draw people's attention to the idea of learner-centred outcomes: what do we want people to be able to do at the end of a training session? We then spend some time talking about Bloom's taxonomy, observable actions, three-part training objectives and so on.

And this always seems to be a real light-bulb moment in the course. Having followed a learner-centred paradigm in my practice for quite a few decades, I tend to forget how revolutionary an idea this can be.

But it is very powerful. If we think about what the learner's outcomes need to be, then we can draw boundaries around what knowledge and skills need to be transferred. Rather than everything.



70:20:10 - helpful myth or not?

Published in Informal learning
I have just been reading a thought-provoking article by Toby Harris regarding 70:20:10 (and other learning and development myths), and thought that this made a lot of sense.

There is something in the idea of 70:20:10, and it has certainly captured the imagination of many organisations and raised the profile of informal learning. However, as Toby says, following the concept blindly is not helpful.

As I also mentioned in a previous blog, it is just not possible to divide up the ways in which people learn into the categories which 70:20:10 identifies. The original idea for 70:20:10 came from observations regarding leadership in American corporations made by Lombardo and Eichinger in 2000. As far as I have been able to ascertain, this was not peer-reviewed in academic literature, and so must always be a little suspect. There has then been this extrapolation from the leadership context to learning in general, which is also problematic.

What is perhaps more reliable is research carried out by the US Bureau of Labor Statistics (Frazis et al, 1998) which suggested that 80% of what people learn is done informally. Broadly in line with 70:20:10, but this research did state that the distinction between formal and informal is very difficult to define.

Toby also points out the contradiction of organisations implementing 70:20:10; of course, as soon as you institute informal learning it is no longer informal. Merely setting up social learning platforms to facilitate informal learning has also been shown to be very problematic. Again, much peer-reviewed research has shown how difficult it is to set up a sustainable informal learning network (or community of practice), particularly in organisations where the culture of sharing information is limited.

True informal learning starts at an operational level and probably happens largely through large numbers of small conversations. This then needs to filter back up through the organisation so that it can influence the messages coming back down through the formal training process. This is all in line with what complexity theory and the ‘edge of chaos’ concept suggest as a way of meeting the infinite variety of an operational environment.

The challenge, therefore, is for organisational culture to change to allow learning to move up through the levels rather than be primarily a downward process. If 'implementing 70:20:10' can achieve this, then rock on!



Using theory-based approaches to evaluate training

Published in Evaluation
I was recently invited by the British Institute for Learning & Development to contribute something to their blog. I decided to write a piece about using theory-based evaluation methodologies to evaluate training, as an improvement over existing Kirkpatrick-based approaches.

Rather than repeat myself here or try to edit what I have already written, here is the link to the relevant BILD blog.



Using the OECD-DAC criteria for training evaluation

Published in Evaluation
I was recently reading through the specifications for a training evaluation project and was somewhat surprised by the absence of any reference to Kirkpatrick and his famous framework. Instead, the criteria for the training were framed around those used in the development and humanitarian sectors, commonly known as the OECD-DAC criteria (from the Development Assistance Committee of the Organisation for Economic Cooperation and Development). These criteria are relevance, effectiveness, efficiency, impact and sustainability.

Although I am familiar with these criteria from my work in these sectors, it is interesting to reflect on their absence from the training world, where criteria for training effectiveness come from objectives specified at the beginning of the training design process, often structured around the SMART concept. Although training of trainers courses often recommend the use of SMART objectives when specifying training, I have always found the structure somewhat unsuited to training. According to Wikipedia, the first use of SMART is attributed to George T. Doran in 1981, when it was put forward as a way to set goals for management activities. This isn't the place to get into the pros and cons of management by objectives, but while SMART objectives may have been suitable for that purpose, in my opinion they just don't work for training.

So where I have been able to, I have always tried to use Robert Mager's three-part objective structure: performance (the observable verb), conditions (where the performance is carried out) and criteria (the measure of success for the performance). This is much more usable for designing training, but it is still very difficult to use this structure to define overall success criteria for a training programme. In my time I have seen many training programmes designed around objectives (both SMART and Mager-type) where it would be impossible to draw any conclusions about success, because of the complexity of real-world performance and measures of success. Often this is down to training objectives being written in a way which suggests that the aim of the training is to cause major behavioural or organisational changes. The writer of the objectives may realise that other strategies need to be implemented to support the training, but without a comprehensive training needs analysis this may never be explicitly stated. And if the wording of the training objective is not realistic as far as real-world behaviour is concerned, the objective cannot be used as the basis for an evaluation.
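As a small illustration of why the three-part structure is more usable, here is a sketch in Python (the example objective, class and field names are my own invention) which keeps the parts separate, so that an evaluation can interrogate each one on its own rather than unpicking a single SMART-style sentence:

    from dataclasses import dataclass

    @dataclass
    class MagerObjective:
        # Mager's three parts, held as separate fields.
        performance: str  # the observable verb phrase
        conditions: str   # where and under what circumstances it is performed
        criteria: str     # the measure of success

    objective = MagerObjective(
        performance="classify incoming enquiries",
        conditions="using the standard triage checklist, during live shifts",
        criteria="at least 90% of enquiries routed to the correct team",
    )

    # Collapsing the parts back into the familiar sentence form:
    print(f"The learner will {objective.performance}, "
          f"{objective.conditions}, achieving {objective.criteria}.")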

Which brings us back to the criterion problem. I think the five OECD-DAC criteria have much to offer. Here are a few questions relevant to each of the criteria which would be useful in a training evaluation.

Relevance. How good is the match between what the training programme contains and the target audience need, in terms of content and process? Is the programme relevant to the overall vision of the organisation?

Effectiveness. Have the people learnt what they were supposed to have learnt? How well have they been able to transfer this learning to the workplace? What may have obstructed implementation of their learning?

Efficiency. What alternatives would there be for achieving the desired change in behaviour or impact? Could the training have been delivered in a more cost-effective way?

Impact. What changes have there been in behaviour and performance? How might training have contributed to this change?

Sustainability. How sustainable is the training programme (cost, logistics, etc.)? How sustainable is the change in behaviour or performance? Will things go back to normal when people forget the novelty of the training programme?
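To make the same point in code, here is a minimal sketch in Python organising the questions above as a question bank keyed by criterion (the criteria and questions are as discussed above; the structure and variable names are mine), so that every data-collection instrument in an evaluation can be traced back to a criterion:

    # OECD-DAC criteria mapped to example evaluation questions from this post.
    DAC_QUESTIONS = {
        "relevance": [
            "How well does the programme content match the target audience's needs?",
            "Is the programme relevant to the overall vision of the organisation?",
        ],
        "effectiveness": [
            "Have people learnt what they were supposed to learn?",
            "How well has this learning transferred to the workplace?",
        ],
        "efficiency": [
            "Could the training have been delivered in a more cost-effective way?",
        ],
        "impact": [
            "What changes have there been in behaviour and performance?",
            "How might training have contributed to this change?",
        ],
        "sustainability": [
            "How sustainable is the programme (cost, logistics, etc.)?",
            "How sustainable is the change in behaviour or performance?",
        ],
    }

    for criterion, questions in DAC_QUESTIONS.items():
        print(criterion.capitalize())
        for question in questions:
            print(" -", question)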

There are of course many more questions which could be asked under each heading. For me, one of the interesting things about using the OECD-DAC criteria is how much more systemic the evaluation process becomes. It encourages us to think about the history and process of the training, and not just to focus on impact or behaviour or the narrow requirements of a SMART objective. The criteria may have been designed for a different sector, but they have a lot to offer the training world.



Designing better smile sheets: essential reading

Published in Evaluation
I have just been reading a new book by Will Thalheimer called "Smile Sheets", and an excellent read it is too.

If you don't know Will, he specialises in reading academic research about learning and thinking about how it can contribute to training. Training is, of course, an area where there are various strange practices based on mythical facts. One of my favourites is the ‘cone of experience’: the claim that we remember 10% of what we read, 20% of what we hear and so on. Claims such as this are presented in training and, because they seem to make some sense, get repeated and slowly become fact. This particular topic is one that Will has discussed in the past, and I can recommend a visit to his blog to learn more (www.willatworklearning.com/).

Anyway, in his latest book he focuses on the smile sheet or, to give it its polite name, the reaction questionnaire (à la Kirkpatrick). Although this is the bedrock of most training evaluation activities, the book discusses in some detail the lack of research data to prove that it is meaningful in any way. This is down to a number of factors. One is that the types of questions included in reaction questionnaires are often poorly constructed from a statistical point of view, and force the learner into giving a positive response. Another is that surveys conducted in the training environment, while the training is still under way, are heavily influenced by the experience of being there and allow no time for reflection on what the training has been about. Finally, there is very little evidence to show that merely reacting positively to a training activity means that there will be learning, a fundamental assumption of the Kirkpatrick framework, which, of course, underlies much thinking in training evaluation.

Will then goes on to talk about what learning actually means and provides a practical guide to how to design 'smile sheets' which can actually produce meaningful and useful data. It is a most entertaining and illuminating read, and I certainly wish that I had read it before sending my own manuscripts to my publisher!

If you do get involved in any way with training evaluation, buy yourself a copy. At $25, it's well worth it.


