CSH in training evaluation - Bryan Hopkins

Using Critical Systems Heuristics in training evaluation

The key CSH questions (as generically expressed) and, for each, the corresponding questions to ask in the field during a training evaluation:
1. What is/ought to be the purpose of what people in the situation of interest are doing?
  • What did the overall programme of learning-related and organisational interventions seek to achieve?
  • With hindsight, how relevant was this intention?
  • What should the overall intention have been?
  • How do training objectives relate to the overall criteria set for the change programme?
2. Who is/ought to be the intended beneficiary of what the people are doing?
  • Who took part in the training?
  • Who indirectly benefited from the training?
  • Is there anyone who did not take part in the training but who perhaps should have?
3. What is/ought to be the measure of success of what people are doing?
  • What are the programme's learning objectives?
  • Are these objectives stated in observable, behavioural terms?
  • How do these objectives relate to the overall criteria set for the change programme?
  • What has been done to see if participants have achieved these criteria?
  • Are there any other ways in which the training programme could be seen as having beneficial (unintended) consequences?
  • Are there any ways in which the process of seeing if participants have achieved the criteria could be improved?
4. What conditions of success are/ought to be under the control of the people involved?
  • What resources have been needed for implementing the training programme (time, equipment, accommodation, etc.)?
  • How well have these resources been managed?
  • What could have been done better?
5. Who is/ought to be in control of the conditions of success of what people are doing?
  • How has the training programme been designed, managed and implemented?
  • What role have the stakeholders (managers, team members, trainers) played in making decisions about implementing the training programme?
  • Have there been any problems or difficulties in the implementation?
  • If so, what have these been and why have they arisen?
6. What conditions of success are/ought to be outside the control of those who make decisions about what people are doing?
  • How were the learning objectives for the training programme identified?
  • What role did anybody else play in identifying or agreeing these objectives?
  • What factors influenced the implementation of the training? How were these determined?
  • What role should anyone else have played?
7. What are/ought to be relevant knowledge and skills for what people are doing?
  • What was included within the training programme?
  • Was there anything not included within the training programme but which should have been?
8. Who is/ought to be providing knowledge and skills related to what people are doing?
  • Who decided what the content for the training programme would be?
  • Was there anyone with relevant knowledge who was not involved in the design process?
9. What are/ought to be regarded as assurances that what people are doing is successful?
  • What criteria were used to select the trainer who designed and delivered the training programme?
  • How well suited did they turn out to be for delivering the programme?
  • What could have been done better in procuring this particular skill?
10. What are/ought to be the opportunities for the interests of those negatively affected by what people are doing?
  • What opportunities have there been for people affected by the training (client government counterparts, end users) to provide feedback on the training programme?
  • What opportunities should there have been?
11. Who is/ought to be representing the interests of those negatively affected by but not involved with what people are doing?
  • Which people (if any) from the group affected by the training have been consulted in any way about the design or delivery?
  • Which people should have been consulted?
12. What space is/ought to be available for reconciling differing worldviews regarding what people are doing, among those involved and affected?
  • What opportunities have been provided for consulting with the people affected by the training on its design and delivery?
  • What opportunities should have been provided?
Critical Systems Heuristics (CSH) is a systems thinking tool developed within what is sometimes known as the 'emancipatory' or 'critical' realm of the discipline. As such, it places considerable emphasis on exploring the power relationships which influence the system of interest. CSH as a tool was formalised by the Swiss academic Werner Ulrich, drawing extensively on the work of C. West Churchman. The tool proposes that the behaviour of any system, including a system for training people, is shaped by factors that can be examined from four perspectives:
  • Motivation, who gets what from the system (trainees, organisation, trainer, customers, etc.).
  • Control, who controls how the system operates (managers, trainers, trainees, etc.).
  • Knowledge, what the system contains and how this is decided (learner or trainer, for example).
  • Legitimacy, who is affected by the system but excluded from it.

For each of these perspectives, we need to think about what the perspective means, who is involved, and what the key issues may be. This 4 × 3 structure is usually represented in a matrix for clarity.

The 12 questions in the matrix are then each asked twice: first, what is the answer; then, what ought the answer to be. This step is crucial, as it exposes the power dynamics influencing how the training has been designed and delivered.

This may seem quite a complicated process, but the table above shows how the 12 basic questions can be used in a training evaluation.
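To make the mechanics concrete, here is a minimal sketch of the matrix and the two rounds of questioning as they might be expressed in code. This is purely illustrative and not part of CSH itself: the language (Python) and the paraphrased question wording are my own, following the table above.

    # A minimal sketch of the 4 x 3 CSH matrix as a data structure. Each
    # perspective maps to its three boundary questions, written as templates
    # so that every question can be asked in 'is' mode and again in
    # 'ought to be' mode. Wording is paraphrased from the table above.

    CSH_MATRIX = {
        "Motivation": [
            ("What {m} the purpose of what people in the situation of interest are doing?", "is"),
            ("Who {m} the intended beneficiary of what the people are doing?", "is"),
            ("What {m} the measure of success of what people are doing?", "is"),
        ],
        "Control": [
            ("What conditions of success {m} under the control of the people involved?", "are"),
            ("Who {m} in control of the conditions of success of what people are doing?", "is"),
            ("What conditions of success {m} outside the control of those who make decisions?", "are"),
        ],
        "Knowledge": [
            ("What {m} relevant knowledge and skills for what people are doing?", "are"),
            ("Who {m} providing knowledge and skills related to what people are doing?", "is"),
            ("What {m} regarded as assurances that what people are doing is successful?", "are"),
        ],
        "Legitimacy": [
            ("What {m} the opportunities for the interests of those negatively affected?", "are"),
            ("Who {m} representing the interests of those negatively affected but not involved?", "is"),
            ("What space {m} available for reconciling differing worldviews?", "is"),
        ],
    }

    def ask_twice():
        """Yield every boundary question, first in 'is' mode, then in 'ought' mode."""
        for perspective, questions in CSH_MATRIX.items():
            for template, is_form in questions:
                yield perspective, template.format(m=is_form)        # what IS the case
                yield perspective, template.format(m="ought to be")  # what OUGHT to be the case

    if __name__ == "__main__":
        for perspective, question in ask_twice():
            print(f"[{perspective}] {question}")

Running this prints all 24 prompts grouped by perspective, which is essentially the checklist an evaluator works through before translating each prompt into context-specific field questions like those in the table.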

Why is this useful?
Adopting this critical systems thinking approach can help generate a much deeper evaluation of a training programme. The standard Kirkpatrick-based approach is defined very much within the boundaries of the organisation (changes in employee behaviour, impact on performance), whereas this critical approach helps us to look more broadly.

Here are some examples.
  • With the benefit of hindsight, were the original design objectives for the programme appropriate? (Motivation)
  • Has the programme been implemented effectively? (Control)
  • Was the knowledge content of the programme appropriate? (Knowledge)
  • Does the evaluation gather opinions from people who were affected by the training (customers, for example)? (Legitimacy)

Asking these questions has been very useful in evaluations that I have personally conducted. For example:
  • When evaluating a training programme aimed at helping improve the skills of people involved in coordinating voluntary sector organisations, I interviewed people from these organisations (outside the conventional boundary of an evaluation).
  • When evaluating a management training programme, I questioned the validity and relevance of the set of management skills incorporated within the programme.
You will find further information about this and other systems thinking methodologies in my book "Learning and Performance: A Systemic Model" (2017), published by Routledge.
 