Using the OECD-DAC criteria for training evaluation

Published in Evaluation
I was recently reading through the specifications for a training evaluation project and was somewhat surprised by the absence of any reference to Kirkpatrick and his famous framework. Instead, the criteria for evaluating the training were framed around those used in the development and humanitarian sectors, commonly known as the OECD-DAC criteria (from the Development Assistance Committee of the Organisation for Economic Co-operation and Development). These criteria are relevance, effectiveness, efficiency, impact and sustainability.

Although I am familiar with these criteria from my work in those sectors, it is interesting to reflect on their absence from the training world, where criteria for training effectiveness usually come from objectives specified at the beginning of the training design process, often structured around the SMART concept. Training-of-trainers courses frequently recommend using SMART objectives when specifying training, but I have always found the structure somewhat unsuited to the task. According to Wikipedia, the first use of SMART is attributed to George T. Doran in 1981, who put it forward as a way of setting goals for management activities. This isn't the place to get into the pros and cons of management by objectives, but while SMART objectives may be well suited to that purpose, in my opinion they just don't work for training.

So where I have been able to, I have always tried to use Robert Mager's three-part objective structure: performance (the observable verb), conditions (the conditions under which the performance is carried out) and criteria (the measure of success for the performance). This is much more usable for designing training, but it is still very difficult to use for defining overall success criteria for a training programme. In my time I've seen many training programmes designed around objectives (both SMART and Mager-type) where it would be impossible to draw any conclusions about success, because of the complexity of real-world performance and how it is measured. Often this is because the training objectives are written in a way that suggests the aim of the training alone is to bring about major behavioural or organisational changes. The writer of the objectives may realise that other strategies need to be implemented to support the training, but this may never be stated explicitly in a comprehensive training needs analysis. And if the wording of a training objective is not realistic as far as real-world behaviour is concerned, that objective cannot be used as the basis for an evaluation.

Which brings us back to the criterion problem. I think the five OECD-DAC criteria have much to offer. Here are a few questions relevant to each of the criteria which would be useful in a training evaluation.

Relevance. How good is the match between what the training programme contains and what the target audience needs, in terms of both content and process? Is the programme relevant to the overall vision of the organisation?

Effectiveness. Have the people learnt what they were supposed to have learnt? How well have they been able to transfer this learning to the workplace? What may have obstructed implementation of their learning?

Efficiency. What alternatives would there be for achieving the desired change in behaviour or impact? Could the training have been delivered in a more cost-effective way?

Impact. What changes have there been in behaviour and performance? How might training have contributed to this change?

Sustainability. How sustainable is the training programme itself (cost, logistics, etc.)? How sustainable is the change in behaviour or performance? Will things go back to normal once the novelty of the training programme wears off?

There are of course many more questions which could be asked under each heading. For me, one of the interesting things about using the OECD-DAC criteria is how much more systemic the evaluation process becomes. They encourage us to think about the history and process of the training, and not just to focus on impact, or behaviour, or the narrow requirements of a SMART objective. The criteria may have been designed for a different sector, but they have a lot to offer the training world.


