Given the current economic pressures, demonstrating how we make a difference as teachers, lecturers or learning technologists is more important than ever. Throughout further and higher education at the moment there is a relentless drive to improve results and raise achievement. Coupled with this, in the UK, the rise in tuition fees has raised expectations about what learners can expect from their learning experience. If we believe that we make a difference, we need to be able to demonstrate how we do it.

We need to show how we evaluate what we do. Kirkpatrick and Kirkpatrick (2006) provide a good starting point for how to approach an evaluation project, suggesting four levels of evaluation. Originally developed for the evaluation of staff training, Kirkpatrick’s model can be adapted to the evaluation of most educational projects:

1. What are participants’ (students’/staff’s) immediate reactions to the event? These are often obtained with a simple ‘tick box’ questionnaire.
2. What have participants (students/staff) learned from the event? This can perhaps also be captured with the end-of-event questionnaire.
3. How, and how far, have participants (students/staff) later used what they learned in the event?
4. The most important question: what has been the impact of the changed practice on student/staff learning?

Baume (2010) paraphrases Kirkpatrick and Kirkpatrick (2006) in much simpler terms:

  • Did people like it?
  • What have they learnt from it?
  • Have they applied what they learnt to their practice?
  • Have the results improved?                                                           

The first three of these questions are relatively easy to address. It is possible to ask the participants whether they liked the activity or training they were asked to undertake; if they enjoyed it, they are more likely to engage with it and use it. The second question can likewise be evaluated after the event by asking the participants directly. The third can also be evaluated by asking the participants, perhaps using a questionnaire or interviews, or even by monitoring changes in their working practices.

The final question is arguably the most important, but in an educational context it is probably the hardest to answer. As Baume (2010) points out, how can we be sure that any improvements or increases can be attributed to the changes we are evaluating? Our students’ results may have risen, which we can measure by analysing and comparing present results with those of previous years, but can these changes be attributed to, say, the increased use of the VLE?
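To make the attribution problem concrete, here is a minimal sketch, in Python, of the kind of year-on-year comparison described above. The marks and cohort labels are invented for illustration, and Welch’s t-test (via SciPy) is just one way of comparing cohort means; even a ‘significant’ difference here tells us nothing about whether the VLE caused it.

    # A minimal, hypothetical sketch of comparing module marks across two cohorts.
    # The marks are invented for illustration; real data would come from the
    # institution's student records system.
    from statistics import mean
    from scipy import stats

    marks_before = [52, 58, 61, 47, 55, 63, 49, 57, 60, 54]  # cohort before the VLE push
    marks_after = [56, 62, 65, 50, 59, 68, 53, 61, 64, 58]   # cohort after the VLE push

    # Welch's t-test: do the mean marks of the two cohorts differ?
    t_stat, p_value = stats.ttest_ind(marks_after, marks_before, equal_var=False)

    print(f"mean before: {mean(marks_before):.1f}, mean after: {mean(marks_after):.1f}")
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

    # Even a small p-value only tells us the cohorts differ; it says nothing about
    # why. Increased VLE use, a different exam paper, changed entry grades or new
    # teaching staff could all explain the gap.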

This may be a very difficult question to answer when there are so many variables, but it is something we have to try to show. The answer might not come as quantifiable data such as improved exam results or increased retention rates; it might simply be an ‘improved educational experience’ that can only be captured by recording the story of the student’s journey as they acquire the knowledge or skills they are learning.

References.

Baume, C., Martin, P. and Yorke, M. (2002) Managing Educational Development Projects: Effective Management for Maximum Impact. London: Kogan Page.

Baume, D. (2010) ‘How do academic developers make an impact – and how do they know they have done?’ SEDA Conference Presentation 2010.

Kirkpatrick, D. L. and Kirkpatrick, J. D. (2006) Evaluating Training Programs (3rd ed.). San Francisco, CA: Berrett-Koehler Publishers.

Useful Links:

http://www.kirkpatrickpartners.com/

http://astd2007.astd.org/PDFs/Handouts%20for%20Web/Handouts%20Secured%20for%20Web%205-15%20thru%205-16/TU101.pdf