Following on from my previous post about taking an evaluation approach to selecting the best set of classroom tools, here is another activity that will help you get traction. In reality it probably occurs FIRST, and it is a strategy to better understand teachers’ needs. Notice that I’m not talking about new technologies here – that comes after the session.
It’s a way to set up a piece of technology that you introduce later – disarming those who enter the session with a preconceived view. Draw your bow; provoke thought around current practice.
Provide these as a set of questions on paper.
Ask teachers to read them and spend 5 minutes answering as individuals. (This avoids immediate group bias and getting side-tracked into conversations they’d rather have.)
1. Think about an evaluation of work that you have recently carried out, or are planning to do.
2. Select a pathway for the evaluation to determine its purpose:
a) because you wanted to learn about your own practice
b) because you needed to be accountable.
3. Highlight the characteristics that best describe the evaluation
* Evaluators include staff, students and other stakeholders (parents, executive)
* Examples used in the evaluation are randomly selected to represent the whole
* Evaluation has an emphasis on finding out reasons for success and failure
* Evaluators are independent perhaps external to your classroom
* The aim of evaluation is to improve future activities
* Used a mix of data – qualitative and quantitative
* The emphasis of the evaluation was on qualitative data
* The evaluation was carried out at the end of the project cycle
* The examples are selected because they illustrate a point (what you were looking for)
* The examples are selected because of a potential to transfer the wins and losses to other things
* The aim was to compare current and past activities
* The emphasis is on how successful/unsuccessful the project has been
* The evaluation is carried out during the project cycle – as part of the planning cycle
* The evaluation data is collected once, at the end of the project cycle.
4. Pair with someone and decide which of the above characteristics best describe evaluations carried out for the purposes of accountability, and which best describe evaluations for learning. Draw two columns (accountability and learning). Allow a maximum of TWO items to be placed in BOTH columns (you will have some who hedge their bets).
5. Are the characteristics that describe your evaluation all in the same column, or divided between them? Ask for feedback and reasons (this allows people to talk/vent).
If we are interested in introducing technological interventions that benefit a child’s learning, then we are interested in the approach described in the learning column. There is crossover, but that should not be seen as a conflict.
We should be emphasising the learning column’s potential – and using these characteristics as criteria when looking at technology.
Now let’s gun down some of the current practice.
Form people into groups of about 6. Ask them to take their tables and start to order the characteristics from most to least important. Get them to draw up a sheet and stick it on the wall. This allows everyone to see how the cohort sees evaluation itself.
At this point – you should be happy – you have just found out a lot about your environment; what they see as important and will have a clear idea about who is leading your table conversations. Observation here is key – figure out who is most likely to support you if you invest further time.
Now ask them to do something they didn’t see on the agenda.
Beside each of the items on the sheet, ask them directly: “which ICT is best used to help you do that?”
For example: which ICT do you use because it illustrates a point? An answer might be scanning student work samples.
Now you have your research, you can go away and find some tools that will allow them to do what they say they are doing – perhaps making better use of existing tools, perhaps something entirely new. In the above example, you might introduce a digital camera as a way of imaging student work during the learning, not just scanning work at the end.
So the action step for you, as the educational developer: review and suggest tools that present the characteristics most likely to deliver the things everyone put in the learning column.
Will they stay the same, or change? What can you do with the available time and resources? What are the priorities for learning?
The great thing here is that you are taking an evidence-based approach … you have something ‘real’ to talk about.
Now track down people and issues one by one – start to work directly on their practice. Avoid large groups – and make sure you schedule time(s) with staff. Perhaps you can tie this to individual performance reviews; time needed to evidence professional development; or some other organisational requirement. The more you make it appear to be part of their job – the more likely you are to see them adopt or at least consider what you are suggesting.
Make sure you write up your workshop and give it to everyone who attended for feedback and comment. This is really important. More important still: write a further report for the executive making recommendations based on your new evidence. Getting the backing of leaders is critical. If they don’t take the time to read it and discuss it with you, then think long and hard about beating your head against the wall in the future. Set it up right and you become empowered – don’t allow your efforts to be a time-filler or a nice-to-know. Make it matter to everyone.