Lessons from preschool: gamification
‘Gamification’ – making a business activity into a sort of game – is very popular as a way to make training more engaging, as well as to test business planning and readiness and raise awareness of issues.
In our business – information security – we use ‘tabletop
scenario’ games: playing out, with a team, the response to some kind of
fictional but realistic information security incident. Everyone agrees they are
a great way to learn, to test and practise responses: but nobody really has any
systematic way to measure their effectiveness – to assess their actual business
value.
In my semi-retirement I have continued my engagement with
universities by supporting student projects in our field. This year I’m pleased
to be working with a student on a
project I proposed for his MSc Cyber Security dissertation. The idea is to come
up with some systematic way to measure the business value of tabletop cyber
scenario exercises.
The problem with play is that everyone enjoys it but it
isn’t easy to measure its value.
A bit of background. Scenario games grew out of wargaming. Developed in Prussia in the early nineteenth century, wargames were detailed battlefield scenarios that army groups would work through: generally floor-based rather than tabletop, with battlefield maps and game pieces to represent troops and equipment, but similar in concept to today’s business games. The aim, then as
now, was to provide a ‘safe place to fail’ – where teams could face challenges
and work out better responses, practise coping under some stress, and build
team communication and cohesion. Comprehensive rule books were developed for
each scenario so a moderator could look up the result against each critical
decision point and give the team the bad news. The problem with such games
became apparent very quickly though: they are very costly and time-consuming to set up, interrupted by pauses to consult the rule book, and so rigid that
the game can’t easily be re-used with a slightly different scenario. So in the 1870s
the Prussians adopted ‘free wargames’: scenarios clearly defined but simple,
and instead of the rule book a practitioner with recent practical experience
acting as moderator, deciding the outcome themselves at decision points rather
than consulting the book. This allowed the games to be set up with less fuss
and cost, run more readily, and adapted quickly to modified scenarios. There’s
a very good article by John Curry if you’re interested in the historical
background.
What we’re talking about aren’t war games (we don’t like the
word ‘war’ in commerce so much) but what John Curry calls Professional Wargames
– similar in style but aimed at specific business or government fields, not necessarily military ones. These are a more modern idea, but they face a similar evolution: the over-planned scenarios with big rule books proved unwieldy to moderate and ponderous to play, so now we use experienced people to design and moderate simpler, cleaner scenarios.
The issue with these simpler, more adaptive games is that
it is hard to come up with a systematic evaluation of how effective they are.
My initial bias in proposing this project – mathematical as I am – was to aim for a system of numeric scoring, perhaps based on subjective interviews or questionnaires before and after. A bit like the MSc module assessment criteria we used to formulate when I was leading MSc students at university as a guest lecturer: listing learning aims, how they would be assessed, and what specific test would be applied. The trouble is, play isn’t like that.
And designing a module for an MSc course is an intensive, long-term process, with external moderation, checking, and validation: we’re
back to the big rule book and an unwieldy system of development and
implementation. To preserve the freedom of the free wargame we need a quicker,
more adaptable method.
But because of my odd career direction since retirement – my children took over the business, so I went part-time into the other family business as a preschool teacher – I get quite a lot of exposure to the idea of play as learning: and, crucially, to ways of measuring its effectiveness.
In England, the idea of learning through play was popularised by Samuel Wilderspin (a contemporary of the now better-known Froebel). His idea was that play is an essential way to learn, not just intellectually but also emotionally:

“it is through the freedom of play, with their peers, unmoderated by adult intervention, that children learn to develop intellectually and socially and to build relationships”
That idea applies just as well to adults learning, developing skills, and working in a team: and in incident response the
emotional aspect is just as important because these incidents can be very
stressful.
Wilderspin’s ideas are repeatedly opposed by those who think
we should all sit still and shut up while someone talks at us, but they survive
in preschool: perhaps because getting 28 three-year-olds to sit still and shut
up isn’t as easy as some people think. So in preschool we teach mainly through
play. It isn’t unstructured though, because we have clear learning and
development objectives (mandated by government, no less…) and, importantly and
relevant here, a duty to observe and report on the progress towards those
learning goals. So play areas may be set out with ideas in mind as to what sort
of learning they might promote: the sandpit perhaps having weighing scales and
different-sized cups, with the idea that children might start comparing
heaviness and volume. An adult practitioner might prompt such enquiry by joining in – “I wonder if this will fit in that…” – in effect setting a scenario and inviting the children to take part.
This is very similar to the tabletop scenario – though in
preschool we have firm guidelines, clear learning goals, regular assessment of
our practice and insight through being observed ourselves or undergoing tests
and training, and of course regulation through Ofsted. In information security
the learning goals and development stages are much less clearly defined: and though we can find templates in, for example, ISO standards or the Cyber Security Body of Knowledge, it is apparent that tabletop scenarios rarely, if ever, reference any such formal structure as learning goals.
The challenge, at preschool and in tabletop scenarios, is
that the play develops in unexpected ways: and in fact is supposed to, as this
isn’t a paper exercise in walking through a prepared response but a reasonably realistic and deliberately somewhat stressful enactment. James (my son and now
MD) likens a paper exercise to a fire drill – an orderly working through of a
pre-planned process – and a tabletop game to a fire drill where someone sets
off a smoke bomb in the stairwell: challenging the team with a perhaps unplanned-for event, and placing the players under a certain level of stress. So we can’t just tick off each point of procedure as it is followed; we have to adapt to what the players do in reaction to unanticipated events.
How, then, if we don’t know how the play will progress, can
we assign clear learning goals and reference those to some agreed systematic
framework of development stages?
At preschool this is called ‘In The Moment Planning’: the
teacher (or game moderator) spots ‘teachable moments’ where something of value
may be learnt, and may react by prompting along that direction so that a
learning aim is achieved. This isn’t easy: in fact it is really hard. The
teacher (or moderator) must be very well versed in the learning goals of the
framework, and be able to fit them into the given activity as opportunities
occur. Not only that, but they must also record whether and how the goal was attained: what their intention was, how their implementation went, and how far the learning goal was attained by the participants. That’s a lot of knowledge being leveraged, and a lot of focused attention being paid to what is going on and why. Also, the teacher isn’t just sat watching in case something random comes up: like the scenario moderator, they have developed a clear plan, with identified possible learning goals and probable reactions to decision points – and possibly this is all in their head, because they have the experience to do that. So the outcomes aren’t just random: they are to some extent pre-planned, though not rigidly so, and are cross-referenced to learning and development goals.
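To make that concrete, here is a minimal sketch, in Python, of what a single ‘in the moment’ observation record might look like. The field names and the 0–3 attainment scale are my own illustration, not taken from any existing Early Years platform or framework:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Observation:
    """One 'in the moment' observation, captured during or just after play.

    Mirrors the three things the teacher (or moderator) must record:
    intention, implementation, and attainment.
    """
    goal_id: str          # reference into an agreed learning/development framework
    intention: str        # what the teacher or moderator set out to prompt
    implementation: str   # how the prompt actually played out
    attainment: int       # illustrative scale, 0-3: not evident / emerging / secure / exceeded
    participants: list[str] = field(default_factory=list)
    recorded_at: datetime = field(default_factory=datetime.now)
```

The only important design point is that each observation carries an explicit reference (goal_id) back into the framework, so outcomes can later be aggregated rather than remaining anecdotal.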
One more complication: children of different ages are expected to attain different learning and development stages. Likewise, in a commercial game an information security specialist might be expected to attain a different level and kind of awareness from a non-specialist; an executive might be expected to have a different set of motivations and actions from a staff member, because of their direct accountability to shareholders; and so on. In Early Years this is addressed by the learning and development frameworks being staged – referenced to age and prior development – and such a structuring of levels might be equally valid for adults in commerce and government.
At preschool, also, we aren’t just left alone to do this:
Early Years platforms typically offer online observation recording that can be
linked to any one or more of different structured learning and developmental
stage schemes – the government’s is ‘Development Matters’, the sector’s own is ‘Birth to 5 Matters’, and there are several others that are widely used. Such
platforms are usually accessed by tablet so that observations can be recorded
‘in the moment’ or shortly afterwards. Structured spontaneity.
This, I think, offers a model for measuring the
effectiveness of tabletop scenarios:
- A clear framework of desired development and learning stages
- Learning goals for the scenario, referenced to that framework
- ‘In the moment’ observations
- Clear recording of outcomes, referenced to the learning goals and framework
- A quick, simple tool to record and categorise observations
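As an illustration of what that quick tool might do – again purely a sketch, building on the hypothetical Observation record above and using invented goal identifiers – even something as simple as grouping observations by framework goal makes gaps visible:

```python
from collections import defaultdict

# Hypothetical framework entries for a cyber tabletop exercise (illustrative only)
FRAMEWORK = {
    "IR-1": "Recognises an incident and escalates promptly",
    "IR-2": "Communicates clearly under stress",
    "IR-3": "Adapts the response when an unplanned event occurs",
}

def summarise(observations):
    """Group attainment scores by framework goal, flagging goals never observed."""
    by_goal = defaultdict(list)
    for obs in observations:
        by_goal[obs.goal_id].append(obs.attainment)
    for goal_id, description in FRAMEWORK.items():
        scores = by_goal.get(goal_id, [])
        if scores:
            mean = sum(scores) / len(scores)
            print(f"{goal_id} ({description}): {len(scores)} observation(s), "
                  f"mean attainment {mean:.1f}/3")
        else:
            print(f"{goal_id} ({description}): never observed - a gap in the exercise")
```

Nothing here is sophisticated; as with the Early Years platforms, the value lies in the discipline of referencing every observation back to an agreed framework, quickly enough to keep up with the play.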
That at least is where my thinking is at the moment – but a
student project belongs to the student, not to me, so I am interested to see
where that leads.