Easy as ABC

Image from Flickr, courtesy of dklimke

I read a brilliant piece the other day by Leo Babauta. I shared it on Twitter and it was widely retweeted. He makes the point, quite succinctly, that the measurements we take focus on parts of what we are doing and don't necessarily relate to the wider context.

More importantly, he suggests relaxing about measurement and calls it untracking.  I’m going to start using that word as I think it describes what I want to do in L&D.

I've been talking about this obsession with measurement for a while now, and Paul Webster nicely summarised my words as 'we spend too much time measuring the Tiny rather than standing back and looking at the Big'.

Leo put this neatly in terms of parenting:

do we measure all the activities we do as parents, so we are motivated to improve and keep doing it?

Let's apply this to the role of L&D. Are we motivated by the desire to improve our practice and that of our organisations, to see if there are better ways of doing things, or by the desire to create metrics that justify our busyness?

I ran a webinar for Learning Pool this week about our learner mindset presentation and, yet again, the same question came up: how do we measure informal learning?

The answer's the same… I don't think we can. More importantly, I don't think we should.

It's not our role to try to capture and categorise every snippet of new knowledge and behaviour that an individual uses. This is why I am very scared about how the new Tin Can API will be sold to businesses. I fear it will be a tool that measures everything but understands nothing about the value of its content (like most LMSs, I hear some of you say); a sketch of the kind of statement it records follows the list below. I believe the manager's role is to measure performance in the workplace, yet there seems to be a desire to retain this measurement within L&D to 'prove' it was our work that created the difference. What this means is:

  1. We absolve managers from taking responsibility for measuring their staff’s development
  2. We create complex metrics
  3. We create the learning objectives for the performance support
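To make concrete what that kind of measurement looks like: Tin Can records activity as actor-verb-object statements sent to a Learning Record Store (LRS). Below is a minimal sketch of one such statement; the learner, email address and activity URI are hypothetical, though the verb URI is one of the generic ones published by ADL.

```python
import json

# A minimal sketch of a Tin Can (xAPI) statement: actor-verb-object.
# The actor and activity are hypothetical; "experienced" is one of the
# generic verbs published by ADL.
statement = {
    "actor": {
        "name": "A. Learner",
        "mbox": "mailto:a.learner@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/experienced",
        "display": {"en-US": "experienced"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.com/articles/easy-as-abc",
        "definition": {"name": {"en-US": "Easy as ABC (blog post)"}},
    },
}

# In practice a statement like this would be POSTed to an LRS; printing
# it is enough to show how little the record itself says about value.
print(json.dumps(statement, indent=2))
```

Note how faithfully the statement records that something happened, and how little it says about whether anything was learnt.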

Why do we create learning objectives? Surely these should be created by the learner, for us to build support around? Or do learners create objectives to 'fit in' with our provision because we are the experts, after all?

Remember that as we come up to Christmas: make sure you establish learning objectives for all the toys you buy young children. I mean, how else will they learn how to use them? And more importantly, without those learning objectives, how will you know how much they've learnt, the depth of that learning, and whether you've achieved a decent ROI?

Comments, as always, very welcome.

18 thoughts on “Easy as ABC”

  1. Quite, Andrew. The notion of having to measure ROI on the investment in our children’s play is absurd. We know learning will emerge. There’s a different attitude in the workplace: an obsession to control, to know, to be certain, and to be able to measure everything. I read a Russell Ackoff quote recently, which I’ll have to paraphrase: “If you have to measure change, it hasn’t happened.”

  2. I’ve been facilitating a weekly 30-minute mindfulness session since January. Participants don’t have to book and there are no objectives except those the learners bring. There’s no planned end date to these sessions, but we’re getting new people every month. Taking learning for a walk, I call it. One of my most rewarding gigs in 20 years of training.

  3. I get what you’re saying, but I disagree with much of the message here, in part because you’re focusing on one use case of the Tin Can API while ignoring others.

    The article and many of the comments seem to be making the case that measurement has little or no value. I know this isn’t exactly what you meant, as you’re making this argument in the context of informal learning, and I agree with that point in most contexts, depending on WHO you’re talking about. For informal activities and experiences, measurement can have very little value to the organization, but I do think this level of measurement can be valuable to the individual. Feedback loops *can be* tremendously helpful to individuals in informal / individual endeavors. But not if measurement itself is the goal.

    I’d argue that there are two value propositions / opportunities that an information system could provide where proficiency progression and performance are concerned. One of these value propositions is for the individual, the other is for the organization. The latter benefit is likely to be greater for larger organizations. My organization is military (coincidentally, this standard is designed first for military organizations). We conduct periodic occupational analyses to assess whether we’ve designed and trained large vocational groups optimally. We generally do this by pulling folks in and asking them what they do, and by reviewing LIII survey data as well as other data sources from disconnected systems. This invariably produces some positive outcomes, but it’s fraught with bias. If we were able to capture experiences as a natural part of the systems we have in place to support work, we would be more responsive to the needs of a VERY large organization and might actually be able to validate and employ other employee support systems. The potential (it’s not proven) of the Experience API to improve the flow of information and close feedback loops within very large vocational groups is, in my opinion, seriously huge. Could it fail? Sure. But I wouldn’t blame a language or a technology for that failure.

    Depending on how you look at it, I think these benefits live in the formal arena (individual, team, and organization), but there are still *some* opportunities to provide value for the individual in informal contexts at larger aggregations / milestones of accomplishment.

    As for benefit to the individual: tracking accomplishment of goals and, in some cases, sharing accomplishments can be very helpful. If I’m learning something and want to put myself on a track that increases the probability that the pace and activities align with my goals, having a feedback loop that continually maps my progress can be really useful. Not in all cases, and probably not at granular levels (Steve read this, Steve tweeted that); I’ll agree that at some levels, records become noise (read, tweeted). But at the milestone level of aggregation, tracking progress can be critical to the acquisition of some skills. Organization isn’t always evil :)

    Should informal activity recording be overt and onerous in every case (or, arguably, at all)? Hell, no. Does my organization really care about my informal stream? In most cases, definitely not. Should there be an option for me to turn on a background capture for my own use if the technology supports it? Hell, yes. Should I be able to aggregate experiences to prove equivalency or superiority when compared with a formal education experience? I think so. Do opportunities currently exist to turn on this background capture and communicate it in a meaningful way? No. Does the Tin Can API create this opportunity? Yes, I think it does.

    Informal experience tracking is only one of many potential use cases for the language / technology standard. I’d be hesitant to tar the entire standard with a broad-brush “that’s a waste of time” without acknowledging how it could (not will, could) help to reduce friction in the formal space and provide other value to the organization in some contexts. Tin Can isn’t being sold exclusively as the panacea for informal learning anywhere I’ve seen.

    This is a piece of technology. It’s just a language built to enable other things to happen. It provides an opportunity to accomplish things that we weren’t able to do before. The standard provides the opportunity to close feedback loops and give folks adaptive guidance that, when combined with a balanced human system, could be great.

    Will it be used in counterproductive ways? Without a doubt. Will it also enable some amazing stuff? Yeah, pretty sure it will. I’m willing to give the technology standard the benefit of the doubt.

  4. Thanks for your comment, Steve.

    This piece isn’t specifically about Tin Can; it’s about the practice of measurement by L&D, using metrics that L&D creates, to sell what L&D does, with little relation to business performance.

    It’s about L&D taking charge and wanting to control all learning experiences. There will be some who see the Tin Can API as a way of measuring in even more detail, providing more detailed metrics that do not contribute to the whole.

    It is about untracking: focusing on the big rather than the tiny, and not being concerned with records of activities that may (or may not) be learning and that, without reflection, are information rather than knowledge.

    My fear is that it will be promoted to appeal to the command-and-control mentality of organisations. It will be brought in to help manage, but will end up being used to direct. I can think of managers I’ve worked with in the past who would ask their staff to record activity for recording’s sake.

    Adoption on those terms will create the kind of resentment some LMSs now attract, where the system drives the activity rather than supporting it.

    It’s good to have the debate and I appreciate the time you’ve put in to comment.

  5. I can dig that sentiment, Andrew.

    I can only hope that better information flows mean less desire or need for control. I have high hopes for Tin Can in my org. It’s only one piece of a larger system of supports that I (we) think could have a big impact on proficiency development and awareness at the individual, unit, and organizational levels. I’m in L&D (sort of, if you lump Performance Tech into L&D) and it’s really not about control for us. It’s about responsiveness, providing services where folks need them, and getting the heck out of the way when they don’t.

    Ultimately, it’s about getting the mission(s) done. In my org., we’re driven by readiness that revolves around the mission. We have pretty good baselines for readiness and pretty good indicators / systems for charting a path to those levels of readiness. These systems only track base-level competency, though, and don’t service or support progression through higher levels of proficiency. As a former enlisted guy, this is something that always bugged me. There’s a chance that this baseline could be extended to paint a picture of more than just competency, using consistent performance metrics.

    I have a blog post coming up that should make the context of our potential use clearer. This includes:

    – leveraging the standard to capture distributed experiences onboard deployed units (we do LOTS of structured OJT that requires validation / observation of task completion). As we have a relatively high op-tempo, we also qualify members as shadows on operations for many of our unit training requirements. We have a loose capture of the types of tasks folks have completed, but there are gaps in value that could be filled with a better mechanism. We have a go/no-go system of performance quals (show me how you’d do it), so recognition of proficiency levels beyond “can do it” isn’t often captured within the formal system from unit to unit (see the sketch after this list). Capturing task performance excellence can help to inform coaches and supervisors, encouraging the development of strengths that might not have visibility every moment of every day. In high op-tempo environments where everyone has another primary job, any help we can provide to paint that picture could make a world of difference.

    – looking forward to “built-in” stream generation for things like equipment and electronic systems, giving visibility of a performance picture that cross-walks to logistics “behind the scenes” automatically. What’s the real cause of equipment failures? If someone has already solved a problem, how can we make that solution more visible to others down the line?

    – career path planning and competency model validation / linking to operations. We have a heap of competency data that simply isn’t usable because we don’t have a system to match operationalized proficiency to the work taxonomy. Can’t connect, can’t prove/improve. Can’t prove/improve, can’t employ. Can’t employ, can’t enjoy the benefits. Vicious circle.
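    To make that go/no-go point concrete, here’s a rough sketch of a statement carrying more than “can do it”, using the spec’s optional result field. Every name, address, and URI below is illustrative rather than from any real system of ours; “passed” is a published ADL verb, and the result block follows the shape the spec defines.

    ```python
    import json

    # Sketch of a task-performance statement recording more than go/no-go.
    # All identifiers are illustrative; "passed" is a published ADL verb,
    # and "result" is the spec's optional field for success, score, and
    # completion.
    statement = {
        "actor": {"name": "J. Member", "mbox": "mailto:j.member@example.mil"},
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/passed",
            "display": {"en-US": "passed"},
        },
        "object": {
            "objectType": "Activity",
            "id": "http://example.mil/tasks/pump-overhaul",
            "definition": {"name": {"en-US": "Pump overhaul (observed OJT task)"}},
        },
        "result": {
            "success": True,            # the go/no-go part
            "score": {"scaled": 0.92},  # room for proficiency beyond "can do it"
            "completion": True,
        },
    }

    print(json.dumps(statement, indent=2))
    ```

    Whether anyone records that score honestly is, of course, a human problem rather than a technical one.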

    I also think it can’t hurt supervisors and members to have a way to surface accomplishments using an authoritative, formalized, “at the source” common system. Most of our supervisors are also operators and technicians. A helpful tool won’t hurt good supervisors (they’ll use these streams to start conversations, perhaps at a shorter interval than the evaluation period); I doubt it would hurt bad supervisors either. Like many eval systems, ours suffers from marking inequity. Encouraging shorter feedback periods supported by actual performance data could improve things significantly. We won’t know until we try.

    Even in our distributed organization of delegated decision-making, we make policy, training, and resource decisions using a very large human structure. We need systems that support good decision-making when decisions need to be made. It seems like a system that improves information flow and paints the performance picture on what can seem like a mostly blank canvas with beautiful patches and random spatters might be worth a try:)

  6. Really enjoying your blog, Andrew 🙂 I confess I don’t know enough yet about the technicalities of the Tin Can API. What I am familiar with is the concept being discussed across the industry. There certainly seems to be a perception that it’s some kind of saviour or great white hope. And whilst (on the surface at least) it seems to have many redeeming qualities, I think we have to be careful about anything that appears to have all the answers. To break the obsession with measuring activities as opposed to impact on performance, we need a change in mindset, not just a change in technology.

  7. Thanks Steve – I can relate to the need for proficiency in an organisation like yours from my work with the fire service here in the UK. I’ll look out for your blog post.

    Thanks Kate – it’s the discussions suggesting it’ll be a panacea (which, as Steve says, it won’t be) that disturb me.

  8. I’m quite excited about the capacity of the Tin Can API to be used by learning professionals to acknowledge the informal learning that takes place all the time. Do I have strong concerns, though, about the data that will be gathered? Yes.

    There’s a real chance that organizations will miss the point and think that by merely capturing a substantially broader set of data they’re doing a better job of measuring learning. If the data you capture isn’t curated and interpreted properly, or if you’re capturing irrelevant information, then you’ve accomplished very little.

    As with every learning innovation (or ANY innovation, for that matter), there are people and groups who will believe they can simply turn the tool on and it’ll take care of everything on its own. That oversimplification ends with frustrated organizations sitting on a pile of new data they don’t know what to do with. I also have to imagine there will be cases where employees feel this is yet another way for their workplace to monitor them.

    These problems aside, I still feel generally optimistic about ways to acknowledge informal learning. At this point I think the best thing we can do is make sure the people around us truly understand the strengths and limitations of tools like Tin Can, so that they never make the mistake of seeing them as something that “has all the answers”, as Kate put it.

    1. I couldn’t agree more with your comment about the tool leading the practice. That idea is a huge problem in educational technology right now. A shiny new thing comes out and people/organizations design around it rather than really analyzing the technology and determining when and how it actually makes sense to use it.

      Take iPads. They can be a powerful tool for learning, especially when you train educators in how to use them effectively and what their pros and cons are. Unfortunately, we’ve all seen instances where an organization got excited, bought a ton of iPads, shoved them into a classroom, and expected magic to happen. And, of course, they end up disappointed.

      To me, Tin Can feels like yet another potentially useful innovation whose responsible use educational technology experts are going to have to advocate for actively. I foresee us also doing a great deal of adjusting people’s expectations of what it can actually accomplish and educating companies on the potential privacy issues that Tin Can raises.

  9. Great post and discussion – thanks for prompting it, Andrew. My thoughts here are with the learner and the usefulness of being able to capture and share activities that show others – managers, employers, friends, family, peers – the type of learning activities you are undertaking. Could this benefit the individual? Possibly. For example, a colleague who blogs on their specialist area of work – or their hobbies, for that matter – may currently get little or no recognition for the fact that they are taking time to share ideas, reflect on their thinking, and build a network around that. If that activity could be recognised (LI profiles, Mozilla badges, or however this might work), then I think there would be a lot of value to the individual. As for what organisations could do with tech such as Tin Can, I totally agree that the danger is fitting the tech to current values, culture, and thinking. It gets interesting when orgs put that to one side and are freer and more radical in how they use it.
