Activity Streams, Sensors and the Experience API

Some time ago the ADL introduced the Tin Can API to the world and brought the notion of recording user interactions with “activity streams” as a way to capture and measure learning data. Tin Can, since renamed the “Experience API”, is intended to replace SCORM and address the issues that have been identified as weaknesses in the existing SCORM API. By many accounts this approach succeeds very well; by others, it falls short and lacks the definition needed to be a solid replacement for SCORM.
From my perspective, the Experience API makes a great deal of sense. The weakness it faces comes from both a lack of good use cases and examples and a gross misunderstanding of what the intention of the API is. Use cases and examples will certainly come as the API matures; one can already look to examples such as Tappestry to see the possibilities. However, let’s put aside examples for a moment and focus on the latter part of the issue: understanding what the API is intended to do.

To my recollection, at some point during the introduction and early marketing phases of the Experience API, someone explained that an activity stream was what you might find on Facebook. This is completely accurate: a Facebook feed is a great visualization of what an activity stream is comprised of. The problem with this definition, however, stems not from the actual description but from what more than a few folks in the audience took away from that statement. Somewhere in the translation the Experience API became a way to track learning data on Facebook, Twitter, etc. This idea is far from what the API is meant to do and, because of this perception, has led to some measure of confusion as to what the API will be good for. On top of this is the misguided notion that capturing all of this data will “automagically” yield a wonderful collection of data from which we can derive all sorts of fantastic information about the performance of our learners and programs.
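It may help to make the stream’s grammar concrete. Every entry in an Experience API activity stream is a statement of the form “actor verb object” (for example, “Jane experienced ‘Introduction to the Experience API’”), and that structure is essentially all the specification mandates. Here is a minimal sketch in Python; the name, mailbox, verb IRI, and activity ID are illustrative placeholders, not values prescribed by the spec:

```python
# A minimal Experience API statement: "actor verb-ed object".
# The mbox, verb IRI, and activity ID are illustrative values;
# only the actor/verb/object structure is mandated by the specification.
statement = {
    "actor": {
        "name": "Jane Learner",
        "mbox": "mailto:jane.learner@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/experienced",
        "display": {"en-US": "experienced"},
    },
    "object": {
        "id": "http://example.com/activities/intro-to-xapi",
        "definition": {"name": {"en-US": "Introduction to the Experience API"}},
    },
}
```

Nothing about that shape is tied to social media; Facebook is simply a familiar rendering of a long list of such entries.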

One of the criticisms that I often hear about the Experience API centers around the collection of “useless” data. When we look at the need for and use of data collection for learning metrics, we can see the benefits fairly quickly. What is often ignored, however, is the design thinking that goes into the collection so that what is collected can be understood and made sense of. The Experience API does a fantastic job of giving developers a good framework to build data collection tools, but it does not, and has never claimed to, offer a plan to define WHAT to collect or how to analyze it. This is an important distinction to understand with this specification.
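To make that distinction concrete, here is roughly what the “framework” part looks like in practice: a short Python sketch that posts a statement like the one above to a Learning Record Store over the API’s REST binding. The LRS URL and credentials are placeholders, and the version header should match whatever release of the specification your LRS implements.

```python
import requests

# Placeholder LRS endpoint and credentials; substitute your own.
LRS_URL = "https://lrs.example.com/xapi/statements"
AUTH = ("lrs_username", "lrs_password")  # HTTP Basic auth

def send_statement(statement):
    """POST one xAPI statement to the LRS and return the ID it assigns."""
    response = requests.post(
        LRS_URL,
        json=statement,
        auth=AUTH,
        headers={"X-Experience-API-Version": "1.0.0"},
    )
    response.raise_for_status()
    # A successful POST returns a JSON array of statement IDs.
    return response.json()[0]

statement_id = send_statement(statement)  # the statement built earlier
```

Notice that nothing in this code decides what is worth recording or how to interpret it later; that design work sits entirely with the people building the collection tool, which is exactly the point.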

The concept of the activity stream is one that I find quite intriguing. Once this concept was clear, it made sense to support the recent Aviation Industry CBT Committee (AICC) adoption of the Experience API as the foundation for CMI5 (note: I have served on the AICC board for over five years, and the decision to support this was not one that I took lightly). Looking past the notion of social media, the way the specification proposes to collect data as a stream makes fantastic sense for applications such as games and simulations – the latter of which the AICC has a great interest in. Considered in this light, the concept of an activity stream makes a lot more sense and, perhaps, illustrates what the ADL was originally stating before members of the audience took the Facebook metaphor literally.

This definition of activity streams also brings us back to plausible examples. Just recently (as seen in a post at Hybrid) I worked with Ron de las Alas (@delasare) to complete a prototype of a sensor-based system that tracks activity via Experience API calls. The initial prototype is really rather simple: it records activity when an RFID chip is scanned. On the surface it is nothing special (it will be modified to post potentiometer readings entered by a user once my potentiometer shows up in the mail). What makes it unique and quite interesting, however, are the possibilities. This is a small, network-connected microprocessor device reporting activity to an LRS. It’s not a mobile phone, it’s not a laptop… it’s a device. While the reporting and interaction are very simple for now, the same principle could be applied to, for example, a full motion simulator. In that case the activity stream – all of the actions that the crew is taking – can be collected for some very interesting analysis of flight crew performance. It is this scenario that I believe shows the full potential of what the Experience API could accomplish. It is also a clear demonstration of how we can create meaningful learning activities that go far beyond the mundane concept of the “next button” and track them for analysis (this is not a new concept either; the FAA has been doing this for years with AQP and FOQA).
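I won’t reproduce the prototype’s actual firmware here, but the device-side logic amounts to a loop like the following sketch. The reader function, verb, and activity IRIs are hypothetical stand-ins (the real prototype differs in its details), and each scan becomes one statement in the stream, sent with the same `send_statement()` helper sketched earlier.

```python
import time

def read_rfid_tag():
    """Placeholder for the hardware poll; the real prototype reads an RFID scanner."""
    return None  # would return a tag ID string when a chip is present

def tag_scan_statement(tag_id):
    """Build one stream entry: the tag holder interacted with the scan station."""
    return {
        # Identify the actor by an account tied to the scanned tag (illustrative).
        "actor": {"account": {"homePage": "http://example.com/badges",
                              "name": tag_id}},
        "verb": {"id": "http://adlnet.gov/expapi/verbs/interacted",
                 "display": {"en-US": "interacted"}},
        "object": {"id": "http://example.com/activities/rfid-station-1",
                   "definition": {"name": {"en-US": "RFID scan station"}}},
    }

while True:
    tag = read_rfid_tag()
    if tag:
        send_statement(tag_scan_statement(tag))  # helper from the earlier sketch
    time.sleep(0.5)  # poll the reader twice per second
```

Substitute a simulator’s data bus for the RFID reader and the same loop captures every crew action as a statement; that is the flight crew scenario described above.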

In the end what we have is a new toolset that allows for data capture beyond the traditional eLearning media that we have been constrained to for the past few decades. More examples of how this works are clearly needed, and I can only assume they will come as more people start to work with the specification (assuming that people actually take the time to code innovation rather than use rapid development tools so that they can continue to collect the same quasi-meaningless data that has been collected to this point). Then we can get past the notion that the Experience API is a learning specification for Facebook and see it for the potential that it has.
