From the genesis of the learning labels technology, I have fielded the question: “Who verifies the accuracy of the learning labels?” This is something I considered in the early stages of development. The learning gain on learning labels is Skill Points®, a measurement calculated when a label is created, based on difficulty, intensity, and other factors. I am already working to conduct a study to build the next generation of this learning gain, Skill Points® 2.0.
Before getting into who participates in the verification process, another good question is: “How does someone begin verifying the learning taking place?” Learning labels provide a good framework for testing accuracy. Reviewers must have access to the resource so they can consume it themselves.
Simply review each skill and its level of difficulty, the methods applied, and the standards referenced on the label, and make sure each reference is accurate (like checking each endpoint, as shown in the graphic). With more general skills, like critical thinking and problem solving, look for the methods behind the skill.
A teacher who creates a learning label might have an incentive to skew Skill Points®: giving learners ‘extra credit’ for their learning. With pressure to teach to standards, a teacher might also inaccurately reference standards.
Similarly, a game creator or publisher of education resources might do the same. Learners choose, and might buy, a product based on how many Skill Points® are earned.
With the first audience, peer review is a good option. If a teacher creates a label and another teacher uses the same one (called cloning in the app), the second teacher endorses the accuracy of the label. The more teachers who use a label, the greater its acceptance among teachers.
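The clone-as-endorsement idea can be sketched in a few lines. This is a minimal illustration only; the class and method names here are hypothetical and are not the app's actual API.

```python
# Hypothetical sketch: counting clones as implicit endorsements of a label.
class Label:
    def __init__(self, creator):
        self.creator = creator
        self.cloned_by = set()  # teachers who have reused (cloned) this label

    def clone(self, teacher):
        # A clone by another teacher counts as an endorsement of accuracy;
        # the creator reusing their own label does not.
        if teacher != self.creator:
            self.cloned_by.add(teacher)

    def endorsements(self):
        return len(self.cloned_by)

label = Label(creator="teacher_a")
label.clone("teacher_b")
label.clone("teacher_c")
label.clone("teacher_a")  # creator reuse is ignored
print(label.endorsements())  # → 2
```

A real system would likely weight endorsements (for example, by how often the cloning teacher's own labels are reused), but the core signal is simply the count of independent adopters.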
Another option is to work with the providers of the standards referenced on the label. While integrating the ISTE standards, I spoke with their representatives on the phone. They immediately appreciated the ease with which standards are assigned and represented on the labels but questioned the validity of a third party portraying their standards. My suggestion was to let them participate in the process: have ISTE reviewers examine a label and the associated task or experience and verify that it is accurate.
My business proposal includes a verification process as part of the system (as referenced in a patent). (To be clear, this is not built yet, and there is no one on the team to conduct this type of review process yet.) But I see this feature as a potential source of revenue. I propose putting a stamp of approval on the label itself. (This is analogous to how the FDA puts a stamp of approval on a nutritional label. An organization like the FDA, whether a government agency or a non-profit, might participate in the process.)
I am intrigued by using blockchain in the verification process. There is already movement to use blockchain with credentials and badging in higher education. Plug learning labels into the process as an upfront display of the learning expectations; when these expectations are met, provide the credential. I also see potential for blockchain in games and VR experiences, where there is less oversight by education institutions.
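The commit-upfront, credential-later flow can be sketched without any real blockchain machinery. Assuming only that the label's expectations are hashed into a tamper-evident commitment (standing in for a ledger entry), the sketch below is illustrative; the function names and label fields are hypothetical.

```python
# Illustrative sketch: commit a label's expectations upfront, then issue a
# credential only if the label is unchanged and the expectations were met.
# No real blockchain library is used; sha256 stands in for a ledger commitment.
import hashlib
import json

def commit_label(label: dict) -> str:
    """Return a tamper-evident hash of the label's stated expectations."""
    canonical = json.dumps(label, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def issue_credential(label: dict, committed_hash: str, expectations_met: bool) -> dict:
    # Refuse to credential against a label altered after commitment.
    if commit_label(label) != committed_hash:
        raise ValueError("label changed after commitment")
    return {"label_hash": committed_hash, "credentialed": expectations_met}

# Hypothetical label content, for illustration only.
label = {"skills": {"critical thinking": 3}, "skill_points": 120}
h = commit_label(label)
cred = issue_credential(label, h, expectations_met=True)
```

The design point is that the label states the learning expectations before the experience, and the credential is bound to exactly that stated version, so neither side can quietly revise the expectations afterward.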
Finally, it is worth noting that the verification process operates at a more discrete level than something like accreditation. (Though putting together a series of labels might capture the learning taking place in a course and, subsequently, a learning program.) The process might resemble the credentialing of a badge, but a learning label is meant to represent an assignment, task, or experience. It might refer to a book, activity, game, VR experience, or a segment of one of them. I can think of only a handful of organizations that review learning resources at this level.