Authors
by Matthew Jensen Hays 1, Scott Richard Kustes 1 and Elizabeth Ligon Bjork 2
1 Amplifire, Boulder, CO 80301, USA
2 Department of Psychology, University of California, Los Angeles, Los Angeles, CA 90095, USA
* Author to whom correspondence should be addressed.
Abstract
Performance during training is a poor predictor of long-term retention. Worse yet, conditions of training that produce rapidly improving performance typically do not produce long-lasting, generalizable learning. As a result, learners and instructors alike can be misled into adopting training or educational experiences that are suboptimal for producing actual learning. Computer-based educational training platforms can counter this unfortunate tendency by providing only productive conditions of instruction, even when those conditions are unintuitive (e.g., spacing instead of massing). The use of such platforms, however, introduces a different liability: they are easy to interrupt. An assessment of this possible liability is needed given the enormous disruption to modern education brought about by COVID-19 and the subsequent widespread emergency adoption of computer-based remote instruction. The present study was therefore designed to (a) explore approaches for detecting interruptions that can be reasonably implemented by an instructor, (b) determine the frequency at which students are interrupted during a cognitive-science-based digital learning experience, and (c) establish the extent to which the pandemic and ensuing lockdowns affected students’ metacognitive ability to maintain engagement with their digital learning experiences. Outliers in interaction-time data were analyzed with methods of increasing complexity and decreasing subjectivity to identify when learners had been interrupted. Results indicated that only between 1.565% and 3.206% of online interactions showed evidence of learner interruption. Although classroom learning was inarguably disrupted by the pandemic, learning in the present evidence-based platform appeared to be immune to this disruption.
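The abstract describes the interruption-detection approach only at a high level (outlier analysis of interaction-time data). As a minimal illustrative sketch, not the authors' actual analysis, the following Python code shows one conventional robust-statistics approach to flagging unusually long interaction times: a log transform followed by a median-absolute-deviation (MAD) cutoff. The function name, the example data, and the 3.5 cutoff are assumptions for illustration, not values taken from the study.

```python
import numpy as np

def flag_interruptions(response_times_s, z_cutoff=3.5):
    """Flag unusually long interaction times as possible interruptions.

    Illustrative sketch only: log-transforms the (typically right-skewed)
    response times, then applies a robust z-score based on the median
    absolute deviation (MAD). The 3.5 cutoff is a conventional default,
    not a value drawn from the study.
    """
    times = np.asarray(response_times_s, dtype=float)
    log_times = np.log(times)  # response times are often near log-normal
    median = np.median(log_times)
    mad = np.median(np.abs(log_times - median))
    # 0.6745 rescales the MAD so it is comparable to a standard deviation
    robust_z = 0.6745 * (log_times - median) / mad
    return robust_z > z_cutoff  # True where the time is an extreme high outlier

# Hypothetical example: most answers take ~5-20 s; one 600 s gap
# suggests the learner stepped away mid-session.
times = [7.2, 12.5, 9.8, 15.1, 600.0, 11.3, 8.9]
print(flag_interruptions(times))
# -> [False False False False  True False False]
```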