Evaluation of adaptive feedback in a smartphone-based serious game on health care providers’ knowledge gain in neonatal emergency care: A randomised experiment
24 February 2020, 12:45-14:00
Research Group: Quantitative Methods Hub
Speaker: Timothy Tuti Nganga, Department of Education, University of Oxford
Location: Department of Education, Seminar Room B
Convener: Lars-Erik Malmberg
While smartphone-based emergency care training is more affordable than traditional avenues of training, it is still in its infancy, remains poorly implemented, and current implementations tend to be invariant to the evolving learning needs of the intended users. In resource-limited settings, the use of such platforms coupled with gamified approaches remains largely unexplored and under-developed, despite scarce traditional training opportunities and persistently high neonatal mortality rates in these settings.
The primary aim of this randomised experiment was to assess the effectiveness of offering adaptive versus standard feedback through a smartphone-based learning intervention on healthcare providers’ knowledge gains when managing a simulated (gamified) medical emergency. A secondary aim was to assess the effects of learner characteristics and the spacing of learning sessions on individual learning with repeated use of the game, using individualised normalised learning gain as the secondary outcome.
Methods and analysis
The experiment was aimed at healthcare workers (physicians, nurses and clinical officers) who provide bedside neonatal care in low-income settings. Data were captured through an Android smartphone-based gamified application installed on the study participants’ personal phones. The intervention, adaptive feedback based on successful attempts at a learning task, was provided within the application to the experimental arm, while the control arm received standardised feedback. The primary endpoint was completion of the second learning session within the application. Between February 2019 and July 2019, 572 participants were enrolled, of whom 247 (43.18%) reached the primary endpoint. The primary outcome measure was the standardised relative change in learning gains between the study arms, as measured by the “Morris G” effect size. The secondary outcome was participants’ individualised and normalised learning gains.
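As a minimal sketch of the two outcome measures named above, under common definitions rather than the study’s exact specification: Morris’s pretest-posttest-control effect size (Morris, 2008, here without the small-sample bias correction) and a Hake-style individualised normalised gain. All numeric values below are hypothetical, not study data.

```python
def morris_g(pre_t: float, post_t: float, pre_c: float, post_c: float,
             sd_pre_pooled: float) -> float:
    """Standardised difference in pre-to-post change between treatment and
    control arms, scaled by the pooled pretest SD (Morris's d_ppc2 form,
    omitting the small-sample correction factor)."""
    return ((post_t - pre_t) - (post_c - pre_c)) / sd_pre_pooled


def normalised_gain(pre: float, post: float, max_score: float) -> float:
    """Individual normalised learning gain: improvement achieved as a
    fraction of the improvement that was possible."""
    return (post - pre) / (max_score - pre)


# Illustrative (hypothetical) scores on a 0-100 knowledge test:
g = morris_g(pre_t=55.0, post_t=72.0, pre_c=54.0, post_c=68.0,
             sd_pre_pooled=12.0)          # -> 0.25
ng = normalised_gain(pre=55.0, post=72.0, max_score=100.0)  # -> ~0.378
print(round(g, 3), round(ng, 3))
```

Because the normalised gain rescales each learner’s improvement by their own headroom, it is the natural per-individual outcome for the secondary analysis of repeated play.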
Findings and discussion
The effect of adaptive feedback on healthcare providers’ learning gain was g = 0.09 (95% CI: -0.31 to 0.46, p = .474). In exploratory analysis using normalised learning gains, when subject-treatment interaction and differential time effects were controlled for, this effect increased significantly to 0.644 (95% CI: 0.35 to 0.94, p < .001) with immediate repetition, a moderate learning effect, but it declined significantly by 0.28 within a week. The overall learning change from LIFE use in both arms was large and may have obscured a direct effect of feedback. There was a considerable learning gain between the first and second rounds of learning with both forms of feedback, and a small added benefit of adaptive feedback after controlling for learner differences.
I will also discuss the role played by the knowledge tracing approach used to deliver adaptive feedback, whether it worked, and what alternatives might have been better suited. Findings from this work suggest that linking the adaptive feedback provided to healthcare providers to how they space their repeat learning session(s) may yield higher learning gains. Future work might explore feedback content in more depth: in particular, whether explanatory feedback (why answers were wrong) enhances learning more than reflective feedback (hints about what the right answers might have been, or how wrong the learner’s responses were).