Samantha-Kaye Johnston is a Research Officer at the Oxford University Centre for Educational Assessment (OUCEA).
Samantha-Kaye was formally educated in Jamaica, where she completed her Bachelor of Science in Psychology. In England, she received her Master of Arts in Education, and she then completed her Ph.D. in Psychology in Australia. Working through a cognitive psychology lens, Samantha-Kaye's expertise and interests lie at the intersection of education and psychology. She aims to link these areas with evidence-based e-learning technologies to improve teaching, learning, and assessment outcomes.
Samantha-Kaye has more than 10 years of experience in the project management sector, where she has been actively involved in education development initiatives. In 2016, as part of her Project Capability, she founded the Marlon Christie Scholarship, which enables Jamaican students with reading difficulties to attend university. As an extension of this project, she founded Reading for Humanity to elevate the science of reading, the science of learning, and the science of technology within the classroom. Her work is informed by her experience as an advocate and researcher in Jamaica, England, and Australia, primarily within the K-12 sector, as well as within non-governmental, private, and community organisations and United Nations bodies.
She has experience as a University Associate at Curtin University and as a Teaching Associate at Monash University, as part of their undergraduate and graduate psychology teaching teams. In these roles, she has taught and assessed various psychology units, including Introduction to Psychology, Developmental Psychology, Science and Professional Practice in Psychology, and Indigenous and Cross-Cultural Psychology.
During her time in the ed-tech sector, and in collaboration with UNESCO's Futures of Education initiative, she conceptualised and spearheaded Project Seat-at-the-Table (Project SAT), an international qualitative research initiative that gave primary and secondary school students the opportunity to share their input on the future of technology in their education. As an affiliate at the Berkman Klein Center for Internet and Society at Harvard University, Samantha-Kaye seeks to strengthen internet governance within online learning. In particular, she is interested in ensuring that the rights of young students are protected while they interact within the digital space, including elevating the voices of students in decision-making processes.
Above all, Samantha-Kaye believes that every child should have the same opportunity to shape their destiny, emphasising that we cannot always build the future for them, but we can build them for the future. Consequently, her goal is to ensure that teachers implement evidence-based pedagogical approaches that will strengthen 21st-century skills, including critical thinking and creativity, in all students.
By Dr Rebecca Eynon (Associate Professor, jointly appointed between the Department of Education and the Oxford Internet Institute) & Professor Jo-Anne Baird (Director of the Department of Education)
Now that the infamous Ofqual algorithm for deciding the high-stakes exam results of hundreds of thousands of students has been resoundingly rejected, the focus turns to the importance of investigating what went wrong. Indeed, the Office for Statistics Regulation has already committed to a review, within well-specified terms, of the models used for exam adjustment, and other reviews are likely to follow shortly.
A central focus from now, as students, their families, educational institutions and workplaces try to work out next steps, is to interrogate the unspoken and implicit values that guided the creation, use and implementation of this particular statistical model.
As part of the avalanche of critique aimed at Ofqual and the government, the question of values comes into play. Why, many have asked, was Ofqual tasked, as it is every year, with avoiding grade inflation as its overarching objective? Checks were made on the inequalities in the model, and they were consistent with the inequalities seen in examinations at a national level. This, though, raises the question of why these inequalities are accepted in a normal year.
These and other important arguments raised over the past week or so highlight questions about values. Specifically, they raise the fundamental question of why, aside from debates in academia and some parts of the press, we have stopped discussing the purposes of education. Instead, a meritocratic view of education, promoted since the 1980s by governments on both the right and the left of the spectrum, has become a given. In place of discussions about values, there has been an ever-increasing focus on the collection and use of data to hold schools accountable for 'delivering' an efficient and effective education, to measure students' 'worth' in ways that can easily be traded in the economy, and to water down ideas of social justice and draw attention away from wider inequalities in society.
Once debates about values are removed from our education and assessment systems, we are left with situations like the one we face now. The focus on creating a model that makes the data look like past years', with little debate over whether the aims should have been different this year, is a central example of this. Given the significant (and unequal) challenges young people have faced during this year, should we not, as a society, have wanted to reduce inequalities in any way possible?
The question of values also carries through into other discussions of the datafication of education, where the collection and analysis of digital trace data — that is, data collected from the technologies that young people engage with for learning and education — is growing exponentially. Yet unlike other areas of the public sector, such as health and policing, schools rarely feature centrally in policy discussions and reports on algorithmic fairness. The question is why. Extensive data use in education has highly significant ethical and social implications that shape young people's futures. These include issues of privacy, informed consent, and data ownership (particularly given the significant role of the commercial sector); the validity and integrity of the models produced; the nature of the decisions promoted by such systems; and questions of governance and accountability. This relative lack of policy interest in the implications of datafication for schooling exists, we suggest, because governments take for granted the need for data of all kinds in education to support their meritocratic aims, and indeed see it as a central way to make education 'fair'.
The Ofqual algorithm has brought to our attention the ethics of the datafication of education and the risk it poses of compounding social inequalities. Every year there is not only injustice in the unequal starting points and unequal opportunities young people have within our schools and in their everyday lives, but also injustice in the pretence that the extensive use of data is somehow a neutral process.
The important reflections and investigations that should now take place over the coming weeks and months need to include a review that explicitly places values and ethical frameworks front and centre, and that encourages a focus on the purposes of education, particularly in (post-)pandemic times.
Kit Double is a research associate at the Oxford University Centre for Educational Assessment.
He completed his PhD in cognitive psychology at the University of Sydney in 2018, where he also completed his B. Psychology (Honours) in 2014. Kit also holds a B. Business from the University of Technology, Sydney. He has worked as a sessional lecturer and tutor at the University of Sydney, on the development of intelligence tests with Psychological Assessments Australia, and on the assessment of organisational psychology programs with several large industry partners.
Kit has extensive experience working with experimental and individual differences research in both psychology and education. Kit has also conducted several meta-analyses and is interested in quantitative research synthesis. Kit’s previous research has looked at aspects of metacognition including self-assessment and self-efficacy as well as classroom and computerised interventions for working memory impairment and dyslexia.
Kit is currently researching the development of self-assessment and its effect in the classroom as well as the role that self and peer assessment play in self-regulated learning. He is particularly interested in the personal and environmental characteristics that lead to effective self-assessment.
- Australian Postgraduate Award
- Campbell Perry Travel Award
- Psychfest Speaking Prize
- Website: kitdouble.com
- Twitter: @kitdouble
- ResearchGate: https://www.researchgate.net/profile/Kit_Double2
Yasmine El Masri is a Research Fellow at OUCEA and a Hulme Junior Research Fellow in Educational Assessment at Brasenose College. She has been appointed by Ofqual as an External Assessment Specialist and by Qualification Wales as an Assessment Expert Advisor.
Yasmine completed her DPhil in Education at OUCEA in 2015. Her doctoral thesis examined the impact of language on the difficulty of PISA science tests across the UK, France, and Jordan, using psychometric and statistical techniques including Rasch modelling and differential item functioning (DIF). In 2014, Yasmine received the Kathleen Tattersall New Researcher Award from the Association for Educational Assessment – Europe (AEA-Europe).
Before coming to Oxford, Yasmine was a science teacher in secondary schools in Beirut and Abu Dhabi. In addition to her DPhil degree from Oxford, Yasmine holds a Master of Arts in Science Education, a Teaching Diploma for teaching science in secondary schools and a Bachelor of Science in Biology from the American University of Beirut (AUB).
Yasmine has led various externally funded projects, including a one-year ESRC GCRF Fellowship in 2017, during which she collaborated with a local NGO in Lebanon to produce open-source interactive science tasks in multiple languages for underprivileged students in the country, including Syrian refugees. She is currently leading a study within Project Calibrate, a three-year project funded by the Wellcome Trust and the Gatsby Foundation that aims to enhance summative assessments of practical science in England. She is also co-investigator and project manager of a project funded by the International Baccalaureate Organization focused on critical thinking in the Diploma Programme.
Research interests: item difficulty and demands, language in assessment, science assessments, international large-scale assessments, critical thinking, and comparability of assessments across cultures.